IGF 2019 OF #20 Algorithmic Impact Assessments - a key to transparency?


The Open Forum will introduce the concept of Algorithmic Impact Assessments (AIAs) and engage participants, via concrete cases of AI use by public institutions, in "designing" such Assessments in terms of procedures, criteria, and impact indicators.
The underlying assumption of the Open Forum session is that Algorithmic Impact Assessments should be compulsory before public administrations implement AI-driven technological solutions. Their introduction will increase algorithmic transparency because every affected party will be engaged in the process of their creation from the very beginning. The parties (in particular users/citizens) would know what the government wants to achieve, how it will measure the results, which groups will be impacted, what risks may occur, and how those risks can be prevented. AIAs, if carried out responsibly, should also provide grounds for refusing the implementation of algorithms when the risks are likely to outweigh the benefits.
Policy questions: What should Algorithmic Impact Assessments look like? Which institutions should be entrusted with verifying them? What questions should be asked before creating and using algorithmic systems, and how should their impact be measured? How can stakeholder involvement in the process be ensured?


Council of Europe
European Commission against Racism and Intolerance; ePaństwo Foundation (Poland), working on government transparency and data; Amnesty International


Krzysztof Izdebski, Policy Director of ePaństwo Foundation (Poland)
Merel Koning, Senior Policy Officer Technology and Human Rights, Amnesty International
Prof. Frederik Zuiderveen Borgesius, Professor of Law, Radboud University, NL

Online Moderator: Sandrine Marroleau


GOAL 9: Industry, Innovation and Infrastructure
GOAL 16: Peace, Justice and Strong Institutions