IGF 2019 OF #13 Human Rights & AI Wrongs: Who Is Responsible?


The impact of artificial intelligence (AI) on human rights and the viability of our democratic processes became starkly visible during the Cambridge Analytica scandal and has been increasingly debated since.
Countries committed to protecting human rights must ensure that those who benefit from developing and deploying digital technologies and AI are effectively held responsible for their risks and consequences. Effective and legitimate mechanisms are needed to prevent violations of human rights and to promote an enabling socio-economic environment in which human rights and the rule of law are anchored. Only legitimate mechanisms ensure that we can properly, sustainably and collectively reap the many benefits of AI. This open forum addresses the following questions:
Who bears responsibility for the adverse consequences of advanced digital technologies, such as AI? How can we address the ‘control problem’ that flows from the capacity of AI-driven systems to operate more or less autonomously from their creators? What consequences stem from the fact that most data-processing infrastructures are in private hands? What are the effects of the increasing dependence of public services on a few very large private actors?
The open forum will discuss the respective obligations of states and the responsibilities of private actors regarding the protection and promotion of human rights and fundamental freedoms in the context of AI and machine learning systems. It will also explore a range of different ‘responsibility models’ that could be adopted to govern the allocation of responsibility for different kinds of adverse impacts arising from the operation of AI systems.
As background resources, the debate will build on the Council of Europe study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, and on the draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems, available at: https://www.coe.int/en/web/freedom-expression/msi-aut


Council of Europe (CoE)
EU Agency for Fundamental Rights (FRA)


- Moderator: senior Council of Europe representative
- Seda Gürses, Associate Professor, TU Delft
- David Reichel, FRA
- Moustapha Cisse, Google AI, Ghana (tbc)
- Eileen Donahoe, Executive Director, Global Digital Policy Incubator, Stanford University (tbc)

Online Moderator: Peter Kimpian


GOAL 16: Peace, Justice and Strong Institutions