IGF 2019 WS #264
AI and Human Rights: Bridging the Gaps to Real Impact in the

Subtheme

Organizer 1: Jessica Fjeld, Berkman Klein Center for Internet & Society, Harvard University
Organizer 2: Ryan Budish, Berkman Klein Center for Internet & Society, Harvard University
Organizer 3: Sandra Cortesi, Berkman Klein Center for Internet & Society, Harvard University

Speaker 1: Preetam Maloor, Intergovernmental Organization, Intergovernmental Organization
Speaker 2: Effy Vayena, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Rasha Abdul-Rahim, Civil Society, Western European and Others Group (WEOG)

Moderator

Jessica Fjeld, Civil Society, Western European and Others Group (WEOG)

Online Moderator

Ryan Budish, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Sandra Cortesi, Civil Society, Western European and Others Group (WEOG)

Format

Round Table - Circle - 60 Min

Policy Question(s)

Adapting principles to reality: How do governments and public and private sector entities translate high-level principles into operational priorities and decisions? How do they address situations in which important principles are in competition with each other?

Measuring human rights compliance of AI: What lessons can be learned from other sectors (such as the extractive industries) that have a long history of human rights violations and -- more recently -- efforts to measure and reduce those violations? How can tools like Human Rights Impact Assessments be adapted to AI? What specific metrics need to be tracked? How can AI risks be better measured and anticipated?

Legal frameworks: What is the relationship between human rights principles and law? What regulatory frameworks should be put in place to better mitigate the human rights impacts of AI? What do entities developing and deploying AI technology need to understand about these frameworks?

SDGs

GOAL 3: Good Health and Well-Being
GOAL 4: Quality Education
GOAL 8: Decent Work and Economic Growth
GOAL 9: Industry, Innovation and Infrastructure
GOAL 10: Reduced Inequalities
GOAL 12: Responsible Consumption and Production
GOAL 16: Peace, Justice and Strong Institutions

Description: The rapid development and deployment of artificial intelligence has many stakeholders concerned about both the ethical and the human rights impacts of these technologies. Unfortunately, the conversation to date has been largely fractured, with little overlap between the “ethics” and “human rights” frames and limited engagement between different groups of key stakeholders. These silos have constrained the exchange of key information and insights, the ability to build necessary coalitions, and the effectiveness of proposed solutions.

The ethical frame, which is largely adopted by technology companies and by public sector organizations with a technology focus, is paradoxically perceived by non-specialists to have a low barrier to entry even though it is undergirded by decades of scholarly work. This frame has begun to generate sets of practical ethical principles for organizations implementing AI. At the same time, public sector human rights professionals have increasingly applied their own framework to the governance of AI, with an explosion of publications on the topic beginning in 2018. That frame is premised on the idea that the international human rights regime provides a strong basis for assessing the impacts of, and providing accountability for, new technologies like AI. In both frames, existing proposals have largely operated at the theoretical level and often lack a clear sense of how human rights or ethics principles could be operationalized as real-world AI systems are developed and implemented.

The challenge is significant, but not insurmountable. In fact, outside of the AI context, there are numerous examples of ethical and human rights frameworks being applied, often in concert with each other, to improve the impacts of innovation. These examples span areas as diverse as the extractive industries and freedom of expression online.

This Roundtable will place experts and leaders from across the ethical and human rights frames into direct discussion, collaboratively working toward operationalizing existing high-level principles. The Roundtable will eschew panelist presentations in favor of a moderator-led group exploration of key themes such as accountability, bias, privacy, health and safety, and impacts on workers whose jobs AI may change or replace. Working theme by theme, we’ll bring the panelists’ broad array of perspectives — companies building and using AI, civil society, academia, government initiatives, and more — into dialogue with one another. Within each theme, the panel will explore relevant lessons from past initiatives, uncover areas of substantive overlap even where language may diverge, highlight broadly applicable insights, and articulate concrete possibilities for productive interaction. Ample time will be accorded for questions from and engagement with the audience.

Expected Outcomes: This session will reflect on the existing proliferation of ethical and human rights approaches to addressing, at a high level, the challenges posed by AI, and develop a collaborative process for operationalizing them. The goal is not to identify one dominant set of principles, but to identify areas of overlap and mutuality of mission where it may be easier to begin focusing on operational next steps than to try to reach agreement on a perfectly worded universal document. By articulating key points of overlap and translating how similar concepts are expressed differently, we hope to lower the barriers to future collaboration and progress. To this end, the Rapporteur will collate from the discussion a list of the key themes present in both the human rights and ethics conversations about AI, as well as any operational next steps.

Relevance to Theme: The Data Governance track seeks to ensure that the benefits of emerging technologies like AI contribute to inclusive economic development while protecting the rights of people. The tremendous promise of AI for both the public and private sectors comes hand-in-hand with significant challenges to the exercise of numerous human rights. This proposed session will seek to bridge the gap between existing high-level (but often not readily actionable) human rights principles for AI and existing business practices. The discussion will bring together experts from the domains of AI, human rights advocacy, law, and policy to discuss how to operationalize the exercise of human rights as AI solutions are implemented around the world.

Relevance to Internet Governance: AI is an increasingly important part of the Internet. Issues of e-commerce, digital citizenship, freedom of expression, harmful speech online, and much more are increasingly entangled with AI technologies. Human rights and Internet governance have been core to the IGF for years, but it is now increasingly important to bring together several divergent conversations, as Internet companies increasingly release their own AI principles, often separate from similar efforts taking place in the public sector.

Online Participation

The online moderator will bring key points from the online discussion into the room. Additionally, as the group moves theme by theme, there will be time set aside within each theme to bring in online discussion points.