IGF 2023 WS #140 Marginalized voices in AI human rights impact assessments

Subtheme

Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning

Organizer 1: Marlena Wisniak, European Center for Not-for-Profit Law (ECNL)
Organizer 2: Vanja Skoric

Speaker 1: John Nelvin Lucero, Civil Society, Asia-Pacific Group
Speaker 2: Yunwei Aaryn, Government, Western European and Others Group (WEOG)
Speaker 3: Badalich Savannah, Private Sector, Western European and Others Group (WEOG)
Speaker 4: Lindsey Andersen, Civil Society, Western European and Others Group (WEOG)

Moderator

Marlena Wisniak, Civil Society, Eastern European Group

Online Moderator

Vanja Skoric, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Marlena Wisniak, Civil Society, Eastern European Group

Format

Round Table - 60 Min

Policy Question(s)

A. How do human rights impact assessments for AI product design and use fit within the broader human rights due diligence process, and what minimal criteria are needed for an impactful process?
B. How should governments regulate and support human rights impact assessments for AI, as consistent with the UN Guiding Principles on Business and Human Rights and emerging regulation?
C. When assessing the human rights impacts of AI systems, how can AI developers and deployers ensure that stakeholder engagement is fully inclusive, particularly of marginalized groups, including those in the Global South?

What will participants gain from attending this session? This workshop aims to strengthen the understanding of policy makers, academics, civil society, and companies about the emerging fields of AI impact assessments and meaningful stakeholder engagement, based on ECNL’s research and practical work. Participants will have the opportunity to learn about human rights impact assessments for AI systems and to reflect on Global South inclusion, capacity building, and resourcing needs for affected groups and communities. The session will also provide an opportunity for participants to critique existing approaches to impact assessments and explore how such assessments could be conducted and regulated for AI systems.

ECNL has been piloting the framework with a social media company, a public entity (a European city), and a UN agency, each on a specific product. We aim to share our learnings from previous consultations and initial findings from the ongoing pilots. Feedback from these pilots will then inform our framework and spearhead future collaboration.

Description:

Human rights impact assessments of AI systems are an essential part of identifying, assessing, and remedying risks to human rights and civic space resulting from the development and use of AI systems. Recent and ongoing regulatory efforts in Europe, such as the EU AI Act and the EU Digital Services Act, include obligations to conduct fundamental rights impact and/or risk assessments centered on algorithmic systems. This presents a pivotal opportunity for the international community to establish an effective and rights-respecting process, ensuring that emerging technologies such as AI are safe and that AI actors take effective risk mitigation measures globally. Fundamental questions about what makes the process meaningful remain, for instance regarding stakeholder engagement, methodology, scope, independence, public disclosure, and remedy.

This interactive workshop will provide an introduction to risk and impact assessments for AI-driven platforms. Centering meaningful stakeholder engagement as a key component, participants will discuss how best to include civil society and affected communities from around the world, especially from the Global South, in the process. The workshop will draw on ECNL’s framework on human rights impact assessments and stakeholder engagement, which collaborators see as a valuable tool for including civil society (especially rightsholders) not only in human rights impact assessments but also in the broader human rights due diligence process. We also consider the inequality of opportunities within civil society itself, focusing on historically and institutionally marginalized groups.

The collaborative exercise will include a short case study assessing the impacts of AI systems on human rights, with emphasis on rights to freedom of expression, assembly, and association. Participants will explore how impact assessments could be conducted in practice in such a context, with meaningful participation from local and regional civil society and communities.

Expected Outcomes

Participants will strengthen their understanding of AI impact assessments and of how to meaningfully engage external stakeholders, including civil society and affected communities, in the process. As this will be an interactive workshop, they will have ample opportunity to provide input into ECNL’s framework for meaningful engagement, which will directly inform its content as well as future pilots.

ECNL will summarize the findings from the workshop and publish them, making them available to a broader public. Dissemination of the document will target both AI developers and deployers conducting impact assessments and civil society organizations that regularly engage in such processes.

ECNL will also share the next stages of its piloting process with the city of Amsterdam and with a social media platform, and will invite participants to collaborate in any future multistakeholder consultations with government, private sector, civil society, and academic representatives.

Hybrid Format: The session will be structured in three parts. First, the invited speakers will give a brief background on human rights impact assessments and stakeholder engagement for AI systems, sharing key challenges and opportunities, especially in the Global South. Second, participants will be invited to share their thoughts and reflections through an open (but guided) conversation and a case study. Open discussion will be available to both remote and in-person attendees. Third, the organizer will provide a high-level overview of what was discussed, as well as open questions and ideas for future work, based on the group discussion.

The in-person moderator will be responsible for facilitating conversations among participants in the room. Remote participants will be able to contribute via chat or orally. The onsite moderator will work closely with the online moderator, who will facilitate virtual breakout groups.