IGF 2023 WS #296 Addressing Machine Bias to Foster Sustainable Development

Subtheme

Sustainability & Environment
Digital Technologies to Achieve Sustainable Development Goals

Organizer 1: James Amattey, Norenson IT
Organizer 2: Tess Buckley, EthicsGrade
Organizer 3: Marjorie Mudi Gege, Emerge Africa

Speaker 1: Abigail Oppong, Civil Society, African Group
Speaker 2: Daniel Jr Dasig, Technical Community, Asia-Pacific Group
Speaker 3: Tshifhiwa Joshua Maumela, Technical Community, African Group
Speaker 4: Denise Leal, Private Sector, Latin American and Caribbean Group (GRULAC)

Moderator

Tess Buckley, Private Sector, Western European and Others Group (WEOG)

Online Moderator

Marjorie Mudi Gege, Civil Society, African Group

Rapporteur

James Amattey, Technical Community, African Group

Format

Other - 60 Min
Format description: A two-part workshop that begins with a keynote and panel discussion, continues with a breakout session, and concludes with a report-back to the panel.

Policy Question(s)

1. How can we ensure transparency and accountability in algorithmic decision-making systems to address and mitigate machine bias? What policy frameworks or guidelines can be implemented to promote fairness and equitable outcomes?

2. What are the best practices for collecting, curating, and utilizing diverse and representative datasets to minimize bias and ensure equitable decision-making in sustainable development systems?

3. What measures can be taken to enhance public awareness, understanding, and participation in algorithmic decision-making processes? How can policies and initiatives promote public engagement, access to information, and mechanisms for challenging and rectifying biased decisions in sustainable development systems?

What will participants gain from attending this session? The workshop, titled "Algorithmic Accountability: Addressing Machine Bias to Foster Equitable Decision-Making in Sustainable Development Systems," aims to highlight the critical issue of algorithmic bias in decision-making systems and its potential to cause harm. It will explore ways to mitigate these risks and examine their implications for a sustainable future. By examining AI and emerging technologies that restrict human rights and freedoms, we hope to showcase the promise of data governance in building user trust.

Description:

Humans hold both unconscious and conscious biases; these biases are often embedded in machines through the use of biased data and non-diverse teams.

Machine bias refers to the unfair outcomes produced by algorithms and machine learning systems, outcomes that undermine fairness, human rights, and user trust. Mitigating the ethical risks of algorithmic and machine bias is crucial and requires measures such as representative training data, algorithmic transparency, and, as this workshop will discuss, rigorous evaluation, auditing, and ongoing monitoring to promote fair outcomes.

Expected Outcomes

The collaborative environment fostered in the session, with participants from diverse backgrounds (researchers, policymakers, technologists, and the private sector), will allow for the sharing of insights and best practices, as well as innovative approaches to addressing algorithmic bias.

The tangible outcome of the workshop is a report capturing the workshop dialogue, making it accessible to those unable to attend and centralising key findings across the thematic areas discussed.

This workshop will pave the way for more equitable decision-making systems that contribute to the SDGs. We hope to encourage participants to continue exploring the interplay between emerging technologies, their impact on human rights and freedoms, and the promise of data governance for increased trust.

Hybrid Format: We will begin with a survey to gauge participants' understanding of machine bias. Both in-person and remote audiences will be able to speak and ask questions.