IGF 2018 WS #350 A framework of best practices for algorithmic accountability

Organizer 1: Civil Society, Western European and Others Group (WEOG)
Organizer 2: Civil Society, Asia-Pacific Group

Algorithms play an essential role in shaping our lives online, including determining what content we can see and post, as well as how we are targeted by advertisers. Researchers and advocates have responded to concerns over flawed and biased algorithmic systems putting users’ rights at risk by calling on internet companies to show greater transparency and accountability regarding their use of algorithms. What is currently missing from this debate on companies’ deployment of algorithms and the threats they can pose to human rights is a framework of best practices that could help foster algorithmic accountability. Likewise, there is an urgent need for a shared understanding among the digital rights community regarding the kinds of tools and strategies that different stakeholders can use to evaluate the extent to which companies’ use of algorithms conforms with human rights principles. The proposed session will bring together civil society, government, and industry representatives to address these urgent concerns and brainstorm ways stakeholders can come together to create benchmarks that set standards and best practices for algorithmic accountability.

Format: 

Round Table - 60 Min

Interventions: 

Diverse perspectives representing different stakeholders will allow for an in-depth discussion of various interventions that could help establish best practices on algorithmic accountability that put human rights center stage.

Diversity: 

Developing assessment tools and best practices for algorithmic accountability requires the expertise of a diverse group of stakeholders. The proposed session will bring together diverse voices from different stakeholder groups to allow for unique insights. In addition, the roundtable will include diverse perspectives with regard to both geographic region (with experts representing Africa, the Asia-Pacific region, and Western Europe) and gender. While the current list of confirmed speakers includes representatives from civil society and industry, we have also reached out to government representatives in preparation for this session proposal, and hope to confirm a speaker representing government perspectives upon acceptance of this session.

While algorithms can be used to improve and even save lives, such as in humanitarian and medical contexts, they can also pose significant threats to human rights, including freedom of expression and privacy. Despite serious concerns about potential violations of human rights, internet companies lack transparency about how they use algorithms to shape information flows online, as well as about what these tools collect and share about us. There is a growing call among advocates and researchers for algorithmic accountability, but there is less consensus about the kinds of strategies and tools we should employ to assess companies’ accountability and transparency with regard to their algorithms. Moreover, there is a need for a framework of best practices that can help guide companies in creating and deploying algorithms that are more protective of human rights.

The proposed panel brings together stakeholders from civil society, government, and industry to discuss how companies can be held accountable for their use of algorithms and the possible risks to users’ human rights that these tools can pose. The main objective of this session is to address ways in which companies can ensure that algorithms are designed and deployed in a way that sustains human rights, economic fairness, and accountable governance. We will explore this objective by addressing three overarching questions: (I) What are some of the human rights risks that can result from internet platforms’ algorithms? (II) What are potential strategies and tools that advocates, researchers, regulators, and companies themselves can deploy to assess companies’ accountability and transparency regarding their use of algorithms? (III) How can civil society actors, advocates, and industry representatives work together to address these human rights risks and establish a framework of best practices to hold internet companies accountable? Ideas for best practices include industry commitments to human rights impact assessments that evaluate the privacy- and freedom of expression-related risks posed by algorithms, as well as detailed disclosures in terms of service and user agreements that inform users of these risks.

Discussion Facilitation: 

During the first 15 minutes of the session, the moderator will introduce the topic and experts at the table. This will be followed by a discussion during which all attendees are invited to provide input on best practices and assessment tools that could help foster algorithmic accountability.

Online Participation: 

Online participants will be able to ask questions and provide feedback via the livestream (if a webcast is available), as well as by using the Twitter hashtag. The onsite and online moderators will coordinate to ensure there is ample opportunity for remote participation.

Contact Information

United Nations
Secretariat of the Internet Governance Forum (IGF)

Villa Le Bocage
Palais des Nations,
CH-1211 Geneva 10
Switzerland

igf [at] un [dot] org
+41 (0) 229 173 678