IGF 2019 WS #103
How journalists can hold algorithms to account

Organizer 1: Nicolas Kayser-Bril, Deutsche Welle Akademie
Organizer 2: Michael J. Oghia

Speaker 1: Ansgar Koene, Civil Society, Western European and Others Group (WEOG)
Speaker 2: Vidushi Marda, Civil Society, Asia-Pacific Group
Speaker 3: Judith Duportail, Civil Society, Western European and Others Group (WEOG)
Speaker 4: Jillian York, Civil Society, Western European and Others Group (WEOG)

Moderator / Online Moderator

Nicolas Kayser-Bril, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Nicolas Kayser-Bril, Civil Society, Western European and Others Group (WEOG)

Format

Other - 90 Min
Format description: The workshop is a hands-on simulation in which participants play the role of journalists investigating an algorithm.

It requires a room with chairs that can be moved (to create groups) and no tables (the workshop can be adapted to a different setting).

It is designed for a maximum of 40 participants.

Policy Question(s)

How can journalists and the media contribute to internet governance through the investigation of algorithms?

What policy-level issues need to be addressed to enable investigations of algorithms?

SDGs

GOAL 9: Industry, Innovation and Infrastructure
GOAL 16: Peace, Justice and Strong Institutions

Description: ## Introduction to the topic of algorithm accountability (5 minutes)

## Review of three examples and how they were investigated (15 minutes):

COMPAS (judiciary, United States, investigated by ProPublica), Schufa (credit reference, Germany, investigated by Der Spiegel, among others) and Sesame Credit (credit reference, China, investigated by scholars).
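To give a flavor of the method behind such investigations: ProPublica's COMPAS analysis hinged on comparing error rates between demographic groups, showing that defendants who did not reoffend were flagged as high risk at very different rates. Below is a minimal, hypothetical sketch of that kind of computation; the file name `risk_scores.csv` and its column names are invented for illustration and do not reflect ProPublica's actual data schema.

```python
# Hypothetical sketch: compare false positive rates across groups,
# in the spirit of ProPublica's COMPAS analysis. The input file and
# column names are illustrative assumptions, not real data.
import csv
from collections import defaultdict

def false_positive_rates(rows, group_key="race"):
    """FPR per group: the share of people who did NOT reoffend
    but were nonetheless labelled high risk by the algorithm."""
    flagged = defaultdict(int)    # non-reoffenders labelled high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for row in rows:
        if row["reoffended"] == "0":      # ground truth: no recidivism
            negatives[row[group_key]] += 1
            if row["high_risk"] == "1":   # the algorithm's prediction
                flagged[row[group_key]] += 1
    return {g: flagged[g] / n for g, n in negatives.items() if n}

with open("risk_scores.csv", newline="") as f:
    rates = false_positive_rates(list(csv.DictReader(f)))

for group, fpr in sorted(rates.items()):
    print(f"{group}: {fpr:.1%} of non-reoffenders flagged high risk")
```

A large gap between groups in this metric is exactly the kind of finding a newsroom would then verify and contextualize before publication.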

## Introduction of the simulation (15 minutes):

Workshop moderators will introduce an imaginary algorithm to the participants. The algorithm under scrutiny will be a matching algorithm similar to those running in real-world social search apps (including matchmaking apps such as Tinder and OkCupid). In the United States, two in five heterosexual couples and three in five homosexual couples met online (Rosenfeld et al., 2019). Matching algorithms can therefore reinforce exogamic or endogamic practices at a very large scale, inducing or altering caste-like structures throughout society, not to mention the various stereotypes such an algorithm can reinforce (e.g. if it favors people of a certain type who behave in a certain way).

Participants will have to answer a series of questions, such as:
- Why is the issue of public interest?
- What can happen at the personal level if the algorithm is biased against certain persons? At the societal level?
- How could the algorithm be investigated? (One possible approach is sketched after this list.)
- What would you need to assess the effects of the algorithm?
- How can it be communicated in a news outlet?
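One concrete way to approach the investigation question is black-box testing: submitting test profiles that are identical except for a single attribute and comparing how often the algorithm surfaces each one. Below is a minimal, hypothetical sketch in Python; `get_matches` is a stand-in for whatever instrumented client an investigation would actually use, and the bias it simulates is invented purely for illustration.

```python
# Hypothetical black-box audit of the imaginary matching algorithm:
# submit profiles that differ only in one attribute and compare how
# often each is shown to other users. get_matches() stands in for the
# instrumented client a real investigation would rely on.
import random
random.seed(42)  # reproducible toy run

def get_matches(profile: dict, trials: int = 1000) -> int:
    """Toy simulator: the algorithm secretly favors attribute 'A'."""
    bias = 0.6 if profile["attribute"] == "A" else 0.4
    return sum(random.random() < bias for _ in range(trials))

base = {"age": 30, "city": "Berlin", "bio": "same text for everyone"}
for attribute in ("A", "B"):
    profile = {**base, "attribute": attribute}
    shown = get_matches(profile)
    print(f"attribute {attribute}: surfaced {shown}/1000 times")

# A large, consistent gap between otherwise-identical profiles is the
# kind of evidence journalists would then try to explain and verify.
```

In the simulation, participants are not expected to write code; the point is that the same logic (controlled variation, repeated measurement, comparison) can be carried out with test accounts and careful note-taking.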

## Hands-on simulation (30 minutes):

Participants, in groups of five to ten (max. four groups), are given a stack of printed material to help them ideate on the topic. The papers include the (imaginary) profile of the researcher on the social search app, a description of the matching service, a selection of (imaginary) profiles seen by the researcher during preliminary research, the terms of use of the service, the patents filed by the service provider and excerpts from relevant legislation in certain jurisdictions.

Workshop facilitators discuss issues with participants as they carry out their task.

The material is also published on the IGF's online participation platform. Online participants are invited to share their ideas, which are then reported to the in-room participants during the presentation of results.

## Presentation of results (20 minutes):

Each group presents its results to the room. Workshop moderators highlight when a proposed solution is especially relevant to the topic or, on the contrary, when it is impractical.

## Wrap-up and conclusion (5 minutes).

___

# References

Rosenfeld, M., Thomas, R. J., and Hausen, S. (2019). "Disintermediating your Friends." [Draft paper]


Expected Outcomes: Participants will:
- Understand why it is important to hold algorithms to account
- Be made aware of the link between algorithmic accountability and Internet governance
- Appreciate what is required from platforms to make their algorithms interpretable, and what is required to explain the issue to different audiences
- Gain first-hand experience of the intricacies of investigating algorithms

During the simulation, whose format resembles breakout group discussions, each group will benefit from the presence of an expert who will foster the conversation as needed (for instance, by pointing out the most interesting pieces of information in long documents such as the terms of service or the patent).

The fifth expert will moderate the online conversation and link it to the in-room ones.

During the presentation of the results, all moderators will make sure to highlight a diversity of viewpoints, diverse both in content and in geographic origin (investigating an algorithm may not be done the same way in California as in Bihar).

Relevance to Theme: Artificial intelligence impacts the lives of citizens and corporations alike. Despite its omnipresence, assessing the role AI plays in driving policy and the economy is no easy task, not to mention holding AI to account. Additionally, AI is often anthropomorphised, ascribed agency and intentionality, and used as a curtain to conceal its creators' and operators' intentions and biases.

Ensuring that journalists and civil society can and do investigate algorithms and AI is a prerequisite for human-centric governance. Current rules can make it very hard to hold algorithms -- or, in fact, their creators and operators -- to account. The workshop will explore which levers can be activated for journalists to investigate AI effectively, and how governance could be adjusted to balance the need for openness against the need for privacy and confidentiality, be it at the commercial or administrative level.


Relevance to Internet Governance: Algorithms largely determine what kind of content users are exposed to. Content moderation relies heavily on them. Small changes to algorithms can have a significant impact on publishers and news outlets in terms of traffic and financial sustainability. Although access to information and free expression are priorities for sustainable development, the use of AI can hamper efforts directed at both these goals. Remedying this is difficult, however, due to the closed and complex nature of the underlying code, which is largely exempt from oversight and is often a closely guarded industry secret. Furthermore, the engineers who design algorithms and the managers who decide to deploy them do not always understand (or care about) their program's decision-making processes. Given the multi-stakeholder nature of the challenge at hand, this proposal aims to offer a hands-on way to demonstrate what algorithmic transparency and accountability look like in practice.

Online Participation

The IGF tool WebEx will be set up for a new meeting associated with the workshop, where remote participants will be able to follow the workshop and where, during the hands-on sessions, the documents will be made available in electronic format.

The online moderator will guide the discussion on the online participation platform just as the in-room experts help offline participants.

Proposed Additional Tools: The online counterpart of the workshop, on the Online Participation Platform, will be advertised on the moderators' social media channels.