IGF 2018 WS #427 AI will solve all problems. But can it?

Room
Salle VII

Organizer 1: Fanny Hidvegi, Access Now

Organizer 2: Jan Gerlach, Wikimedia Foundation

Organizer 3: Charlotte Altenhöner-Dion, Council of Europe

Organizer 4: Nicolas Suzor, Queensland University of Technology

Speaker 1: Grace Mutung'u, Civil Society, African Group
Speaker 2: Malavika Jayaram, Civil Society, Asia-Pacific Group
Speaker 3: Amba Kak, Civil Society, Asia-Pacific Group
 

Additional Speakers

Prof. Karen Yeung (Birmingham Law School & School of Computer Science, University of Birmingham)

Moderator

Jan Gerlach

Online Moderator

Fanny Hidvegi

Rapporteur

Nicolas Suzor

Format

Other - 90 Min
Format description: Micro-multistakeholder community debate

Interventions

The format allows carefully selected speakers to use their expertise to lead group discussions. It emphasises diversity and participation in order to develop more reliable estimates of the suitability of AI to diverse problems than could be achieved by a small group of analysts in a panel or other traditional format. The goal is explicitly to generate a high degree of diversity of viewpoints, stakeholders, and geographical perspectives. Our expert moderators will be briefed to facilitate the group discussions in an inclusive manner that promotes diversity of opinion and helps to include newcomers to the debate. The group discussions will also enable other participants to contribute to and shape the workshop discussion. Finally, the brief debate in the second half of the session gives any participant the opportunity to become an active member of the workshop by presenting the outcomes of their group’s discussion, and all participants’ votes contribute to the group’s consensus-based confidence estimates.

Diversity

Diversity will be ensured in several ways and across many dimensions, in particular gender, geography, stakeholder group, policy perspective, and remote participation. The format not only allows but actively enables the group moderators (including the people listed as speakers) to represent a diverse set of backgrounds, and it also brings in voices from the room, who will be encouraged to speak up during the debate phase so that the diversity of viewpoints is even greater. The organizers and highlighted speakers already ensure gender, geographic, stakeholder, and policy diversity, which will be further enhanced by adding more moderators and by involving other people in the room as active participants.

The session will address the applicability of AI to specific challenges in order to create a more nuanced analysis of the potential for AI to solve the pressing challenges that society is currently facing online. In diverse policy debates and private sector initiatives worldwide, there is often a remarkably strong belief in AI as a solution. This belief was fittingly summarized by technology news outlet The Verge after Mark Zuckerberg’s testimonies before the US Congress and the European Parliament: “Moderating hate speech? AI will fix it. Terrorist content and recruitment? AI again. Fake accounts? AI. Russian misinformation? AI. Racially discriminatory ads? AI. Security? AI.” This proposed workshop will bring together experts from a diverse range of constituencies to lead and moderate groups. Moderators will be invited and assigned a societal issue that can potentially be solved by AI, focusing particularly on online content regulation questions including terrorist content, hate speech, and misinformation. For each issue, one moderator will represent the pro side and one the contra side. Each will prepare one slide in advance of the workshop to start the group discussions. The participants of the groups will discuss each issue with the goal of preparing a short set of talking points for the debate in the second part of the workshop. After one hour the groups will reconvene for a short debate in which selected representatives (not necessarily the original moderators) will present the outcomes of the group discussions. Instead of a win/lose vote on each issue, we will develop a range of confidence about the applicability of AI to each issue in the near term. We will use the ‘wisdom of the crowd’ (including online participants) to develop a set of reasonable estimates from the debate. After the session we will create an infographic that can be used to disseminate these predictions and better inform future policy debates.

The format is designed to be interactive and participatory throughout the entire session. The group discussions will be led by experts who are briefed to facilitate discussions that involve a wide range of opinions from a diverse group of participants. We deliberately move away from traditional speaker-audience formats in order to facilitate a more inclusive discussion that improves the quality of the final debate. We use a small group breakout format to ensure that we are able to engage as many participants as possible. The final output -- an estimate of participant confidence in the applicability of AI to various policy issues -- is explicitly designed to reflect consensus and record the extent of differences of opinion among participants in the room and online.

The development and deployment of algorithmic decision-making, machine learning systems, artificial intelligence (AI), and other related emerging technologies to address societal ills online is the subject of many timely and pressing policy debates. Both private and public sector initiatives are mostly based on the belief that, through AI, society will be able to better address a broad spectrum of issues, ranging from hate speech and extremist content to copyright violations and the spread of misinformation online. Unfortunately, there is still a great deal of uncertainty in policy debates about which of these issues AI is most likely to be useful in addressing. The session uses an open debate format, led by experts representing different stakeholder communities, to develop more concrete predictions about the extent to which the application of AI could help solve pressing challenges online, while also taking into account its human rights implications.

Online Participation

The online participation of a diverse group of stakeholders is a crucial component of this session’s plan to develop a set of reliable indicators of confidence about the potential for AI to be usefully applied in various pressing policy debates. During the breakout component, we dedicate a moderator to rotate between groups, reporting on discussion to the online participants as it progresses, and feeding the comments of online participants back to each of the breakout groups. In the debate phase, online participants will have the opportunity to present comments and responses (through our dedicated online moderator) on an equal basis with participants in the room. Finally, the online moderator will aggregate votes from the online participants in the final tally of confidence levels on each issue.
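To make the final tally concrete, here is a minimal, purely illustrative sketch (in Python) of one way the in-room and online confidence votes could be combined into a per-issue confidence range for the infographic. The issue names, the 0-100 confidence scale, and the choice of a median with an interquartile range are assumptions for illustration only; they are not specified in the session plan.

# Hypothetical aggregation of confidence votes; not part of the session plan.
# Assumes each participant scores each issue from 0 (AI will not help) to 100
# (AI will solve it); the summary keeps a spread so disagreement is recorded.
from statistics import median, quantiles

def aggregate_confidence(votes):
    # Return the median vote, the interquartile range, and the vote count.
    q1, _, q3 = quantiles(votes, n=4)
    return {"median": median(votes), "iqr": (q1, q3), "n": len(votes)}

# Illustrative votes from the room and from online participants.
room_votes = {"hate speech": [40, 55, 30, 65], "terrorist content": [70, 60, 80, 50]}
online_votes = {"hate speech": [45, 35], "terrorist content": [75, 65]}

for issue, votes in room_votes.items():
    combined = votes + online_votes.get(issue, [])
    print(issue, aggregate_confidence(combined))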

Agenda

Opening of the session by the co-organizers [5-10mins]

  • Fanny Hidvegi (Access Now)
  • Jan Gerlach (Wikimedia Foundation)
  • Charlotte Altenhöner-Dion (Council of Europe)
  • Nicolas Suzor (QUT Law School)

 

The opening introduction will be short; we will focus on explaining the format and inviting all participants to actively contribute to the session.

 

Small group discussions [50mins]

  • The four group leaders will kick off the group conversations, which will be facilitated by the main pro and con arguments prepared by the assigned speakers. There will be no traditional presentations, but we may use one slide (or flip chart) for the pros and cons, respectively.
  • The participants of the groups will discuss each issue with the goal of preparing a short set of talking points for the debate in the second part of the workshop.

 

Debate [20-25mins]

  • The groups will reconvene for a short debate.
  • Selected representatives (not necessarily the original moderators) will present the outcomes of the group discussions.

 

Vote and closing of the session [5-10mins]

  • Instead of a win/lose vote on each issue, we will develop a range of confidence about the applicability of AI for each issue in the near term. 
  • Participants will express this range of confidence based on the small group discussion and the debate. 
Session Report

- Session Type (Workshop, Open Forum, etc.): Workshop (Micro-multistakeholder community debate)

 

- Title: AI will solve all problems. But can it?

 

- Date & Time: Tuesday, 13 November, 2018 - 09:00 to 10:30

 

- Organizer(s):

 

Organizer 1: Fanny Hidvegi, Access Now

Organizer 2: Jan Gerlach, Wikimedia Foundation

Organizer 3: Charlotte Altenhöner-Dion, Council of Europe

Organizer 4: Nicolas Suzor, Queensland University of Technology

 

- Chair/Moderator: Jan Gerlach and Fanny Hidvegi

 

- Rapporteur/Notetaker: Nicolas Suzor

 

- List of speakers and their institutional affiliations (Indicate male/female/transgender male/transgender female/gender variant/prefer not to answer):

 

Grace Mutung'u, Kenya ICT Action Network Associate, female

Malavika Jayaram, Digital Asia Hub, female

Amba Kak, Mozilla, female

Prof. Karen Yeung (Birmingham Law School & School of Computer Science, University of Birmingham), female

 

The format is designed to enable participation from many people in the room. Small group discussions will be led by pre-assigned discussants; there will be no traditional panel speakers.

 

- Theme: Emerging technologies

 

- Subtheme: Artificial Intelligence

 

- Please state no more than three (3) key messages of the discussion. [150 words or less]

 

  • One key message of the session is the need to explore specific societal challenges related to the digital space that are impacted, positively or negatively, by artificial intelligence.

  • The goal is for AI and other emerging technologies to be individual-centric and human rights-respecting. We will discuss areas where AI can be an enabler of human rights, as well as use cases that demonstrate human rights risks or violations.

  • More specifically, the discussion will focus on the question of automation and AI as an alleged solution to content regulation, whether the problem to be solved is hate speech, terrorist content, or disinformation.