IGF 2019 WS #380
What about trust? What about us? automation/human oversight

Organizer 1: Lucena Claudio, Georgetown University, USA
Organizer 2: Salvador Camacho Hernandez, Kalpa Proteccion.Digital
Organizer 3: Varsha Sewlal, ISOC NCSG

Speaker 1: Aisyah Shakirah Suhaidi, Civil Society, Asia-Pacific Group
Speaker 2: Martin Silva Valent, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 3: Olga Kyryliuk, Civil Society, Eastern European Group

Moderator

Olga Kyryliuk, Civil Society, Eastern European Group

Online Moderator

Salvador Camacho Hernandez, Private Sector, Latin American and Caribbean Group (GRULAC)

Rapporteur

Varsha Sewlal, Civil Society, African Group

Format

Other - 60 Min
Format description: Open Fish Bowl - the key requirement is to have the primary speakers seated in a semicircle in front of the audience, with no physical barrier between them (no stage, tables, or podiums). One chair near the speakers should be left unoccupied so that anyone from the audience can join at any point during the discussion.

Policy Question(s)

What should be understood as meaningful human oversight/control for the purposes of automated decision-making?
Is this control indispensable in an environment of automated systems?
Are there activities that require higher levels of human control over automated decision-making than others? By which criteria?
What are the key obstacles to achieving human oversight of automated processes?
Are existing institutions, bodies, or organizations equipped with the necessary tools to discuss and implement this oversight? Are new structures necessary?
What is the current state of regulatory measures in different countries and regions concerning this issue? Have they addressed it adequately?
What kind of enforcement is available in such cases?
What concrete measures can be taken to ensure some extent of human oversight over automated decision-making?

SDGs

GOAL 10: Reduced Inequalities
GOAL 11: Sustainable Cities and Communities
GOAL 12: Responsible Consumption and Production
GOAL 15: Life on Land
GOAL 17: Partnerships for the Goals

Description: The session will map and explore regulatory and governance initiatives that attempt to address how human oversight is exercised over decisions taken by automated systems. The necessity of this oversight will be discussed, and different areas of human activity will be tested against it. Existing regulatory initiatives will be brought to the table and analyzed in terms of reach, nature, and content. Most of them provide an abstract mechanism that mandates human oversight, but none of them address how it will be concretely exercised. Alternatives for doing so will be the core of the debate. Once these alternatives are presented and explained, the audience will explore the issues together with the speakers, exchanging views, probing weaknesses, and highlighting best practices.

The interactive fishbowl format is well suited to an open and inclusive exchange of ideas between the audience and the key speakers. Remote participation will be strongly encouraged during the discussion phase, which will take up the major part of the workshop.

The speakers will present their perspectives on the policy questions raised above, drawing on both their professional expertise and their experience as regular Internet users. Coming from different stakeholder groups, the speakers will present the difficulties of, and proposals for, ensuring human oversight when autonomous systems are in charge of initial decision-making, providing food for thought to onsite and online participants. The moderators will ensure that interventions from the audience (both onsite and remote) are welcomed in a timely manner.


Expected Outcomes: Bringing the issue of human oversight over automated decision-making systems to the IGF ecosystem, with the space and attention that comes with it, will help the issue mature by ensuring that the debate takes place in a multistakeholder setting. The first contribution of the session is to highlight this issue as an important, autonomous one stemming from data protection, the use of artificial intelligence, and human rights in digital spaces; that is the first awareness takeaway. The session will also place the theme in a global perspective, where diverse points of view will help those involved in the more concrete measures around the issue to enrich their repertoire, share views and perspectives, voice their concerns, and listen to alternatives. This will put them in a position to shape better proposals for solutions. The other expected takeaway is therefore to inform the debate and create conditions so that regulatory and governance measures addressing human oversight over decisions taken by autonomous systems take into account the widest possible range of perspectives and variables.

The issue under discussion is relevant to each and every one of us, and therefore the most interesting ideas might come from the least expected places. We will make sure that the onsite and online moderators work in tandem, notifying each other about interventions from the audience. By opting for an open fishbowl format, we will make the discussion as inclusive as possible, giving participants the possibility to jump in at any point, without dividing the workshop into classic presentation and Q&A parts. After short introductory remarks by the primary speakers, any participant from the audience may take an empty chair near the speakers and present their perspective. Throughout the workshop, one chair is to be kept free for new people to join and speak; once a new person joins the semicircle of speakers, one of the presenters who has already spoken will vacate their chair. The moderator will facilitate the process and explain the rules at the beginning of the workshop.

Relevance to Theme: More and more human activities are incorporating automated decision support and partially or fully automated decision-making techniques into their everyday environment. The use of analytics over large amounts of data already streamlines processes, with promises of efficiency, adequacy, and the correlation of information at a scale that would otherwise not be feasible. Recent research, studies, and the observation of some of these experiences show solid grounds for concern that automated systems might not give the most appropriate response for a number of tasks that involve an evaluation of fairness and of other highly semantically charged aspects. This is not an unknown issue. A number of national and international regulatory and/or governance initiatives are already in place that attempt to ensure some extent of human review or oversight of this automation. Legislative or regulatory safeguards are a good starting point towards this goal, but important as they may be, their mere existence is still not enough. The existing provisions, as well as those currently under elaboration, tend to be rather conceptual and abstract, comprising overall prescriptions that often do not acknowledge - and consequently do not adequately tackle - the concrete obstacles to implementing human review over semi- or fully automated systems. These obstacles range from the scale of the human intervention needed to oversee contested automated decisions to the organic way in which code behaves in learning applications, reorganizing itself according to parameters or rules that are not predictable. This session will map those initiatives, question the nature and extent of human oversight that is necessary in different human activities, and attempt to identify concrete ways to implement this oversight in light of the technical constraints involved.

Relevance to Internet Governance: A number of regional, international, and national regulatory initiatives are already addressing the issue of safeguarding human oversight over automated decision-making systems. These initiatives themselves relate closely to Internet Governance, since most of them currently come from the data protection arena, with this discussion being a spin-off of it. Bringing the issue to the IGF ecosystem, with the space and attention that comes with it, will help it mature by ensuring that the debate takes place in a multistakeholder setting. It will also place it in a global perspective, where diverse points of view will help those involved in the more concrete measures around the issue to enrich their repertoire, share views and perspectives, voice their concerns, and listen to alternatives. This will put them in a position to shape better proposals for solutions.

Online Participation

We place a strong focus on, and expect, extensive online participation. To that end, we will share information about the session and the possibility of joining remotely with our professional networks in advance. The online moderator will notify the onsite moderator whenever there is an intervention from a remote participant; it will be read out, along with any comments from the onsite participants. We truly want the most diverse voices to be heard.


Proposed Additional Tools: We will use Twitter and other social media pages administered by the workshop organizers. We will also ask participants and speakers to tweet and share the most interesting ideas via social media during the session.