IGF 2018 WS #321 Algorithmic accountability and societal responsibility

Organizer 1: Civil Society, Asia-Pacific Group
Organizer 2: Technical Community, Western European and Others Group (WEOG)
Organizer 3: Technical Community, Asia-Pacific Group

The policy issue our round table will address is the growing concern about the accountability of algorithmic systems (such as those used in AI). Decisions are increasingly being made, nudged or recommended by algorithmic systems, and as such there is a pressing need to guarantee the accountability of these systems. Closely linked to this is the need to ensure that societal responsibility is included in the decision-making logic, and optimization criteria, of these algorithmic systems.
In response to these concerns, the IEEE (Institute of Electrical and Electronics Engineers) launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in 2016, which includes the initiation of the P7000 series of technology standards focused on the ethical dimensions of technology. At the end of 2017, the international standards organizations ISO and IEC also launched a series of working groups to develop standards addressing AI issues such as trustworthiness. At the same time, various regional and national government bodies are engaged in inquiries and the development of policy documents to address the rising issue of algorithmic accountability (e.g. [1, 2, 3]).

[1] The European Group on Ethics in Science and New Technologies (EGE), European Commission, https://ec.europa.eu/research/ege/index.cfm
[2] How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence, CNIL, https://www.cnil.fr/en/algorithms-and-artificial-intelligence-cnils-repo...
[3] Artificial Intelligence and Human Rights - towards a Canadian Foreign Policy, Global Affairs Canada.
[4] AI Standardization White Paper, China Electronics Standardization Institute (CESI), https://baijia.baidu.com/s?id=1589996219403096393

Format: 

Round Table - 90 Min

Interventions: 

Ansgar Koene chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group and leads research projects on policy implications of AI/algorithmic systems at the University of Nottingham. Based on his work he has submitted evidence to numerous UK parliamentary inquiries related to AI, algorithmic decision making and Internet/platform regulation. Building on this work he will provide a UK/European perspective on the challenges and options for embedding societal responsibility and accountability in the algorithmic system development and deployment process.
https://unbias.wp.horizon.ac.uk/
http://sites.ieee.org/sagroups-7003/

Yohko Hatada is the founder and director of the Evolution of Mind, Life and Society Research Institute, which develops systems promoting a humane, democratic, symbiotic future society. She presented a workshop on “Paradigm shift to develop genuine global civilization and the role of ICT” at WSIS 2018.
https://emlsri.org

Edmon Chung serves as the CEO of DotAsia Organization and heads the secretariat for the Asia Pacific Regional Internet Governance Forum (APrIGF). Edmon also serves on the Executive Committee of Internet Society Hong Kong, which serves as the secretariat for the Asia Pacific Regional At-Large Organization (APRALO), and participates extensively in Internet governance issues.
Since 2002, Edmon has played a leadership role in the region-wide .Asia initiative, bringing together an open membership of 29 official country-code top-level domain authorities and regional Internet bodies. DotAsia is a not-for-profit organization with a mandate to promote Internet development and adoption in Asia. Edmon has served on many technical and policy development working groups that made possible the introduction of multilingual domain names and email addresses on the Internet.

Maroussia Lévesque works at the crossroads of law and technology. She has a background in interactive arts, having led interdisciplinary teams at the Obx lab for experimental media within the Hexagram research-creation institute. She was called to the Bar in 2013 and clerked for the Chief Justice of the Quebec Court of Appeal, Canada. She participated in the inquiry commission on the protection of journalists’ sources, investigating law enforcement’s electronic surveillance practices. She currently researches artificial intelligence and human rights at the Digital Inclusion Lab within Global Affairs Canada.

Charlie Martial NGOUNOU is Founder of AfroLeadership, a civil society organization promoting Democracy, Technology and Human Rights in francophone Africa. Charlie is an activist for Digital Rights, Internet Access, Open Governance, Accountability and Participation.

Shrisha Rao is a full professor at the International Institute of Information Technology – Bangalore (IIIT-Bangalore), a graduate school of information technology in Bangalore, India. His interests include distributed computing, cloud computing, and agent-based modeling. He is particularly interested in applications of agent-based modeling to understand group dynamics, and in understanding how social behavior can be influenced by biases and hidden presumptions.

Each of the speakers will provide an opening statement from their regional (Europe, Asia, North America and Africa) and sector (Academia, Policy think-tank, Regional IGF, Government, Human Rights NGO) perspective. The discussion will then be opened to the floor.
One of the speakers, Shrisha Rao, will be a remote online participant.

Diversity: 

The invited speakers/organizers for the roundtable comprise:
1 European male
2 Asian males
1 Asian female
1 North American female
1 African male

This session poses the question of how to promote beneficial, socially responsible uses of AI algorithms while avoiding harms such as unintended, unjustified and/or socially unacceptable algorithmic bias.

One aspect of the contemporary world is our heavy reliance on computation in most aspects of personal life and social interaction. Many of us decide which movie to watch, which book to buy, or which restaurant to eat at based on computational systems running poorly understood but implicitly trusted algorithms. Such ubiquitous artificial intelligence (AI) systems have thus displaced our own independent thinking and taken on the possibly undeserved role of trusted advisors and confidants. AI algorithms are no longer merely tools and procedural abstractions that define the manner in which computers work: they now have a large cultural context, are implicitly trusted with our deepest secrets, and are used as aids to our most consequential judgments. Yet at the same time, many AI algorithms are completely opaque even to their own creators, and may incorporate social biases and other shortcomings with users none the wiser.

This panel intends to consider some aspects of the prevalent "algorithmic culture," and suggest some points of concern that bear further scrutiny. It also intends specifically to consider the issues of algorithmic bias and its consequences, such as concerns arising from the use of algorithmic recommendations in law enforcement (e.g. PredPol [1]), sentencing and bail setting (e.g. COMPAS [2]), employee performance evaluation (e.g. Houston teachers [3]) or even the way cultural experiences are guided by opaque algorithms (e.g. Netflix [4]). We will also raise some historical and possible future consequences of the use of artificial intelligence outside a proper ethical framework.
[1] Predictive Policing, https://theintercept.com/2018/01/27/nypd-predictive-policing-documents-l...
[2] Machine Bias, Pro Publica (2016), www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sen...
[3] Houston Federation of Teachers v. Houston Independent School District, https://www.courthousenews.com/wp-content/uploads/2017/05/HoustonTeacher...
[4] Netflix movie recommendations, http://uk.businessinsider.com/how-the-netflix-recommendation-algorithm-w...

Discussion Facilitation: 

We propose a 90-minute roundtable discussion format consisting of two rotations of oral presentations and discussion, with attendees seated around tables. Each rotation will include 15 minutes of presentation (three of the invited speakers presenting for 5 minutes each) followed by 30 minutes of discussion and feedback. Roundtable presenters will bring targeted questions to pose to others at the table in order to learn from and with those attending.

Online Participation: 

We will use the existing University of Nottingham UnBias project website and the UnBias, IEEE, ISOC, .Asia, FATML, and IIIT-Bangalore mailing lists, as well as the Twitter accounts of each of the speakers, to gather interest from online participants, channeling them into the official IGF WebEx environment to participate in the session. It will also be possible for these online participants to submit contributions to the session in advance by email.
One of the speakers will be an online participant who will also support the online moderator in monitoring online input.
Online participants will also have a separate queue and microphone, which will rotate equally with the mics in the room; the workshop moderator will have the online participation session open, and will be in close communication with the workshop’s trained online moderator, to make any adaptations necessary as they arise.
