Description:

“They ignore long-term risks, gloss over difficult problems (‘explainability’) with rhetoric, violate elementary principles of rationality and pretend to know things that nobody really knows.” — Professor Metzinger, member of the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG)

This was the scathing critique Professor Metzinger gave in April 2019 of the report produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG), a document he himself helped draft. The debate on AI governance and ethics is disproportionately influenced by industry initiatives and corporate aims [1]. Even though a variety of actors are developing ethical frameworks, concerns from civil society and academia struggle to gain industry support and, even in multi-stakeholder settings, are easily diluted [2]. For instance, during deliberations at the EU-HLEG [3], some non-negotiable ethical principles that were originally articulated in the document were omitted from the final version because of industry pressure [4].

Civil society is not always invited to partake in deliberations around ethical AI, and when it is, the seats at the table are not divided equitably. In India, for instance, an AI task force charged with creating a policy and legal framework for the deployment of AI technologies was constituted without any civil society participation [5]. In the EU-HLEG, industry was heavily represented, but civil society did not enjoy the same luxury [6]. In the United Kingdom, the Prime Minister’s Office for AI has three expert advisers: one academic and two industry representatives [7]. A recently disbanded AI ethics council set up by Google included zero civil society representatives. Such ethics frameworks and councils are often presented as an alternative or preamble to regulation. In practice, however, they regularly serve to avoid regulation under the guise of encouraging innovation.
Many ethical frameworks are fuzzy, lack shared understanding, and are easy to co-opt. By publishing ethical principles and constituting ethics boards, companies and governments can create the illusion of taking the societal impact of AI systems seriously, even when that is not the case. This kind of rubber-stamping is enabled in particular by the lack of precision around ethical standards. When such initiatives lack accountability mechanisms or binding outcomes, they are little more than “ethics washing” [8]. Yet, when done right, such self-regulatory initiatives can play an important role as one facet of robust AI governance.

In this roundtable we will do three things: first, we will discuss the recent surge in ethical frameworks and self-regulatory councils for AI governance. Second, we will discuss their promises and pitfalls. Third, we will discuss other strategies and frameworks, including those based on human rights law, as viable alternatives for, and additions to, ethical frameworks for AI governance.

The agenda is as follows:
00:00 - 00:05: short scene-setting by the moderator
00:05 - 00:45: four panellists provide their take on the issue, representing industry, government, civil society and academic perspectives
00:45 - 01:00: panellists engage in discussion with each other, guided by the moderator
01:00 - 01:25: panellists engage with the audience, guided by the moderator
01:25 - 01:30: moderator summarizes best practices from panellists and audience, and rounds off the conversation by suggesting next steps for AI governance

References:
[1] https://tech.newstatesman.com/guest-opinion/regulating-artificial-intell...
[2] https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0080
[3] European Commission 2018. High-Level Group on Artificial Intelligence. https://ec.europa.eu/digital-single-market/en/high-level-group-artificia...
[4] http://www.europarl.europa.eu/streaming/?event=20190319-1500-SPECIAL-SEM...
[5] https://www.aitf.org.in/members
[6] http://www.europarl.europa.eu/streaming/?event=20190319-1500-SPECIAL-SEM...
[7] https://tech.newstatesman.com/business/demis-hassabis-office-ai-adviser
[8] https://www.privacylab.at/wp-content/uploads/2018/07/Ben_Wagner_Ethics-a...
Expected Outcomes:
Cross-industry and stakeholder dialogue on how to govern AI systems
Rough consensus on the modes and methods for effective AI governance
Concrete suggestions for alternative frameworks for AI governance
Identification of best and worst practices surrounding ethical frameworks and councils for AI governance
Creation of a network of like-minded knowledge experts on AI governance