IGF 2020 Pre-Event #63 Aiming for AI explainability: lessons from the field

Time
Wednesday, 4th November, 2020 (17:00 UTC) - Wednesday, 4th November, 2020 (18:30 UTC)
Room
Room Poland I
About this Session
AI systems will soon determine our rights and freedoms, shape our economic situation and physical wellbeing, and affect market behaviour and the natural environment. Amid the hype around ‘problem-solving’ AI, calls for (more) accountability in this field are gaining urgency. In this session we will not repeat them. Instead, we will consider their feasibility and discuss practical approaches to 'piercing the black box'. We will draw on practical lessons from the field.

Panoptykon Foundation and Access Now

Description

Theme: DATA

Subtheme: Data governance, Data-driven emerging technologies

One of the recurrent themes in AI discussions is what - for the purpose of this session - we call “the black box argument”. In the mainstream debate it is often argued that the logic of the most sophisticated (and allegedly most efficient) AI systems cannot be explained and we simply have to accept that; that explainability would come at the price of stalling progress in developing successful AI applications. During the session we will deconstruct this argument and prove some of its assumptions wrong. In particular:

  1. We will address why it is important to prioritise end-to-end transparency and accountability over other parameters, in particular in the case of AI systems that have a (potentially) significant impact on humans. Hence we will try to define the desired scope of the explainability requirement.
  2. We will showcase a few practical approaches to AI explainability:
  • model cards for transparent model reporting (disclosing the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information; a minimal sketch follows this list);
  • AI blind spot “discovery process” for designers and ML practitioners;
  • reverse-engineering methods and ex-post/local explanations as described in the ICO/Alan Turing Institute guidelines, which pave the way for independent auditing.
  3. In an open discussion with all remote participants, we hope to discuss the following issues:
  • whether there is an emerging consensus in academic and business community regarding the explainability standard for AI systems that affect humans;
  • what is the value of explainability from a government, business, research, and individual perspective;
  • what are the technological, political, legal barriers to ensuring full explainability of AI systems that we need to take into account;
  • whether a “layered” approach - assuming that different stakeholders have access to different levels of explanations - is the best way to implement explainability in practice;
  • how to deal with users’ expectations regarding AI transparency/explainability; is it realistic to propose ‘AI labels’ that will be both meaningful and intelligible;
  • what regulatory mechanisms seem most appropriate to achieve explainability of AI systems (technical standards; industry self-regulation and best practices; binding regulation)?
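
To make the model-cards idea mentioned above more concrete, here is a minimal, hypothetical sketch of a model card expressed as structured data. The field names, the model name, and all numbers are illustrative assumptions, not a prescribed schema or an actual evaluation.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelCard:
    """Hypothetical minimal model card for transparent model reporting.

    Field names and values are illustrative assumptions, not a prescribed schema.
    """
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    evaluation_data: str
    overall_metrics: dict[str, float]               # performance on the evaluation set
    subgroup_metrics: dict[str, dict[str, float]]   # performance broken down by affected group
    known_limitations: list[str]


card = ModelCard(
    model_name="loan-risk-scorer-v2",  # hypothetical system
    intended_use="Pre-screening of consumer loan applications, with human review",
    out_of_scope_uses=["Fully automated rejection without human review"],
    evaluation_data="Held-out 2019 application data, n=50,000",
    overall_metrics={"accuracy": 0.87, "false_positive_rate": 0.06},
    subgroup_metrics={
        "applicants under 30": {"false_positive_rate": 0.09},
        "applicants 30 and over": {"false_positive_rate": 0.05},
    },
    known_limitations=["Not validated for self-employed applicants"],
)

# Publishing the card alongside the model makes its intended context and
# evaluation results available to outside scrutiny.
print(json.dumps(asdict(card), indent=2))
```

Publishing such a card alongside the model documents the context in which it is intended to be used and how it was evaluated, which is the core of the model-cards approach to transparent reporting.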

Moderator: Katarzyna Szymielewicz, Panoptykon Foundation

Online Moderator: Fanny Hidvégi, Access Now

Speakers:

Rapporteur: Katarzyna Szymielewicz, Panoptykon Foundation

1. Key Policy Questions and related issues
Even black-boxed systems can be explained, given the right type of explanation. However, the most important aspect to be explained is not technical, but rather political: what is the purpose of the system and what are its success factors ("this is often the most striking blind-spot" – Hong Qu).
It is crucial to give affected individuals access to justice – a chance to contest an unfair or erroneous decision – rather than a mere explanation of why the system produced a certain result.
We must take a human-centred design approach to monitor for disparate impacts that harm real people, and for second-order externalities that might affect vulnerable populations (a minimal sketch of one such check follows).
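
As one concrete illustration of what monitoring for disparate impact can mean in practice, the sketch below computes a simple disparate impact ratio (the "four-fifths rule" heuristic) for two groups. The group labels, outcome data, and threshold are illustrative assumptions, not an audit methodology endorsed by the speakers.

```python
# Minimal sketch of one common disparate-impact check (the "four-fifths rule"):
# compare favourable-outcome rates across groups. Group labels and data are
# illustrative assumptions, not a complete fairness audit.

def favourable_rate(outcomes: list[int]) -> float:
    """Share of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower favourable rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = favourable_rate(group_a), favourable_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = favourable decision, 0 = unfavourable decision
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # e.g. applicants over 30 (hypothetical)
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # e.g. applicants under 30 (hypothetical)

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("potential disparate impact - investigate further")
```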

2. Summary of Issues Discussed

Practical approach to explainability: how can it be achieved?

While there are test and edge cases for hardware and software, there is no "testing standard" for AI. Speakers agreed that responsibility for the impact of AI systems needs to start inside the organisations that produce them. AI system developers should perform tests to detect potential flaws and risks ("surface unintended consequences" – Hong Qu) before implementation, and they should make the results of these tests public (e.g. using the model cards framework).

Auditing AI systems is only a starting point for more substantial reform. In many cases, the legal regulations that we need to challenge corporate and government uses of AI are already in place; they just need to be enforced or pursued through litigation. Authorities and courts are prepared to handle confidential information and can therefore be employed to audit and investigate AI systems. They also have the democratic mandate to do so.

Value of explainability for people affected by AI systems

Explainability as we see it today is not always actionable for end users. Most stakeholders do not need to understand this level of technical information. Moreover, the way field experts talk about explainability often misses the bigger picture: it tends to focus on the interpretation of an individual result (i.e. how the AI system went from general features to an individual decision), but ignores other essential properties of the system that are equally important, such as its robustness, safety, and fairness.

Different kinds of explanations of how AI decisions are made may not have much effect on whether people think those decisions are fair; people's intuitions about whether it is acceptable to make a decision about someone using a statistical generalisation tend to override any differences between the ways those generalisations are explained.

3. Key Takeaways

AI can have a significant impact on society, not only on individuals. To capture this impact, we need to design different types of explanations (the ICO proposed such a framework in its guidance on explainability). It is also advisable to separate local from global explainability, e.g. explaining a single decision (“why it was reached and what I can do about it”) versus the entire system (understanding how the model functions, whether it discriminates against certain groups, whether it is biased, etc.); the sketch below illustrates this distinction. We must take a human-centred design approach to monitor for disparate impacts that harm real people, and for second-order externalities that might affect vulnerable populations.
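
To illustrate the local vs. global distinction mentioned above, the following sketch uses a deliberately simple, hypothetical linear scoring model: its weights give a global explanation of how the model treats features in general, while per-instance feature contributions give a local explanation of a single decision. The feature names, weights, and example applicant are assumptions made for illustration only.

```python
# Minimal sketch: global vs. local explanation for a hypothetical linear scoring model.
# Feature names, weights, and the example applicant are illustrative assumptions.

FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1

def score(applicant: dict) -> float:
    """Linear score: higher means a more favourable outcome."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def global_explanation() -> dict:
    """Global view: how the model weighs each feature across all decisions."""
    return dict(sorted(WEIGHTS.items(), key=lambda kv: abs(kv[1]), reverse=True))

def local_explanation(applicant: dict) -> dict:
    """Local view: how much each feature contributed to this single decision."""
    return {f: WEIGHTS[f] * applicant[f] for f in FEATURES}

applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}  # standardised values
print("score:", round(score(applicant), 3))
print("global:", global_explanation())
print("local:", local_explanation(applicant))
```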

Even black-boxed systems can be explained, given the right type of explanation. On the other hand, 'unknown unknowns' in sophisticated AI systems are hard to anticipate: there will always be emergent risks in complex systems that no one can explain or expect.

Affected individuals are more concerned with transparency and with obtaining access to basic information, such as: what data is used by the system, who may have access to their data, what types of decisions are produced, and how to challenge them. Explainability is not a silver bullet. In particular, it “does not solve fairness”. But, if implemented well, it may empower system users to question bias and lead to other desirable results, such as increased reliability and trust.