IGF 2023 Town Hall #105 Resilient and Responsible AI

Time
Tuesday, 10th October, 2023 (04:00 UTC) - Tuesday, 10th October, 2023 (05:30 UTC)
Room
WS 11 – Room J
Issue(s)

Digital Technologies to Achieve Sustainable Development Goals
Existing and New Technologies as Climate Solutions

Round Table - 90 Min

Description

Current issues surrounding AI governance include frameworks and checklists for AI risk management and control; as AI is developed and used across countries, regions, and organizations, projects have emerged to share best practices and incidents. On the other hand, AI can also be used to prevent incidents and crises, both long-term crises such as global warming and sudden, destructive crises such as natural disasters caused by large-scale earthquakes and climate change. It is important to use AI to manage crises and pave the way for recovery, in other words, to build resilience. However, this will also lead to AI systems becoming infrastructure, serving as the basis of social systems that handle large-scale data. There is a concern that AI that has become infrastructure may in turn cause disasters or inconvenience through oligopoly or monopoly by certain companies. It is therefore important for resilient AI to be a system that guarantees fairness, transparency, and accountability.

In this session, we will discuss the resilience and responsibility of AI, not only under normal conditions but also in emergencies. Because AI is developed and used across countries, regions, and organizations, the discussion of resilient and responsible AI must also include multiple stakeholders and vulnerable groups so that no one is left behind. For this reason, we will ask experts involved in the Global Partnership on AI and the Partnership on AI, who are engaged in multi-stakeholder discussions, as well as people with disabilities who have found new ways of working using alter-ego robots, to present topics including what they expect from AI and robots in the future. Japan is an earthquake-prone country where robots and AI have become part of daily life. We will also reflect the opinions and impressions of young people by including graduate students from the University of Tokyo among the organizing members. The conference in Kyoto will be an appropriate place to start discussions on the new value of resilient and responsible AI.

The format of this session will be a roundtable so that panelists who cannot come to Kyoto can present their topics using online conferencing tools. One of the panelists was born with spinal muscular atrophy and uses a wheelchair, but operates an alter-ego robot called OriHime (https://orylab.com/en/) to serve customers and give lectures. The idea is to bring this OriHime robot to the venue so that those who are onsite can feel the presence of the speaker. We are also planning to bring a demo OriHime robot to the venue so that visitors can experience an alter-ego robot. By using these robot avatars, we hope to increase interaction and exchange between onsite and online speakers and participants.

Organizers

The University of Tokyo
Arisa Ema, The University of Tokyo/Japan Deep Learning Association, Civil Society, Asia
Hirotaka Kaji, Toyota, Private sector, Asia
Jun Kuribayashi, The University of Tokyo (student), Civil Society, Asia

Speakers

Rebecca Finlay, Partnership on AI, Civil Society, USA/Canada
Inma Martinez, Global Partnership on AI, Civil Society, Europe
Hiroaki Kitano, Sony, Private Sector, Asia
David Leslie, The Alan Turing Institute, Civil Society, Europe
TBD, OriHime pilot, Civil Society, Asia 

Onsite Moderator

Arisa Ema

Online Moderator

Hirotaka Kaji

Rapporteur

Jun Kuribayashi

SDGs

3. Good Health and Well-Being
8. Decent Work and Economic Growth
9. Industry, Innovation and Infrastructure
11. Sustainable Cities and Communities
12. Responsible Consumption and Production
13. Climate Action
17. Partnerships for the Goals

Targets: Creating a resilient society is related to Goals 3, 11, and 13 of the SDGs. It is also related to Goals 9 and 12 in terms of the use of, and responsibility for, AI technology itself. In addition, we are planning to invite a speaker who has a disability and works through robots; the ability of such people to work relates to Goal 8. Inviting people involved in PAI and GPAI relates to Goal 17.

Key Takeaways

Considering situations, including crises, where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes, we need to establish a discussion of resilient and responsible AI. We propose that a large, complex system should be capable of maintaining or improving the value that humans enjoy through the system in response to various changes inside and outside the system.

To achieve system resilience in a human-centric way, by letting humans make and embody their own value judgements, an interorganizational and agile governance mechanism is needed.

Call to Action

The points presented above require urgent discussion and action under an international and comprehensive framework.

Broad outreach to people, including the general public, is also needed.

Session Report

At the beginning of the session, Dr. Arisa Ema (The University of Tokyo), one of the organizers, explained the purpose of the session. The aim of this session is to expand the concept of "Responsible AI," which is an important topic of AI governance, to "Resilient and Responsible AI" by considering the possibility of situations, including crises, where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes.

First, Carly and Yui, the pilots (operators) of the avatar robot OriHime, talked about their experiences from the user's point of view. Both use wheelchairs and feel the value of participating in society through the avatar robots. On the other hand, they have encountered situations in which they could not handle irregular events because of overreliance on technology. Carly shared an experience in which, during a power failure caused by a lightning strike while he was working at home, he was unable to turn on the switchboard by himself and lost communication with the outside. Yui talked about the anxiety and unnecessary apologies that people who need assistance face in a social system that is becoming increasingly automated. In a technology-driven society, where manuals exist but are not always put into practice, she realized that this assumption can break down not only in ordinary times but especially in times of disaster, and that she would have to rely on people. The common conclusion of both stories, that the balance between technology and human support matters and that the possibility of technology failing must be taken into account, is suggestive. Furthermore, it made us realize that the nature of a crisis can be diverse for a diverse society.

Next, Dr. Hiroaki Kitano (Sony), a researcher and executive of a technology company who is currently working on an AI project for scientific discovery, pointed out that such AI brings positive effects for human beings but also carries risks of misuse. He also highlighted the possibility of future large-scale earthquakes in Japan and the importance of avoiding excessive reliance on AI. As society's dependence on AI increases, there is a risk that AI will not be available when communication networks, stable power, and PC/mobile devices are lost in accidents such as large-scale power outages.

The organizers and three panelists, Dr. Inma Martinez (Global Partnership on AI), Ms. Rebecca Finlay (Partnership on AI), and Dr. David Leslie (The Alan Turing Institute), led the discussion based on the issues raised by the OriHime pilots and Dr. Kitano. Dr. Martinez mentioned the necessity of defining resilience and emphasized that the power of technology should be rooted in the values we learn from our families and national cultures; such empowerment can create resilience. Ms. Finlay pointed out that while assessments of AI systems before launch are widely discussed, little attention is paid to how systems affect different communities after they are released. Resilience and control methods are required throughout the life cycle of AI, i.e., during the research phase and both before and after launch. Focusing on machine learning, which has been the mainstream of AI in recent years, Dr. Leslie pointed out that data-driven systems may become vulnerable in a dynamic environment: as society and culture gradually change, machine-learning systems driven by past data have inherent limitations. He emphasized the importance of considering resilience because excessive reliance on data-driven systems may lead to stagnation in human creativity. In response to these discussions, Dr. Ema pointed out that we need to consider how technological and social perspectives on current topics such as generative AI will change. The audience raised the following three points:

  • The need for society to provide people with options for solutions.
  • The need for a more comprehensive impact assessment (technology, ethics, human rights, etc.).
  • The risk of forgetting skills due to dependence on technology.

A question was then raised by a participant about AI as critical infrastructure. In response, Dr. Martinez said that AI is an infrastructure-based service and that it creates unknown territory for society. She mentioned the resilience of the communication infrastructure in which she was involved and introduced an example in which a specific band continues to operate even if the whole network goes down in a disaster. She also pointed out the need to consider self-repair mechanisms for AI in the event of an infrastructure outage, and how to build not only system resilience but also human resilience. Ms. Finlay, responding to Dr. Martinez, touched on the possibility that AI can be introduced in various ways with various implications, and pointed out that systems need multiple layers of resilience; the way to understand how AI interacts within a system is to map the system and understand its effects. Dr. Leslie pointed out that AI is rapidly becoming an infrastructure and a general-purpose technology, and that it functions as a substitute for human thinking and action. AI is becoming a kind of utility, but if it becomes an infrastructure, the question is who should control it. Dr. Ema said that it is difficult to hold individual companies accountable when AI becomes infrastructural and goes beyond the scope of a single company, and that governmental and global discussions will be required.

As a summary of the discussion, the panelists highlighted the need for AI to be safe and to provide a solid foundation for society. They also emphasized the importance of defining and monitoring resilience to support society. In addition, they agreed on the need for international research institutions to discuss AI from scientific and technological perspectives in the face of its rapid commercialization. In response to these comments, Dr. Ema concluded the discussion with the hope that all of us will work together to realize resilient and responsible AI. The session received a variety of comments. A participant from the public sector appreciated the uniqueness of the theme and the importance of the discussion, while another participant raised practical questions, such as how to handle large and complex systems composed of multiple AI systems. It is important to continue the discussion on this topic.