IGF 2022 Day 3 WS #219 Global AI Governance for Sustainable Development

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ONIKE SHORUNKEH-SAWYERR: Hello.  Good afternoon, everyone.  I hope you can hear me.  Let me start by saying good afternoon, ladies and gentlemen.  Welcome to today's session on Global AI Governance for Sustainable Development, which takes place here in Addis Ababa, Ethiopia, as well as online during the 17th Internet Governance Forum.

The aim of today's session is to discuss the interlinkages between artificial intelligence, sustainable development and the connected challenges and potentials around governance and policy.  With this event we would like to foster exchange and networking between experts and lay the ground for further activities on AI governance within the digital dialogues between the different countries represented here today.

I hope you're all ready for an exciting discussion on AI and its role in development.

It is warm in here for those of you who have been sitting in the room; I hope you at least had a chance to have a sip of water, since there was no opportunity for fresh air.

I would like to introduce myself: my name is Onike Shorunkeh‑Sawyerr, and I have the pleasure of moderating this session today.  I work as an advisor in the African Union office here in Addis Ababa of the German agency for international cooperation and development.

Enough about me for now; let us focus on our exciting speakers who are with us today, some of them here on the panel.  Before we start with that, I would like to very briefly guide you through our housekeeping rules.

They should also be up here on the slide.

As you know, this is a hybrid event.  Some of us are in the room, and there are many more participants joining online, hopefully tuning in from all over the globe, as is the case for most of our speakers, who unfortunately cannot be with us physically today, but you will get to know them in a bit.

For those of you joining online, feel free to pose questions or comments for our round of questions later on in the chat.  For those in the room, you can use the microphones in front of you.  I think by now you're probably used to this.  One thing to say at the beginning: the room here is a bit big and pretty formal, but we hope that we can still have an interactive, engaged discussion today, so please don't let the microphones intimidate you.

So I would say with that, I would like to move on to our opening remarks.  I hope our speakers are all with us already and ready to come in.

I would like to introduce the first speaker today, Mr. Jose Gontijo from Brazil, who is a member of the OECD Working Party and network of experts on AI.  He works at the Brazilian Ministry of Science, Technology and Innovation as Secretary for Innovation and Entrepreneurship.  In the past he held positions in the Ministry of Science, Technology and Innovation and in the Ministry of Communications as a science and technology analyst and project manager in the broadband department, as well as Director of the Department of Industry, Science and Technology.  So we have a definite expert on AI and digital technologies here with us.

Jose Gontijo, a warm welcome to you.  Please, you now have the floor.

>> JOSE GONTIJO: Thank you.  Thank you very much.

Dear panelists, dear co‑organizers, dear audience, it is a pleasure to be here.  Greetings from Brazil.

>> ONIKE SHORUNKEH-SAWYERR: We can't hear you?

>> JOSE GONTIJO: Now, okay.

Good morning, good afternoon for you I guess.

It is a pleasure to be here, dear panelists, co‑organizers and dear audience.  Greetings from Brazil.

It is amazing to participate in this workshop on Global AI Governance for Sustainable Development.  I would have liked to be there in person, but unfortunately I had to cancel my trip to Ethiopia.  Nevertheless, I'm excited to share the stage with experts from around the world, and thank you for your participation.

I also thank our co‑host, the German Federal Ministry for Digital and Transport, as well as the Secretariat for Digital Dialogues, for organizing this workshop together with the Brazilian Ministry of Science, Technology and Innovation.  I'm happy that our cooperation continues by co‑organizing this session.

We selected the issue of AI governance because artificial intelligence has advanced rapidly on a global scale, and alongside the technological progress, the discussion on the adequate governance approach is being held widely.  We launched the Brazilian strategy for artificial intelligence in 2021, and we are discussing a legal framework for regulation in the Congress.

In this context, regulatory initiatives such as the European Union's AI Act are forthcoming as well.  There is clearly movement in this field and in the discussions in global forums, and this is timely.  Despite efforts in creating sustainable governance systems, it seems that the connection of AI to the United Nations Sustainable Development Goals has not yet fully arrived in policymaking.  This is surprising given the potential that is attributed to digital technology for sustainable development, and I'm convinced that aligning the governance of AI with the SDGs may be a useful compass for policymaking to improve lives.

This discussion will give insights to stakeholders and groups, and we will continue working on Sustainable Development.  I look forward to a fruitful discussion.

Thank you very much, Chair.

>> ONIKE SHORUNKEH-SAWYERR: Thank you very much, Jose Gontijo, for sharing welcoming remarks.

I would like to move on immediately to the person on my left, Heiko Wildner from Germany, a policy officer and advisor in the German Federal Ministry for Digital and Transport who is responsible for the bilateral digital policy relations between Germany and different countries.  So another digital expert is here next to me; welcome, it is a pleasure to sit next to you.  I kindly hand over to you.

>> HEIKO WILDNER:  Thank you very much.  Thank you, Mr. Secretary Jose Gontijo, for your friendly words; it is a pleasure to see you again, and I would have loved to meet you here in person.

To all joining us in the room and online, we appreciate your interest in the subject, Global AI Governance for Sustainable Development.  As has been said, I represent the German Federal Ministry for Digital and Transport, and as such we have established with selected partner countries a format called digital dialogues.  A successful one is the one set up with Secretary Jose Gontijo and Brazil, a very active exchange.  Our digital dialogues are meant to be a platform for exchange on digital policy matters.

A common objective is to shape better framework conditions for the Digital Transformation in our countries for the good of all: for the people, for society and, of course, for the economy.

Similar to what the IGF does on a larger scale, we also actively include Civil Society, business and research stakeholders and bring them into contact with government and political decision makers.

We're convinced that many questions around digital transformation can only be answered through multistakeholder initiatives.  In our digital dialogues we discuss political topics, but we also support innovative start‑ups and SMEs and help them address topics of the future, ranging from Internet Governance and data policy to emerging technologies such as AI.

It is our conviction that balanced and transparent rules are needed for implementing AI solutions on a wider scale, and only by aligning those rules internationally can the economy and society truly benefit from this technology.  And these rules must be made to give us trust.

Trust, for example, that the handling of our data is safe.  Trust in the security of AI applications.  At the same time, the rules must give enough creative leeway so that innovation can happen and make its way onto the global markets.

Here the IGF is the ideal place for a truly global exchange on matters of digital transformation and Internet Governance, and our workshop is a good example of this, as we have speakers joining us from Brazil, Kenya, Mexico and Germany, representing the public and private sectors as well as Civil Society and the community.

I'm now looking forward to engaging discussions among all participants, relying on the thoughts, opinions, questions of all of you today.  Please feel invited not only to listen but to be part of the discussion.

Thank you very much and back to you.

>> ONIKE SHORUNKEH-SAWYERR: Thank you, Mr. Wildner, for your inputs.  Many thanks to our first two speakers for opening our session today and providing some important insights from the policy level in Germany and Brazil.

Before we start our panel discussion and invite the next round of speakers, we would also like to hear from you in the audience and listen to your own views and ideas on the topics, and therefore we have prepared a little online survey that we would now like you all to join.  The platform we'll be using is QuestionPro; a bit more information on this in a moment.  I would just like to say at the beginning that, of course, when we talk about digital technology, there is always the risk of challenges and things maybe not going as planned.  We hope everything works out.  Please bear with us.

So now we would invite you all to use your mobile phones or tablets and do one of the following three things: either you scan the QR code on the screen, or you go to questionpro.io and enter the code 4248, or you click on the link in the chat.  This should then take you right to our little online survey.

In this survey, we would like you to answer the following four questions:

I will give you a bit of time to actually join.

I hope some of you have found your way to questionpro.io.

The first question: what do you associate with the term AI governance?  What does that term bring to mind for you?

Secondly, we would like to look at the areas in which you see the greatest potential for AI to contribute to Sustainable Development.

The third question relates to whether you expect AI to have a rather positive or rather negative impact on Sustainable Development.

The last question looks at the risks that you personally associate with AI.  Please share your perspectives and views with us, and we will then try to have our speakers comment on them and pick them up in their presentations.

You have around 3 minutes or so to answer, but we'll come back to the results a bit later in our session, so we won't be presenting them now.  If you haven't been able to finish, or anything else still comes to mind, you can continue answering until later on.

Though you then may miss out on our interesting panel discussion.  I got a little whisper from my technical expert next to me, who is supporting us in the technical background along with another colleague, and she suggested moving on now to our opening statements; we will then hear from you a bit later on and present the results of your answers.

We will move on to the next round of experts.  I'll briefly introduce them and invite all five speakers to provide brief opening statements.  Some prepared a slide for their opening statement, which you will see on the screen; others haven't and will simply speak on the spot.

I would start with our speaker from India, Urvashi Aneja, founding Director of Digital Future Lab, a multidisciplinary research network based in Goa, India.  It examines the complex interactions between technology and society in the Global South, and her current work addresses the ethics and social impacts of artificial intelligence, big tech, platform governance and labour wellbeing in the digital gig economy.  Previously Urvashi Aneja was an associate professor of international affairs at Jindal Global University.  Urvashi Aneja, please, you have the floor.

>> URVASHI ANEJA: Thank you.  Thank you very much.  Thank you for inviting me.  It is a pleasure to share a stage with such eminent speakers.  I'm sorry I couldn't be there in person.

I think, I want to briefly make one point as my opening statement and then I'm happy to kind of come back to it during the Q&A and the discussion.

When we're talking about AI and Sustainable Development, and about the governance of AI so as to further the Sustainable Development Goals, our focus is often on the likely impacts of AI for specific sectors.  Right?  We try to think about what benefits AI can produce for healthcare, what benefits AI can bring in agriculture, what benefits AI could bring for water systems, and then how these interventions will help us achieve the SDGs.

I would like to suggest that this kind of framing, which we often see, is too narrow.  It risks us looking at AI in terms of products and in terms of siloed interventions.  If we're thinking about AI and the SDGs, we need to look beyond the specific interventions, look at the conditions of AI production and development as well, and think of the life cycle of the AI intervention.

What I mean by that, two points: we have to look at the question of how AI is being built, so the labour conditions, the impact on wages, and where in the world the labour is being sourced to produce AI.  A lot of it comes from the Global South, and a lot comes from workers in poor working conditions with low wages.

Equally, we need to look at the environmental impacts of AI: the resource extraction required to build AI as well as the energy consumption required to train AI and to run AI systems.  When we're thinking about AI for Sustainable Development, I would suggest that we expand our understanding of it away from specific products and specific interventions and take a systems view.  Only if the production of AI rests on good and equitable labour relations and is sustainable in itself can AI contribute to Sustainable Development; otherwise there is a risk that it will contribute to the achievement of specific Sustainable Development Goals while the conditions of its production and its use take us further away from the overall goal of Sustainable Development.

I'll leave it at that.  I'm happy to come back to this and expand on it further during subsequent rounds.

Thank you.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much, Urvashi Aneja, for sharing some of your insights already with us for now.  We look forward to hearing more from you a bit later.

I hear that the next person I was supposed to invite has not been able to join us yet.  Keep your fingers crossed.

I'll move on to the third speaker, the Executive Director for Digital Inclusion at the Secretariat for Communications and Transport of the Mexico City metropolitan area, Ledenika Machensie Mendez Gonzalez.  Previously she worked as an analyst for public information at the Mexican Institute for Social Security and is an expert in social policy, telecommunications and digital innovation.  Please, Ledenika Machensie Mendez Gonzalez, I hope you can hear us, and I give the floor to you.

>> LEDENIKA MACHENSIE MENDEZ GONZALEZ: Thank you.

Good afternoon, good morning, everyone.  It is an honor for me to be part of this initiative of the international digital dialogues and to share during this workshop.  I respectfully thank the gentlemen from the Federal Ministry and from the Ministry of Science, Technology and Innovation, I thank GIZ for the invitation, and greetings from Mexico City.

Well, there is no doubt that artificial intelligence is a way to innovate in the public sector, but at the same time it requires an increasing availability of data and evidence between governments and citizens.  Like any process with an impact on society, the implementation of artificial intelligence must be situated within frameworks and social rules to which it must respond.  This is to prevent the creation of new social and economic gaps or the widening of existing ones.  Therefore, the design of public policies needs to bring in relevant elements for consideration: transparency, social reliance on the use of data, and, I want to add here, the social license granted by the target population for the implementation of AI as a decision‑making tool or support system.

It is necessary for the population to be clear about the benefit they will obtain from the application of the tool, as well as for there to be transparency about data protection, the way the tool is used and how the various biases are mitigated.

As for algorithmic transparency, it works as an enabler of other values because it allows knowing what data is used, how it is obtained, how it is used and how it affects public policy decisions.

When a system is transparent it is possible to know that data is protected and that it is equitable.

It is worth mentioning that not all public problems can be solved with AI, which makes it necessary for the leaders and the data science, innovation and public policy teams to have the skills and knowledge to be able to analyze the feasibility of a project, taking into account biases and possible negative impacts.  The prediction biases of AI in certain models and the ethical dilemmas they raise must be addressed and consulted with the community.  It must be ensured that the design respects the rules of the existing regulatory frameworks, in keeping with democratic values.

I want to close with some figures from 2021 for Mexico.  For example, more than 57% of Mexican companies are exploring the adoption of AI.  Mexican professionals indicate that the pandemic has increased the focus on customer service for 47% of them, followed by marketing at 37% and process automation at 26%.  So that's it for me.

Thank you.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much, Ledenika Machensie Mendez Gonzalez.  We look forward to hearing more from you about the need to govern AI and also about transparency and the use of data.

Our next speaker is Kim Dressendoerfer from Germany.  Kim is a senior technical leader and an expert in data science and artificial intelligence, focused specifically on new solution designs which require highly complex AI algorithms.  She is a global AI Ambassador for SwissCognitive and the founder of a Women in AI initiative which showcases role models in AI.  Please come on stage.

>> KIM DRESSENDOERFER: Thank you.  Thank you so much for having me.  I'm very, very honored to be a part of this and to be alongside all of these amazing people.

Thank you for the quick introduction.  As mentioned, I'm coming from the other side, being behind the code and building many things, so it is always quite interesting to see the two perspectives.  You have to imagine, I sit in meetings talking about the issues clients are going through, what they're facing, the fears they come with, and one of my jobs is to take away that fear and explain what we actually do and what is behind that black box of AI; that is also our job in the end as developers.

I was thinking a lot about my opening statement, where I want to go with it, what message I want to send.

I think the overall point, and it is so important, is that as technology evolves so quickly, those of us who are creating AI have the responsibility to open up that box for everyone who is using it.  Since most humans on the planet nowadays are using AI, they have to know how we're using it and what exactly is happening.

With this comes a big responsibility, because a lot of the data we are seeing, a lot of the things we're opening up, a lot of the data I see from clients, often carries a lot of bias and is not as clean as we would want it to be.  Especially if you talk about sustainability, developing AI comes with a lot of responsibility.

What I also want to say is that AI governance and sustainability are not just about compliance and regulation; they are also an opportunity to innovate, make a difference and grow something.  That's why I chose my slide here, to showcase what we're able to do, how we can change the world along a different path, and how we can create new ideas and solutions to make our day‑to‑day life easier and create a new path for the future.

I absolutely agree with the statements of the previous panelists: there are multiple ways in which we need to talk about AI governance.  It is not only regulation from governments and the laws; it is also something that needs to come down to every developer and everyone building AI, because we have the responsibility and we see the data.  Similar to other professions, we have an ethical code we need to fulfill, and we need to make sure we act in the most ethical way in dealing with the data and building the algorithms.

Thank you.

>> ONIKE SHORUNKEH-SAWYERR: (No audio).

>> KIM DRESSENDOERFER: I can't hear anyone speaking.

If someone is interested in the solution: we're using AI vision to help detect things in farming, to help the workers and make it more efficient.  In the end we want to help the animals, to make sure that they're healthy.  Obviously we're also helping a lot with emissions; we're working hard on that, and our goal is by 2025 to be at 85% sustainable energy.  But that's probably another topic.

They're still not back.

>> (Technical difficulties in room).

>> Maybe we just can't hear them, maybe they're talking?  Yeah.  Yeah.

There's a whole lot of things going on.

>> KIM DRESSENDOERFER: I guess our mics work, not the ones in the room.

>> LEDENIKA MACHENSIE MENDEZ GONZALEZ: What is the slide with the pigs and the machine learning scales?  What's that one?

>> KIM DRESSENDOERFER: Basically, what you can do is measure whether the pig is in perfect health: whether its weight is proper, whether the pig is injured, and whether it needs more food or more water, all of these kinds of topics.  If you look at the farms, there are a lot of animals and barely any workers who have the time to spend looking at and taking care of the animals.  They just don't have the time.
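For readers curious how such a monitoring solution fits together, here is a minimal, illustrative sketch of the decision logic that might sit on top of the vision models Kim describes.  The thresholds, field names and model outputs are assumptions made for the example, not details of the actual IBM solution.

    from dataclasses import dataclass

    # Illustrative thresholds only; a real deployment would calibrate these per farm and breed.
    MIN_HEALTHY_WEIGHT_KG = 90.0
    INJURY_PROB_ALERT = 0.7

    @dataclass
    class PigObservation:
        pig_id: str
        estimated_weight_kg: float   # output of a weight-estimation model run on camera frames
        injury_probability: float    # output of an image classifier trained on labelled injury examples

    def needs_attention(obs: PigObservation) -> list[str]:
        """Return human-readable reasons a worker should check on this animal."""
        reasons = []
        if obs.estimated_weight_kg < MIN_HEALTHY_WEIGHT_KG:
            reasons.append(f"underweight ({obs.estimated_weight_kg:.1f} kg)")
        if obs.injury_probability >= INJURY_PROB_ALERT:
            reasons.append(f"possible injury (p={obs.injury_probability:.2f})")
        return reasons

    # Flag animals so the few available workers can prioritise their time.
    herd = [PigObservation("A17", 82.4, 0.12), PigObservation("B03", 104.0, 0.81)]
    for obs in herd:
        flags = needs_attention(obs)
        if flags:
            print(obs.pig_id, "->", ", ".join(flags))

The point of the sketch is simply that the models watch continuously and only surface the animals a worker should prioritise.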

>> ONIKE SHORUNKEH-SAWYERR: Now it is working.

Apologies; as you could all see, this is again one of those examples of when technology is sometimes not efficient.  I hope that the mic will continue to work; it was breaking up a bit.  It is also nice to see that our panelists just used that silence (no audio).

>> ONIKE SHORUNKEH-SAWYERR: Now it is back on.  Yeah.  Anyway, I was about to introduce the next speaker on the panel.

Thank you, Kim, I was interrupted when I went to thank you for the first opening statement.

Now on to our next speaker, Jose Gontijo from Brazil, whom you already met.  I'm not going to reintroduce him but will directly hand over to him for his opening statement.

Please, the floor is yours.

>> JOSE GONTIJO: Thank you, Chair.

Well, here in Brazil, looking at AI governance directly, we established the national strategy, and I really agree with all of the other speakers that it is challenging: how the government has to act, how others interact, how we protect privacy, and so on.  What we did here in Brazil is establish a Committee where we bring in industry, academia, Civil Society and many levels of government representatives, and everyone sits together at a single table, establishes study groups and decides on an Action Plan to try to strike a balance between innovation and the good usage and development of AI.

In the Congress, a legal framework is moving forward, with discussion of how to regulate, of transparency, of how to manage that, and there is a Commission discussing it.  On the other hand, we have a technical group in the national strategy for AI evaluating all of this to see how it can be implemented in a good way once the regulation on AI is in place.

Looking at the Sustainable Development potential, for sure you have environmental prediction, disaster prediction, better efficiency in agribusiness.  For instance, in Brazil we have a huge underground water reservoir with a lot of agribusiness above it.  How do we manage that to avoid contamination of the water, and likewise contamination of the air?  AI together with high‑performance computing can for sure help a lot there.

On the other hand, there is AI applied to public safety and to many other things that at the end of the day really impact the SDGs.  As for the risks of AI: it is such a great technology that we think about many potentials, many possibilities to improve the world, the quality of life, the economy, to reduce bureaucracy in government.  But we really have to take care, because the countries that have the technology implemented will grow very fast, and the countries that don't have it will be left without it.

So the difference between developing and developed countries will probably increase.  Those of us looking at this in international governance, the IGF, UNESCO, the OECD, have to think of how to minimize this gap when AI applications really start to be implemented.  We have the diplomacy initiatives that are happening and the discussion between countries on how to use technology and make it affordable and available to all countries, all administrations and all societies in the world to reduce this gap.  AI policymaking is challenging because of that.  Of course we want our country to get involved in everything, but we have to move forward without leaving anyone behind.  This is the main challenge for us in public policy and for you who are discussing this in a multistakeholder forum: how to organize the requirements for AI implementation.  Of course we want to develop and innovate and so on, but we really have to think also about this gap and about how AI will be useful for everyone, not just for a few.

Thank you.

>> ONIKE SHORUNKEH-SAWYERR: 

(No audio).

>> JOSE GONTIJO: You are still on mute online.  Sorry.

(No audio).

>> JOSE GONTIJO: I'm not hearing.

(IGF room is muted on Zoom).

(No audio).

(Echoing).

>> JOSE GONTIJO: We hear you somewhat distorted.  There is an echo.

(No audio).

>> JOSE GONTIJO: We hear you, but very unclear.  It is not ‑‑ we cannot understand what you say.

>> ONIKE SHORUNKEH-SAWYERR: You can't really understand what I say?  Electricity.

>> JOSE GONTIJO: Now it is good.  (Poor audio quality).

(No audio).

>> LEDENIKA MACHENSIE MENDEZ GONZALEZ: We cannot hear you.

>> ONIKE SHORUNKEH-SAWYERR: Hopefully our session ‑‑ hopefully ‑‑ yes.  Sorry I also look at the screen.

>> JOSE GONTIJO: It is unstable.  Sometimes on, sometimes off.  It is ‑‑ not any more.  Not anymore.  (No audio).

(No audio).

>> KIM DRESSENDOERFER: We can't hear again.

>> JOSE GONTIJO: Maybe if you join through your notebook, we can hear through your notebook.

(Technical issue being addressed).

>> ONIKE SHORUNKEH-SAWYERR: Is it working now?  It came up.  That was very quick.  Thank you, Jose.  Let's try again.

Now a round of questions to all of you.  I'll try not to say too much around it, so as not to risk that we lose you again, or you lose me!

I would like to start with Ledenika Machensie Mendez Gonzalez, so a question for you: what are the main components of a governance system for digital technologies, and what is the role of government in fostering digital technologies, for example AI, for Sustainable Development?

Ledenika Machensie Mendez Gonzalez, please, you have the floor.

>> LEDENIKA MACHENSIE MENDEZ GONZALEZ: Thank you.

So, as I already mentioned, AI takes information about people to better interact with state services and to personalize public services, for example in distributing resources.

AI can make more accurate predictions, such as of future needs in certain areas of services, or of a context that may change due to a change in policy or an external factor.  The use of models for testing possible policy options and interventions to be implemented could also lead to early identification of unintended consequences.

So, through data science, digital intelligence is built from personal data.  The main elements to be considered by governments and other parties are to ensure that the design of public policy takes a Human Rights approach, as well as transversality and intersectionality for the inclusion of vulnerable social groups.

AI can contribute to Sustainable Development.

In addition, professional expertise among those who decide public policies, knowledge such as digital literacy, risk management, networks, data management, et cetera, is a relevant element for the policymaking process today.

In other words, to sum up my argument: the use of artificial intelligence comes with certain moral and ethical dilemmas that must be addressed with the community.  I think that the Human Rights approach closes the gap between social groups, and this must exist within the regulatory frameworks.  For the time being, that is all from me.

Thank you very much.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much, Ledenika Machensie Mendez Gonzalez, for sharing your views on what the role of government should be in fostering digital technologies, especially with respect to Human Rights.

I will now jump the order a bit.  Thank you so much, Jose Gontijo, for staying with us; we know you have to leave in a little while, so I'll now ask you to come in with your response to the question we have for you.

As Brazil's representative in the Global Partnership on AI and a Steering Committee member, what role do you attribute to multilateral initiatives and fora such as GPAI and the IGF in coordinating governance approaches?  What are the strengths and limitations, do you think?  Thank you so much.

>> JOSE GONTIJO: Thank you.  It is challenging.  When you think of the multistakeholder approach, and I am also a coordinator of the Brazilian Internet Steering Committee, CGI, it is challenging when everybody is at the same table.  But at the end of the day, when you reach consensus, after all the debate and all of the things that we know happen in this kind of forum, you have something very strong.

If you have the desire for something to be built, we can really find a good path.  Looking at CGI in Brazil as an example, at the IGF and at all of those fora around the Internet, we copied this into policy in Brazil: in the AI strategy, in the national digital transformation strategy and in the national Internet of Things plan, in all of the policies that we have implemented, we applied this multistakeholder approach.

What happens is, and I give the national Internet of Things plan as an example: it started in 2012, and from then until today we have had six ministers, three Presidents, an impeachment in the middle, moving from a left‑wing to a right‑wing party, and the policy keeps going because we have all of these multistakeholders together.  We find a path; no matter what happens in politics, industry, academia and Civil Society get together, they push it, and whatever the developments, they follow it through.

So it is challenging, because it takes a lot of time to manage this and to find this common consensus, but when we have that, no one can stop it.  Looking at AI and the Sustainable Development Goals, I guess we can move in the right direction.  It is hard, sometimes you will have conflicts; everyone has to have patience and the goodwill to find consensus.

I do believe that the multistakeholder approach is the best choice, and it is especially important for AI because, as I said before, the distance between countries will increase if we don't have this willingness to do it.

Thank you.

>> ONIKE SHORUNKEH-SAWYERR: Thank you for the response, and also for your patience with the challenges we're having today.

Thank you for your presence and also for underlining again the need for consensus in a multistakeholder approach to governing AI.

I would like to invite Kim to share her response to a question.  The question is from your perspective, Kim, what is the role of the private sector when it comes to AI for Sustainable Development?

>> KIM DRESSENDOERFER: Neutral in a one sentence answer.

I think every company has a duty to work on these developments and on AI.  Obviously I'm a little bit biased with that view, coming from IBM, making sure that companies can use AI, and also having the developer's perspective of making sure that I build sustainably and ethically.  From this point of view, it is important to have those ethical conversations with the team to make sure everyone is aware of what they're doing.

Every time I have an ethical discussion, I explain it like this: a doctor has ethical discussions, has an ethical forum to make sure that they are okay from an ethical standpoint.  I think something like that should be in place for everyone in the private sector.

I'm lucky I have that at IBM, but it is not a standard, and that is something quite shocking.  Looking at other companies and the companies we're working with and helping, I think what is important in making the governance around AI work is that we look at the AI life cycle.  It is not just one person building it; if you look at the bigger picture, you have the business owner, you have the data scientist who keeps moving it forward, you have the AI operations manager.  There are so many people involved in one algorithm that it is hard to grab one specific role and say that is the one that is most important.  One rule doesn't fit all.  We have so many different sectors, health, insurance, automotive; we have so many different areas where AI is in place, and all of the different areas have different kinds of AI and different kinds of models behind them.

It is hard to just grab one model.  What we have is something we call the fact sheet.  We go ahead and say: okay, what does the business owner actually know about the facts and the purpose of the product he wants and its governance?  The data scientist, what facts about data transformation, features and performance does he have?  Let's put those in place as well.

The model evaluator knows about fairness, privacy, functionality, transparency, verification, all of that kind of thing; those are facts that he knows.  The AI operations engineer knows about drift, continued learning, all of that.  That's a lot of things coming together!  What IBM does is the fact sheet: you put all of this data into it, all of the information, all of the facts, you collect them and make sure that whatever we use again is sustainable, governed and can be reused.  That's the thing.  We talk about transparency, about opening the black box, but if you don't even know the facts about your algorithm, where do we go?
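As a rough illustration of the fact sheet idea, and not the actual IBM FactSheets implementation, the sketch below treats a fact sheet as a structured record that each role along the life cycle fills in; all field names are assumptions made for the example.

    from dataclasses import dataclass, field

    # Illustrative only: a simplified "fact sheet" record that each role along the
    # AI life cycle fills in, so the facts about a model travel with it.
    @dataclass
    class ModelFactSheet:
        model_name: str
        # Business owner: purpose and intended use of the model.
        business_purpose: str = ""
        # Data scientist: data sources, transformations, features, measured performance.
        data_sources: list[str] = field(default_factory=list)
        features: list[str] = field(default_factory=list)
        performance: dict[str, float] = field(default_factory=dict)
        # Model evaluator: fairness, privacy and transparency checks that were run.
        evaluation_checks: dict[str, str] = field(default_factory=dict)
        # AI operations engineer: monitoring facts such as drift observed in production.
        monitoring_notes: list[str] = field(default_factory=list)

        def is_complete(self) -> bool:
            """A deployment gate could require every pre-deployment section to be filled in."""
            return all([self.business_purpose, self.data_sources,
                        self.performance, self.evaluation_checks])

    sheet = ModelFactSheet(model_name="loan_default_classifier",
                           business_purpose="prioritise manual review of loan applications")
    sheet.data_sources.append("2019-2021 anonymised application records")
    sheet.performance["AUC"] = 0.87
    sheet.evaluation_checks["fairness"] = "disparate impact ratio 0.91 across age groups"
    print("ready to deploy:", sheet.is_complete())

A gate like is_complete() is one way such collected facts can be turned into a governance check rather than just documentation.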

>> ONIKE SHORUNKEH-SAWYERR: Okay.  Thank you so much, Kim.

Also speaking about the ethical aspects of AI and the need for rules, I'm sure that people have questions and I'm looking forward to the next input.

I think what you elaborated on relates well to the question that I now have for Urvashi Aneja.  You have studied the intersection of technology and society; what are, in your experience, the challenges for a sustainable use of digital technologies such as AI?

>> URVASHI ANEJA: No small question.  No.  That's a huge, huge question.

Sorry for the noise in my background.

I think there is multiple challenges.  Some of which have been spelled out by the previous speakers.

I mean, we maybe have to try to think of a kind of framework to slice the challenges: we have a range of political, economic, social and environmental challenges, if one wants to use a framework like that.

I think one of the characteristic challenges is that state actors have a weak understanding of how this works, and they're relying on private sector players to figure out what AI can do, where it solves a problem and where it cannot, et cetera.  But the private sector players are looking at it from a commercial perspective, and the lack of capacity within state actors, particularly in the Global South, makes it harder for them to regulate the space and develop the institutions and structures that we need to be able to regulate it.  Even in the European Union, for example, enforcement is an issue; transplant that to newly democratized nations and countries in the Global South and it is an even bigger issue.

We have seen issues of data bias and what they mean in terms of discriminatory and exclusionary outcomes, and bias is not something that we can do away with.  We can clean as much data as we want, but that bias is reflective of the bias in us as individuals and a necessary part of how things are.

It is important, from the social perspective, rather than trying to eliminate that bias, to be very upfront about where discrimination is likely to happen.  It will happen; exclusion is likely to happen.  We should design mitigation measures and build the capacity of end users to be able to manage that, rather than assuming that we can get to some kind of perfect end point where the challenges around bias and discrimination can simply be eliminated.  The same goes for questions around privacy: it is not so easy to do away with the privacy problems in AI systems; instead it is more important to think about the systems that we need to develop to address them.

That requires, you know, recognizing that these are imperfect solutions; there is not a simple, easy, clean answer to how to address the problems.
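To make concrete the idea of being upfront about where discrimination is likely to happen, rather than assuming bias can be eliminated, here is a minimal sketch of a disparate‑impact check that flags groups for mitigation and human review.  The 0.8 threshold and the group labels are illustrative assumptions, not a method proposed by the speaker.

    # Minimal sketch: report selection rates per group and flag where the
    # disparate impact ratio falls below an (illustrative) 0.8 threshold,
    # so mitigation and human review can be designed around those cases.
    from collections import defaultdict

    def selection_rates(records):
        """records: iterable of (group, selected: bool) pairs."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, is_selected in records:
            totals[group] += 1
            selected[group] += int(is_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def flag_disparate_impact(records, threshold=0.8):
        rates = selection_rates(records)
        best = max(rates.values())
        # Groups whose selection rate is well below the most favoured group get flagged.
        return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    print(flag_disparate_impact(decisions))  # flags group_b with a ratio of 0.5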

The labour issue is something that I think doesn't get enough attention in conversations around AI governance.  Much of the early work around machine learning, and Kim will correct me if I'm wrong, please do, but from my understanding, even the big ImageNet dataset which informed computer vision and other machine learning algorithms, the labeling of that dataset was done by low‑wage workers in the Global South, and that kind of trend continues: the self‑driving car that is in Germany, the training for it is being done somewhere in Kenya, and so on and so forth.  There is a labour dimension to this which spans across geographies that we need to think about, even beyond the more typical conversations we hear about labour, about job displacement, disruption, et cetera.

The fourth impact, the ethical conversation we need to think about, is really the environmental impact of these technologies.  That's something I mentioned in my opening remarks as well.  I mean, the cloud is already one of the largest emitters, and, you know, it is absolutely fantastic to hear about IBM committing to building green technology in the years to come.  This continues to be a challenge for developing countries, and until we get that AI and climate configuration right, we will have a number of new ethical dilemmas to address as we go forward.
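As a rough illustration of how the energy and emissions footprint of training is often estimated (no figures here come from the speakers), one common back‑of‑the‑envelope approach multiplies hardware power draw by training time, a data‑centre overhead factor (PUE) and the local grid's carbon intensity.  All numbers below are placeholders.

    # Illustrative back-of-the-envelope estimate of training emissions:
    # energy (kWh) = GPU count * average power per GPU (kW) * hours * PUE,
    # emissions (kg CO2e) = energy * grid carbon intensity (kg CO2e per kWh).
    def training_emissions_kg(gpus: int, avg_gpu_kw: float, hours: float,
                              pue: float = 1.5, grid_kg_per_kwh: float = 0.4) -> float:
        energy_kwh = gpus * avg_gpu_kw * hours * pue
        return energy_kwh * grid_kg_per_kwh

    # Placeholder run: 64 GPUs at 0.3 kW each for two weeks of training.
    print(round(training_emissions_kg(gpus=64, avg_gpu_kw=0.3, hours=336), 1), "kg CO2e")

With these placeholder inputs the estimate comes out to roughly 3.9 tonnes of CO2e, which is only meant to show how quickly such figures scale with hardware and training time.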

>> ONIKE SHORUNKEH-SAWYERR: Electricity.  Thank you so much.

Yeah.  With that, that was our first round of questions.  I would like you all to give a great round of live and virtual applause for our speakers today for sharing their insights and views.  We would like to go deeper, but before we do that, we want to hear more about how you actually responded to our questions and what your views are on the role and potential of digital technologies and AI for development.

Let's wait for the screen to change.

Okay.  I'm going to ‑‑ I'll try ‑‑ perfect, enlarging the font, as it is hard to read from here.  One of the clear responses to the question of what you associate with the term AI governance, in our word cloud, is very clearly the issue of standards, the need for standards and regulation, and then also the issue of cybersecurity, cybersecurity threats, generally the question of security around AI, and also fear is what I can read from here.  I'm not able to read many more of the terms you shared with us.

It is clearly clustered around those terms in the centre.  I would like the speakers, when they come in for the second round of questions, to pick up on that and let us know whether the responses surprised them or were expected, whether they are what they usually hear, and whether they agree or disagree.

If we now move on to the second question, about the areas in which you as the audience and participants see the greatest potential for AI with regard to Sustainable Development, there is a relatively clear tendency: the greatest potential for AI to contribute to Sustainable Development is seen in the area of global productivity and economic growth.  There is also a very clear tendency that the other two areas were not rated very high, including social equality and inclusion; apparently you do not seem to believe that this is one of the greatest potentials, which doesn't mean it is not one at all.

The last one is climate action and environmental protection.  It is mentioned and rated second, but the result points very clearly in the direction of productivity and economic growth.  Speakers, please keep that in mind when you respond to the questions that follow.

The third question related to whether you believe that AI has a rather positive or rather negative impact on Sustainable Development overall.  Here there is a very clear tendency: after all that was said at the beginning about cybersecurity, threats and fear, you do believe that it has a positive potential and that the rather negative impacts are not so high.  That's a good thing, I believe, and also informative for the discussion to follow.

Last but not least, the question of which risks you associate with AI.  So we do see here digital war, surveillance, misuse and the spreading of disinformation.

Then the social impacts it can have, although it doesn't say which ones specifically, and also Human Rights violations and threats to employment and work.

Joblessness.

I think we'll leave it at this for now.  As I said, this should now be something that our panelists hopefully have answers to.  First of all, thank you all for participating in this survey and sharing your views.  Now I would like to open a round of responses from our speakers: what are your initial reactions to the survey responses?  As I said earlier, is there anything that you found surprising, or something that you often hear, in the responses that we just read here on the screen, and would you generally agree or disagree?  Of course, a lot of things have been said; I will leave it to you to decide which aspects you want to focus on.

Any initial reactions?

>> KIM DRESSENDOERFER: Maybe I can say something.  I'm extremely happy that everybody is excited about AI and there is not a lot of negative stuff or expectations.

The positive is overwhelmingly higher than the negative, which always makes me happy because, obviously, I love AI, I love so much about it.  When it comes to AI and the workforce, that is a whole different topic where people are scared.  I think the overall view when we talk about AI probably also carries some stigma coming from movies, like a Terminator walking around as a super AI; it always makes it hard and attaches a stereotype to specific AI solutions, so it makes me extremely happy that people are gaining more trust in it and seeing the beneficial solutions that we're building with it.  That's one thing I'm happy about.

I get the fear of people being afraid about the workforce and about joblessness, losing jobs.  What I always try to say is that we're not even there yet; AI is not strong enough for people to be losing their jobs to it.  The thing is, AI is there to make your day actually easier.  That's what I always try to say and try to promote.  In the end, what we can do with AI is cancel out the repetitive tasks that we spend a lot of time on.

I, for example, have some things installed on my laptop that make my life easier, even automation; I have something to organize my whole knowledge base, and it helps as much as it can to make my work life easier.  This is one of the great things that we can build with AI.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much, Kim.

I'm not sure if there are any other initial reactions at this point in time?  I don't see any hands raised immediately.  Yeah.

>> URVASHI ANEJA: I could go.

I could respond to the survey and to what Kim was saying.

I think it is interesting that the survey participants think that AI will contribute to economic growth and development; I think that was the question.

That's possibly true.  But we still have to establish a link between that economic growth and Sustainable Development.  Right?  That link still needs to be established.  We might have more economic growth, but that does not necessarily mean Sustainable Development.

Also, I think the issue with thinking about AI in terms of benefits and harms in this manner is that we're not asking for whom, and how those benefits and harms are distributed.  You know, what we have seen over the past many years is growing levels of income inequality; we have seen a few geographies and select companies benefit tremendously from technological innovation.  We have yet to develop systems that can adequately distribute the gains, the technology gains, to people in other parts of the world.

I think that's what we need to recognize, right: these things are possible at the same time.  It is possible for AI to bring these kinds of productivity gains and efficiency gains and save time, et cetera, while there is a whole other part of the world, actually the majority of the world, that does not have Internet access, that in some sense comprises the supply chain of AI production and is not paid adequately for actually contributing to building the AI systems, while the value concentration and value extraction is happening in a select few places.

The issue that I have is with many of the promises of AI ‑‑ I mean, I think Kim was spot‑on to say we can't make these generalizations about it.  Right.

AI is a huge catch‑all phrase, and it means many, many different things.  We're seeing a lot of really beneficial applications in B2B settings and enterprise systems.  The impact for Sustainable Development, I think, is still a question mark.  It is a hypothesis that still needs to be proven, and it hasn't been proven yet.

One example I would like to give: when we talk about AI and Sustainable Development, we often talk about the promise, the future promise, right, which we have yet to see.  Some of the harms are already evident.  Forget about the Global South, think of the U.K., somewhere like London.  You have a delivery app called, I think, Deliveroo, and it is homeless people who are working on Deliveroo so that we can get food 10, 15 minutes faster.  That gig work that is happening, that algorithmic management by the platforms, is the real everyday manifestation of AI systems.  The harms are already in our face, they're already prevalent.  Some of the promises with regard to Sustainable Development, and I don't mean broadly, I agree with what Kim said in terms of improving individual efficiency and business efficiency, but in terms of the hypothesis, the link to Sustainable Development is not yet proven, while the harms are very real.

I do think we need a little bit more skepticism.  I think we need to think a lot more strongly about the governance frameworks that are required to steer and govern AI in a way that those development outcomes and that development potential, which is real, can actually be realized.  Current trajectories, I don't think, bode well for realizing this in an equitable fashion around the world.

>> KIM DRESSENDOERFER: Can I step in there real quick?

I mean, everything you said is absolutely true.  It is more that I want to come back to you with a question; I hope that's fine between panelists.  Like you mentioned, there are a lot of places in the world where we still don't have Internet, right; I don't think that is just an AI problem we have here.  AI just sits on top of it, you know, just like the ‑‑ (Zoom freeze).

‑‑ there are enough examples; Google has a whole factory of workers typing in queries every day so the engine works faster for us in case we use that query, all of that stuff.  We have so many labour workers; it is a huge topic when we talk about this kind of thing.

I don't think AI is yet the problem there.  A great thing about AI is that, as soon as you have the Internet question solved, you can obviously build it yourself.

The advances in how you do AI, you can use them yourself.  There is not a huge barrier; you don't need money to build AI, all of the tools are free and open source, and you can teach yourself on YouTube how to code, how to get to a solution and how to make a product out of it.

Obviously, I get the negativity, but on the other side you can also see the potential we have in teaching yourself the skills, making your own business with it and making a profit out of it.  I think that's something quite different from what we have in other kinds of professions; it is a huge new spectrum that is opening up.

>> ONIKE SHORUNKEH-SAWYERR: Maybe briefly: thank you so much for interacting so lively with each other, I think that's beautiful.  Many of the topics raised by our audience have also already been raised in your contributions.  I'm just wondering, do you want to briefly respond to that?  And Ledenika Machensie Mendez Gonzalez, if you want to, you can come in, because after this, let me just announce it now, I would also like to introduce our fifth speaker, who had a bit of technical difficulty joining us and is now here with us.  We also want to save time for Davis to join us.  Yeah.

>> URVASHI ANEJA: I don't think we're arguing or at odds.  I think we're just approaching it from different starting points.  The potential is there; my comments are more to caution us that it is not a silver bullet.  Similarly with the fashion industry, right: the fashion industry is creating jobs for many millions in Bangladesh, but they're pretty crap jobs.  Right.

Is that the kind of future that we want for work?  Probably not.  Is that the pathway to get there?  That's a value question.  Right.  So I think it is more: let's learn from those industries, let's learn from those mistakes so that we can realize this potential, leverage that potential and actually harness it for societal good, and realize what you are talking about.

>> ONIKE SHORUNKEH-SAWYERR: Thank you.

I see a thumbs up from Kim.

It seems like she's agreeing.

I have to admit, Ledenika Machensie Mendez Gonzalez, if you want to say something, can you raise your hand physically?  It is hard for me to actually see the screen from here.  If not, that's also okay; you can maybe come in a bit later.

>> LEDENIKA MACHENSIE MENDEZ GONZALEZ: Can you hear me?  Okay.

Quickly: well, I'm happy with the responses from the audience.  For example, cybersecurity is a relevant element of artificial intelligence, and on the other side it has social impacts.

For example, AI can be an enabler, but on the other hand I can see the gaps between those of us who use technology and those who do not, because it is not only about services, about what movie I can watch or what food I can eat, no; it is also about access to services across social and economic classes.  Equally, I consider that AI is another way to democratize access to public services.  I believe that we must take care with the data, the services and the predictions, and for that I believe that protecting personal data is the most important path for AI.

That's all.  Thank you.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much for also sharing your perspectives now in the discussion.

Now I hope that Davis from Kenya is with us.

Davis Adieno.

>> DAVIS ADIENO: I don't yet see anything on the screen.

>> ONIKE SHORUNKEH-SAWYERR: We can hear you!  We can hear you!

Give me a brief second to introduce you and I'll hand over to you for your opening remarks.

Perfect.  Davis Adieno, from Kenya, is the Director of Programmes for the Global Partnership for Sustainable Development Data, based in Nairobi, Kenya.  Previously he worked as the Global Partnership's Regional Director for Africa and at the CIVICUS World Alliance as a senior advisor on data, accountability and Sustainable Development.  He also worked for Development Initiatives as a senior manager for data use.  I will hand over the floor to Davis in just one second.  Davis, since you unfortunately missed part of the discussion, but we had a question prepared for you, we would like you to hopefully freestyle a bit and come in directly with your response to that question.  It relates to the role of global Civil Society when it comes to AI for Sustainable Development: what do you think is the role of global Civil Society?  Thank you.  The floor is yours.

>> DAVIS ADIENO: Thank you so much for the opportunity, and apologies; the Internet had technical challenges on my laptop, so now I'm joining on my mobile phone.  These are some of the, you know, challenges that come with technology, sometimes when you most need it!

It is a pleasure to be here.  The only problem is that I missed everything else, the wonderful panel that has been prepared.  I am coming from the perspective of non‑state actors.  I work for the Global Partnership for Sustainable Development Data, and non‑state actors in Civil Society are one category of partners that we work with; we also work with governments.  Yesterday I was in a validation workshop for artificial intelligence guidelines that we're developing for practitioners here in Kenya, with the AI community actively participating not only in the process itself, but having conversations on what is relevant for them as practitioners, and also looking at specific aspects related to the bare minimum, and I'm bringing more ‑‑ (Zoom freeze).

‑‑ when talking about the future, and I heard others mention this, the benefit is now, for those of us who are living, for those of us who are experiencing life; the future is now.

So, you know, these technologies are mostly in the hands of the private sector.

It is a great opportunity to, you know, connect with those who are still struggling with this concept and, most importantly, with how to reach the specific objectives around social good.

Do you want me to just continue and respond to the question itself?

>> ONIKE SHORUNKEH-SAWYERR: Yes, sure.  You can just continue.  The mic is yours.

>> DAVIS ADIENO: Fantastic.

There are two things to think about when it comes to Civil Society and AI.  One is the day‑to‑day living, livelihoods, the experience of life, the experience of normalcy.  When we talk about AI and the propositions around AI, we're talking about this abstract world where we are into concepts, you know, opportunities, technology infrastructure, whereas Civil Society and other non‑state actors engage directly with communities.

They experience challenges firsthand.  On the development side there is a tendency to generalize, at times when resources are limited or when it comes to a rights‑first approach; Civil Society, interacting with people on a daily basis, sees the challenges when it comes to poverty, lack of education, lack of other things.  You also see the evolution of society in terms of the changes that people experience, you know, on a day‑to‑day basis.

Having that lived experience is of critical importance; it is critical, in my opinion, when it comes to AI.  When we talk about training datasets, when we talk about training algorithms to actually reflect societal norms and practices and to respond to them, because what we know of AI right now is not necessarily relevant just because you Googled a particular keyword, bringing Civil Society into the conversation makes sure that you capture those other experiences and can respond more effectively to the needs of ordinary people out there.

I think that's one component that is lacking in the conversation.  We're looking at training datasets, looking at the process, ensuring that there are certain levels of standards when it comes to AI.

The private sector, on the other side, knows exactly what its strategic objectives are and challenges its algorithms to meet them.  In a similar fashion, we have to think about responsible AI for social good, AI that is targeted at the needs and the daily lived experiences of people across the world.  That's the only way to get ordinary people to see value in this emerging technology, to see the benefits and also, at the same time, to address the potential harms that may come as a result of these technologies.

Thank you.

>> ONIKE SHORUNKEH-SAWYERR: Thank you for the opening statement and answering the question.

We're glad you were able to join us.

I guess we're a bit behind schedule, but we still have time for questions.  I would like to first check in the room.  Is there anyone with questions?  I do see a hand up already.  I see another hand here.

Please, I would like to ask you here in the back.

>> AUDIENCE: Thank you for the session.  It is exciting that these topics are being raised.  I come from India, the Global South.

I think fundamentally I always go back to the question of data, especially in advancing Sustainable Development.  Currently the data is to a large extent skewed, and the development of the applications, the development of AI in general, is not representative.

So I think we should loop back one step:  let's start discussing the sustainability question once we have a good baseline, or at least larger participation of datasets from the Global South, and then these conversations will actually gain much wider acceptance of the sustainability criteria to begin with.

You know, I'll leave it at that.  It is just a comment.  No remarks required.

>> ONIKE SHORUNKEH-SAWYERR: Thank you for the comment.  Maybe we don't have full time for a round of responses from our speakers, but I hope that in their final statements they can pick up on any aspect raised by the audience that they would want to address.

I saw another hand raised earlier.  Please comment.

>> AUDIENCE: Thank you very much again.  I'm sure everybody knows me a little bit.  I was born here in Ethiopia, and I'm from Australia and New Zealand.  Currently I'm researching how to move from physical currency to digital currency, or central bank digital currency, with digital banking as part of the services.

Currently, as we all know, on one side, technology innovations such as blockchain, AI, NFC, NFTs and quantum computing are generating Big Data that is difficult to manage, because there is a lack of effective and efficient data governance and management mechanisms or tools.

This comes with increased unemployment.

For example, in Australia, almost all shopping centres and big businesses are using self‑checkout machines where employees used to work.  The government is not taking any action and unemployment is increasing.

On the other hand, the technology creates tools such as IoTs and EOTs with AI performing tasks faster than human operators, processing transactions in the banking sector and in the shopping centres, and selling ranges of community products.

My question is:  what are the future plans to accommodate these two different aspects or outcomes of AI leverage and global AI governance for Sustainable Development?  Thank you very much.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much for your question.

Could you maybe switch off your mic?  I think it is still on ‑‑ I'm not sure if it creates an echo.  Thank you very much.

Back to the speakers then.  There was a comment related more to the overall topic of data, and then, yeah, the question that was just posed, which I think actually very much summarizes, at least from my point of view, the discussion we just had on the two different sides of the coin.  I don't know if any of our speakers have an immediate reaction to the questions and comments from the audience.

Please just come in.

>> URVASHI ANEJA: I would like to respond to the data point.

I just want to say, I could not agree more, and we do need to kind of have that data in place before we start thinking about Sustainable Development.

If I look at the Indian context, I really see India actually in a process of digitalization; we're not really at that point of AI yet, we're still digitalizing, collecting data.  You know, to kind of end on a positive note as well, I think that's a real opportunity:  an opportunity to build curated datasets, to build datasets that address specific problems and are representative, where the problems being solved are defined by communities, et cetera.  Because we're at the digitization stage and because we don't have that data yet, we can turn this into an opportunity to actually build the datasets that we need to advance Sustainable Development.

>> ONIKE SHORUNKEH-SAWYERR: Thank you.

Kim, Ladenika Mackensie Mendez Gonzalez, Davis, any responses?  Anything you would like to share?

>> KIM DRESSENDOERFER: I cannot agree more.  If you look at the data, if you look at AI, it is just ‑‑ it's basically prediction from the data of the past, used to make a prediction of how the future data may look.

Everything regarding data, it is the most important thing, and it comes down to quality above quantity.  If I always have more data, it doesn't mean it is better data, because at the end we have to teach the system to understand the data.
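
To make the quality‑above‑quantity point concrete, here is a minimal sketch, assuming Python with NumPy and scikit‑learn; the toy relationship y = 3x + 2, the dataset sizes and the corruption rate are illustrative assumptions and not anything presented in the session.  It shows that a small, clean training set can beat a much larger but systematically mislabelled one:

```python
# Minimal sketch: "quality above quantity" for training data.
# A small, clean dataset vs. a larger dataset whose labels are partly
# corrupted by a systematic offset. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def make_clean(n):
    """Points from the assumed true relationship y = 3x + 2, plus small noise."""
    x = rng.uniform(0, 10, size=(n, 1))
    y = 3 * x[:, 0] + 2 + rng.normal(0, 0.5, size=n)
    return x, y

def corrupt(y, fraction, offset):
    """Systematically shift a fraction of labels -- more data, but biased."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] += offset
    return y

x_small, y_small = make_clean(200)                 # small but clean
x_big, y_big = make_clean(20_000)                  # large ...
y_big = corrupt(y_big, fraction=0.4, offset=15.0)  # ... but partly mislabelled

x_test, y_test = make_clean(2_000)                 # held-out test data

for name, (x, y) in {"small + clean": (x_small, y_small),
                     "large + biased": (x_big, y_big)}.items():
    model = LinearRegression().fit(x, y)
    mse = mean_squared_error(y_test, model.predict(x_test))
    print(f"{name:>15}: test MSE = {mse:.2f}")
```

Typically the small, clean set comes out with a far lower test error, because the systematic bias in the large set is learned by the model rather than averaged out ‑‑ which is one way of reading Kim's point that more data is not automatically better data.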

The other thing ‑‑ I'm trying to remember the other question, I'm very sorry ‑‑ I can't.

>> ONIKE SHORUNKEH-SAWYERR: There was a question more on how you would envision aligning the two sides of the coin, sort of the potentials and the risks related to AI that were outlined ‑‑ maybe you want to comment again on that aspect.

>> KIM DRESSENDOERFER: Always.

Yeah.  In every revolution, everything we have done in the past, there have been shifts around specific new innovations, around new steps humanity is going through.  I mean, it is always interesting to compare what phase we're in now with the past.  In the many revolutions we have been through, jobs may get shifted, but new and also more jobs get generated; I think that with the overall AI development there are 500 million more jobs coming in the next couple of years and many more new jobs getting generated.  There is always the one side of, yes, there may be some repetitive jobs that may change, but new jobs are coming.  I mean, I don't think my job would have existed if AI was not there.  That's where I'm coming from.

Obviously beauty and pain are quite close to each other.  I cannot imagine the fear some people are living in, who may fear losing their job due to AI, and so I can't ‑‑ I just, like, can't imagine the pain, obviously.  I also need to see the future, where this is going, especially with new generations coming in, new generations using AI, and the new trend of how fast, how efficient, how convenient technology has to be and how we have to build to keep it fit.  There is an interesting quote:  if Amazon is down for one second, one second, they lose 20% of transactions.  You can think about that number.  Right.

How much money that is for them.

So now just imagine any other technology that has downtime; it has to be quick, it has to be efficient.  A lot of the tools and apps we're using just cannot be down, because people stop using them.  That's another perspective we see.  It comes with the generations; a generation shift is happening right now.  That's something we also have to think about.

>> ONIKE SHORUNKEH-SAWYERR: Exactly.  Thank you so much, Kim.

Unfortunately, we have very little time left.

I think we won't be able to take up the questions that were posted in our Zoom chat.  Nevertheless, maybe that is something for the future discourse.  Now I would like to invite all of our panelists to give a final statement.  Basically, we give you the opportunity to express one wish related to the question of what you wish to see in global AI governance ten years from now.  I would like to believe that Kim has just given us the answer; I'm not sure if you have anything to add.  You have one minute each.  I'll start with Urvashi Aneja.

>> URVASHI ANEJA: In a word:  more participatory AI governance.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much.  That was amazing.  Summarized it well.

Then maybe next, Ladenika Mackensie Mendez Gonzalez.

>> LADENIKA MACKENSIE MENDEZ GONZALEZ: Thank you.

Thank you, everyone.

My wish is that we all work together ‑‑ all the people, governments, civil society, academia, all socially vulnerable groups, considering the young, children, adults ‑‑ everyone is in this ecosystem.

Thank you so much.

>> ONIKE SHORUNKEH-SAWYERR: Thank you so much.

Then Davis, can you hear us?  Do you want to express your wish?

>> DAVIS ADIENO: 

>> ONIKE SHORUNKEH-SAWYERR: I do not see Davis on the screen.  I hope he has not dropped out again.  If so, Kim, I think it is your floor again.  One minute please.

>> KIM DRESSENDOERFER: I wish that every developer team were as diverse as what we see as we walk through the streets, and that we have the same representation of what we see in the developing world.  I want to ‑‑ in ten years, I hope every developer room is completely diverse, and obviously my absolute wish is no bias in any system.

>> ONIKE SHORUNKEH-SAWYERR: So I think that's a very beautiful closing remark.

We don't have much time left.  I would like to thank all of you for your participation here in the room and online.

Again, thank you for your patience with our technological glitches of different sorts.  Finally, I thank all of our speakers, Jose Gontijo, Davis Adieno, Kim Dressendoerfer, Ladenika Mackensie Mendez Gonzalez, Urvashi Aneja and Helko Wildner next to me.  I hope this conversation can continue in the future.