IGF 2021 – Day 2 – WS #137 Multi-stakeholder approaches for the Design of AI Policies

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

   >> We all live in a digital world.  We all need it to be open and safe.  We all want to trust.

   >> And to be trusted. 

   >> We all despise control. 

   >> And desire freedom. 

   >> We are all united. 

   >> PRATEEK SIBAL:  Thank you, IGF, for that short introductory video.  So I think it is time for us to start, and others will jump in.  So welcome to the session on multi‑stakeholder approaches to the design of AI policies.  We want to keep the conversation quite free and open.  And I think ‑‑ I mean we thought it would be nice to start with everyone just talking about how their experience has been in different multi‑stakeholder contexts and conferences or processes.  How included or excluded you felt, because at the end of the day these kinds of conversations are also about how people feel, and we can't just start with principles and practices and not bring in that "how do we feel about it" dimension.  So I give the floor to each one of our panelists, and I will introduce all of you first as a good host.  And I will also share my experience. 

So today we are joined by Hillary from New York.  Hillary is an associate program officer at the United Nations Office of the Secretary‑General's Envoy on Youth.  It has been great working with them, I must say, on a lot of youth issues. 

    And just sharing a little bit of personal testimonial that a lot of our program at UNESCO has been influenced by how they have pushed us to include youth voices.  So hats off to your work, Hillary. 

    Then we have Jibu Elias, who is the content and research lead at India AI.  He is lending his wide knowledge and keen insight into AI to building a unified AI ecosystem in India.  This is especially interesting because they host a platform for information sharing for the private sector but bring in all the different actors.  It is a multi‑stakeholder platform for knowledge and information sharing.  So quite inclusive in that sense as well.  Thank you, Jibu, for joining. 

We have Eleanor, who is joining us.  I don't see them right now.  And then we have Esther joining us.  Thank you so much, Esther.  Esther is the Director‑General for Innovation and Emerging Technologies at the Ministry of ICT and Innovation in the Government of Rwanda.  I understand she is engaged at an African level as well in terms of developing frameworks and guiding some of this work.  As we all know, Rwanda is kind of a use case example of how innovation is going on.  Great to have you here. 

    We have Joanna Bryson, who is a Professor of Ethics and Technology.  Joanna is an expert in a lot of things.  She is first and foremost a computer scientist who is working on governance issues and making this field more inclusive, bridging the gap between technology and policy.  It is great to have experts like you, who speak the language of both communities, join these conversations. 

    Thank you.  Thank you all.  I will start briefly with what my experience in multi‑stakeholder dialogues has been.  A few weeks ago I joined a global discussion on AI.  And I was perhaps one of the few people from the Global South in that room.  And I felt quite excluded because all the people who were participating in that discussion were, one, much older than I was.  So there was this age dimension.  And two ‑‑ which doesn't mean anything, but to me it was ‑‑ these were entrenched networks that had been communicating with each other for a long time.  And it was a space which was not very inclusive. 

    So I felt that even as someone representing an international organization, which comes with a lot of social capital, you can still feel excluded in a setting.  So that wasn't great.  On the other side, it was a great experience joining a session which the Youth Envoy's office brought up.  They asked what our pronouns are and made sure the conversation flows.  And we could go and talk, and there was no difference in engaging with the UN Assistant Secretary‑General.  But that's my experience.  I will now go to Hillary to share your experience of how you felt in a multi‑stakeholder setting.  You have maybe one minute. 

   >> HILLARY BAKRIE:  Thanks.  Hi.  So great to join you.  Very excited to welcome everyone's experiences as well.  I think for me, same as you, I also feel like oftentimes the space of this discussion could be more inclusive.  As a young woman, especially a young woman of color, it is still kind of rare for me to see representation of people that look like me in these spaces.  So I really wish that could be improved in upcoming years. 

One of the interesting things that I have found, because I work with young people and I'm a young person myself, is that the discussion on AI, if it is taking place on a more informal basis, just using the mediums that young people use, TikTok, Clubhouse, Twitter, is more inviting in the sense that it uses less jargon.  Even if you are not an expert in AI you would be more compelled or more inclined to join this conversation and actually pay attention to the development of the policies and the innovations that are taking place in this space.  So I feel like this is kind of the unique perspective that I saw: conversations and dialogues outside formal institutions are actually very dynamic and inclusive, with representation of communities from vulnerable groups.  That would be my take. 

   >> PRATEEK SIBAL:  That's a great point.  How do we get away from our institutional setups and join the communities where they feel comfortable, instead of bringing them here, which sometimes involves a lot of tokenization as well?  Thank you for sharing that.  Perhaps Esther, would you like to take the floor next?  How did you feel as a participant in a multi‑stakeholder conversation?  What was your experience of inclusion or exclusion in such a setting? 

   >> ESTHER KUNDA:  Thank you very much.  I think for me one of the greatest ‑‑ I hope you can hear me very well.  One of the most important aspects in terms of multi‑stakeholder approaches is really how the information is captured or how it is translated from different stakeholders. 

    So one of the things, especially with this new emerging technology and more specifically around AI, is the different views and points of view that different institutions would have. 

    What Government regards as AI, or the opportunities that AI would bring to the ecosystem; how regulators would look at AI; and more importantly, what startups are really trying to invest in within this legal system.  I think for me the idea is around how we make sure that that conversation is flowing really well.  And this is something that in particular we tried to curate and see if it can happen. 

    But also having a global view of different subjects.  I'm not saying that for a small country like Rwanda the stakeholders are only going to be from Rwanda.  In such a conversation, how do we enlarge the ecosystem?  So I think that has been my experience, especially for these new technologies: trying to go beyond the stakeholders in Rwanda only, expanding our horizon and seeing if that would work.   

   >> PRATEEK SIBAL:  That's a great insight from your experience at the national level, on how to enlarge that conversation, even around definitions, how we talk about the topic and how different people understand it. 

    So Joanna, you have been engaging on this topic for a long time, I would believe.  What has been your experience as an academic of feeling excluded or included in different settings?  Let's take the example of a policy setting.  As an academic, when you come to a policy setting, or vice versa, when you welcome other people into an academic setting, which can be quite intimidating sometimes for people, how has your experience been? 

   >> JOANNA BRYSON:  I think ‑‑ thank you very much for having me.  I feel like I need to represent all white people, which is a weird thing for me to be representing.  But I can't.  That's part of the point.  Everyone is individual.  And so when I think about inclusion and exclusion: I was excluded as a little kid, and then I was a computer programmer.  Not a lot of women, although more than now, were computer programmers in the 1980s.  I was a computer scientist and I was quite frequently the only woman in the department. 

I remember one time, when I started working more into the social sciences, I walked into a room and I heard this weird noise, and then I realized it was women laughing.  There were more women than men in the room and somebody had made a joke.  I didn't recognize the noise.  So leapfrogging forward to now, and this may be because I keep moving, I'm now in Berlin: I feel so much safer and so much more included when I'm in an inclusive environment. 

I love being in diverse environments.  Maybe it is because I'm a woman.  I don't know if the same experience is true for everyone in my demographic.  I feel safer then.  What makes me feel really excluded, and I realize this may sound awful, is if I don't feel like I'm getting respect or I see other people being disrespected.  I think, weirdly, to me one of the biggest parts of inclusion is acknowledgement, not of me as an individual, but acknowledgement that there are different kinds of experiences and expertise and that we see how that is useful. 

    So I can't in one minute cover the whole range of experiences I have had.  I hope that I sort of did.  One thing I just found out last night, and this goes to another weird part of my identity, was that people were astounded that I don't care how many people are in the audience.  And so maybe part of my experience is because I'm neuroatypical, though I never self‑identified; I didn't know it was a thing.  And now I'm starting to realize a lot of computer scientists do think differently.  And I never noticed that the number of people in the audience mattered.  There are all kinds of ways to be different. 

   >> PRATEEK SIBAL:  I thank you for being honest about your experience, because what we want, when we are talking about multi‑stakeholder settings, is also to have a safe space to share our experiences.  Thank you for that.  And it is really helpful for some of us to learn that even people who are experienced and have so many degrees under their belt can also feel excluded, because it gives hope that I'm not alone.  I'm not alone in this space.  What can we do together to make it more inclusive? 

Jibu, I come to you next and then go to Tim.  You are curating a platform.  So this gives a lot of power to actually bring in some voices or not. 

    How do you, in your daily work, experience inclusion or exclusion?  And how do you actually translate inclusivity onto the platform? 

   >> JIBU ELIAS:  That's a very interesting question.  And I hope my ‑‑ I hope you can hear me. 

   >> PRATEEK SIBAL:  Very well. 

   >> JIBU ELIAS:  Is it fine? 

   >> PRATEEK SIBAL:  Yes, it is very well. 

   >> JIBU ELIAS:  When it comes to what we do in terms of India AI creating a platform, our whole mandate is to bring together these multiple stakeholders.  That's our whole objective.  As a country, India is never short of talent and resources, but the biggest challenge we found is that these things are sitting in silos, whether datasets or research work.  So how can you bring all these people, from academia to the industry partners to the Government organizations, civic bodies, and the startup community, together and create a unique ecosystem?  That was objective No. 1. 

And one thing, as we mentioned about inclusion and exclusion: I felt that in general policy discussions, or anything that happens related to AI, there is a tendency for it to get dominated by a more tech‑focused approach, especially when you are talking about issues with regard to ethics or responsibility, which are mostly social, moral and ethical problems.  I mean, there is a tendency where you try to solve these moral and ethical problems technologically, rather than with the people from academia, from social science backgrounds, or the other kinds of voices, right? 

When you are talking about ethics and AI there is a general domination from a Western standpoint, when the fact is there is a huge difference between the Eastern, you know, collective way of thinking and the Western individual way of thinking as well.  So this is where I feel exclusion happens.  So when we had this opportunity to create and curate this platform, we ensured that we represent all the voices, whether academic or from multiple other perspectives.  How can you look at a situation through the eyes of various stakeholders?  How do you bring everyone together?  So I think I learned these lessons by being part of many discussions where, for the most part, my experience was of not being included, so that I can fix what is wrong in our own approach.     

   >> PRATEEK SIBAL:  Thank you so much for sharing that.  And I do agree.  At one of the conferences I was participating in recently, we had a professor from Japan and a professor from India.  And they were not really involved.  It seemed that the others were speaking the same language, and they were sitting and having lunch alone.  This is terrible.  This means that the space is not inclusive.  Why are we not listening to experts once they come?  I don't know; for whatever reasons, this is not nice. 

    And this also kind of makes me reflect on whether, as an outcome of the discussion that we are having, there will be the gumption for having a Global South network of participants who are willing to collaborate and discuss on a regular basis.  And we at UNESCO can actually incubate this kind of work and host it, not to control but just to facilitate.  I think that could also be something that we'd love to discuss at some point. 

    So before we move on, I'd also like to ask Tim, who is a partner representing the Innovation for Policy Foundation, because they're involved in a lot of inclusive dialogues, and he will also share the processes soon.  What has been your experience, Tim, of exclusion or inclusion in these kinds of settings on policies? 

   >> TIM GELISSEN:  Thank you.  It was interesting hearing all of the use cases.  Let me share a personal experience and connect it.  So I was actually calling my mom yesterday.  She asked me, as a first question, what are you actually working on?  I told her, tomorrow I'm actually participating in a panel on Artificial Intelligence.  And she said it sounds complicated and went on asking about the weather.  I was like, if she hears "Artificial Intelligence", she maybe quits the discussion.  What I was thinking is, this is a form of self‑exclusion.  So then I started the discussion.  I said, hey, do you know that the tax authority in the Netherlands is deciding whether you will be audited, and it is based on an algorithm?  On YouTube, it is a computer that's deciding what you want to see next.  If my mom, a middle‑class woman from the Netherlands, is excluding herself from the discussion, what does it mean when you talk about the Global South?  How can we make the discussion bite‑sized so people feel like they want to deliberate and participate in the discussion?  I think that's also something policymakers should do with this: if somebody is doing some self‑exclusion, reach out and show them how they can contribute to the discussion. 

    So ‑‑

   >> PRATEEK SIBAL:  Thanks.  I can attest to that.  It has been the experience with my mom as well.  I'm talking on different panels and she is like, what are you doing?  What is this stuff?  And sometimes it is difficult to involve them in the conversation.  They are like, oh, my son is at the UN and he is doing something, I can't describe it to my friends.  What you mentioned about bite‑sized conversations is about making the policy process more inclusive, because at the end of the day, the reason why we have so much detachment from the policy process globally, and so little trust in policy making, is that people just don't understand.  And they say, oh, these are experts, they do their stuff and in the end we suffer.  So there is a breakdown in legitimacy.  And this I think can also change if we involve people and are more inclusive.  Which brings me back to you, Tim, to share the process that you and the i4Policy team have developed.  Then we go to our panelists for a conversation. 

    Over to you, Tim. 

   >> TIM GELISSEN:  Okay.  Let me just share my screen.  I don't want to interrupt this beautiful discussion too much.  So let me just take a few minutes to discuss our preliminary work that will be forthcoming in a joint publication with UNESCO in February 2022. 

    So actually we took a deliberative approach ourselves in this work as well.  We had three workshops where we discussed Human Rights and the impact of AI, and we also had a workshop with policymakers to discuss real life examples from people who were involved in policy processes and the participatory elements that were in there, apart from the research we did.  We have seen more than 20 countries that have an AI strategy or an AI policy document and that designed a process, and all of them have a participatory element in them.  That's where we drew lessons from, and that's what we will present in the forthcoming report. 

    And besides this, this IGF workshop is part of the process of coming up with recommendations and lessons learned, because the topics we discussed in the workshops are the same ones we are discussing today.  I'm curious to hear from the experts and the audience, on AI policy or other policies.  So please do participate.  Welcoming everyone to put in their two cents. 

    So first of all, we discussed: why is this time different?  Because some people still believe Artificial Intelligence is hype.  They say, we've been talking about AI since the '60s; why is it different?  It is like the electric skateboard, the flying skateboards in Back to the Future: this will never happen.  But we believe this time is different, and along with many experts, I believe that everyone in the room has the same opinion, because AI is pervasive and it impacts all of our lives, whether you like it or not.  That's one of the key things.  It is almost impossible to opt out.  AI will decide many things, even if you are not aware of it.  And at this stage of the technology, there is a lack of transparency and accountability.  There are no scrutiny mechanisms yet to really fully understand what's being done and how it is being regulated. 

    And this is most interesting: the impact of AI is asymmetric.  It's enlarging existing inequalities, in race, gender or social status.  And because of this vast impact and the risk that is inherent in AI, we believe that everyone should be involved, because it impacts everyone and we need everyone to solve the ethical dilemmas that are involved. 

    Then the lessons we learned.  I took two quotes, the first from Jake, a Nigerian lawyer and activist.  He said inclusion is not only participation but also engagement, representation and empowerment of disabled and underrepresented groups and populations.  This is a nice quote.  If there is an AI impact on them, they should also be represented.  They should be at the table.  It is about empowering them to have a say at the table.  So I like this quote.  That's why I highlighted it here. 

    Then I have another quote from a Chilean policymaker.  Chile is a nice example of a multi‑stakeholder approach, and we will also highlight it in our report.  A multi‑stakeholder approach can be an education process.  So it started with a blank agenda, and they let people come up themselves with: what do you want to discuss?  What is interesting?  What is important to you? 

    And they had several phases.  They were transparent about the process they were going to follow, and they set up constant feedback loops between the policymakers and politicians.  The process was a learning experience and set a solid foundation for monitoring and long‑term engagement, for people to stay involved in the process and see what's being done.  And also asking questions of companies, like, hey, what are you doing in AI?  So I like this quote, and this process is an example of a multi‑stakeholder process.  Based on the conversations and the workshops, we summarized ten building blocks, or lessons, on how to make inclusive policies.  I listed a few, but I'm pretty sure we will discuss most of them also here in this room. 

    And at the same time maybe we find some new ones.  Maybe in the final report we have 11 building blocks, or nine or eight, because of the discussion we will have today.  But I think learning is key.  It should be understandable.  There should also be data policies, because AI is only as good as the data you feed it.  So if there is inequality in the data, it will show up in the outcome of your AI.  Data policies should go hand in hand with AI policies. 
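    To make that last point concrete, here is a minimal, hypothetical sketch (invented data and group labels, using numpy and scikit‑learn; an illustration, not material from the forthcoming report) of how inequality in training data resurfaces in a model's predictions:

```python
# Minimal sketch: inequality present in training data shows up in model output.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # 0 = group A, 1 = group B (hypothetical groups)
skill = rng.normal(0.0, 1.0, n)  # skill is distributed identically in both groups

# Historical decisions were biased: group B was approved less often at equal skill.
approved = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
# The trained model reproduces the historical gap between the two groups.
```

    Nothing in the sketch is malicious; the model simply learns the inequality that the historical records encode, which is why data policies need to travel together with AI policies. 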

    And it is not only about putting people at the table and giving them a chance to say something at the table, but about letting them influence the decision making while it happens, talking about it, and being accountable for the decisions that are being made. 

    So these are just a few I wanted to highlight, but I'm curious to hear from you and to keep the conversation flowing.  Please provide all your inputs and we will see how we can summarize them in the report.  We are in the validation phase of the report.  So please do participate and share your experience.  And please reach out to us if you want to participate and want to be updated about the progress of the report. 

So back to you, Prateek.  I'm very interested to hear from everyone in this room today. 

   >> PRATEEK SIBAL:  Thanks a lot.  We are here to challenge the process and the insights.  Joanna, do you think it is really different this time, as we just saw Tim presenting, from electricity to motors to AI?  Is it different?  What do you think is different, or is it not?  The floor is yours. 

   >> JOANNA BRYSON:  Great question.  So this is just a short response.  Right? 

   >> PRATEEK SIBAL:  Yeah. 

   >> JOANNA BRYSON:  So there are two sides, yes and no.  And let me say no first.  A lot of things that people think are being caused by Artificial Intelligence or social media we have seen historically in the past.  There has always been disinformation.  There has been political polarization, and we can look and see the explanations.  So I think it is very important to look back.  Now that I have been looking at the impacts of AI on governance for a while, I have realized that even when the horse first came to Europe, there are commonalities with that.  The horse is not exactly technology, but it is used by humans like technology.  In some ways, whenever you reduce the cost of distance you introduce new governance challenges, and AI is like that. 

When you increase our capacity to communicate, if you can communicate more, it is easier to more quickly find ways to create public goods.  You find ways to create new organizations, and a lot of the things that we have seen first get in on the digital technology.  Maybe some of them we were not pleased about, and we were afraid of who we were helping to coordinate.  But I think on the other hand, we are seeing an unprecedented opportunity for inclusion. 

    And a lot of the things that people found frightening are consequences of the fact that we are empowering huge amounts of humanity to do things that they were never able to do.  Everyone can go further, and they can know more than they have ever been able to before.  It changes governance.  So I would say yes and no.  There are huge changes, but we can learn from history. 

   >> PRATEEK SIBAL:  Thanks so much for that.  Esther, since we are discussing around this point on governance: in your view, and I know that you have also run processes for AI policy development in Rwanda, how has your experience been with multi‑stakeholder approaches to AI policy development?  And what are some of the governance challenges that you've encountered so far? 

   >> ESTHER KUNDA:  Thank you, Prateek.  I think for us, where we are today is we're almost at the end of the journey of developing an AI policy.  So we have taken different approaches.  One, we've worked with development partners to actually develop the policy itself.  But beyond that, in that process, how we have looked at it is to really think about, if we are working with development partners, how do we bring into this conversation people, especially academia, because one of the things that we look at for something like AI is how many jobs you are going to be creating, what types of jobs you are going to be creating, and thinking about scaling that value chain. 

    On another level, what we have also thought about is bringing on board the regulators; their role has been thinking about data protection.  In an environment where we are seeing a lot of algorithms in these AI spaces, and especially in a country like Rwanda, AI algorithms are trained on datasets that most of the time are not coming from our countries.  How do we approach creating those datasets, and how do we approach creating those algorithms and having a responsible AI aspect to this? 

On the other side, something that we also looked at is trying to think about and really engage startups in saying: what would be the journey for the private sector to actually get here, to actually adopt AI, but also for the various new AI companies that are coming on board?  And lastly, in terms of governance, I think it is very important for us that the Government almost takes the lead, and the journey we are taking is identifying which use cases we can better apply AI to. 

    So it is this type of conversation that we are having, and at different levels.  For example, our regulator would be the one that puts out the ethical Guidelines that will guide the industry.  And for us, when we put the policy in place, the aim is to find a way for all other stakeholders to actually claim it. 

    And moving to a later stage, what we are actually working on with JSM is around creating specific projects that can be Pan‑African.  Projects that also really bring in context, in areas like developing the best algorithms, use cases, et cetera. 

    And I think in terms of governance, that's really putting at the forefront that the Government is interested in and encouraging a myriad of ideas around AI evolution.  And, of course, when we think about the dataset part of it, that's where we think heavily about inclusion.  Because at the end of the day, the type of data your algorithm is going to be based on, is going to be trained on, will determine how inclusive your AI economy is going to be. 

    So this is the approach that we have tried to bring up in how we developed our policy, but also in the approach we've taken in terms of engagement: learning through the process, bringing experts on board, and saying, how do we go about this?  This is a journey, not an end per se. 

   >> PRATEEK SIBAL:  Thanks.  I think that's a great point to hear from a Government official: saying that we go and open up to stakeholders and say this is a process, this is a journey, what are your views, we are also learning and we are also co‑creating this.  I think that's a great point.  And something you mentioned is super important, about data and creating those commons and scaling up some of the projects that you are working on in Rwanda.  It links also with what Joanna said about an opportunity to create common resources that can be leveraged globally or in the region.  Perhaps Joanna is laughing.  You didn't make that point?  You did. 

    So I also know that there are people in the audience in Poland who can participate.  And I know my colleague Eleanor is there.  Can you hear us?  You are on mute.  Can the IGF Secretariat please help?  And there are a number of points in the chat in the meantime.  And I will definitely come to both, the point by Veronica which ‑‑

   >> ELEANOR SARPONG:  Can you hear me? 

   >> PRATEEK SIBAL:  Yes.  Thanks.  Sorry, this is a bit of a hybrid ‑‑

   >> ELEANOR SARPONG:  I feel very excluded.  I was sitting here all this while listening to everyone give their contributions and I think that perhaps the technology failed to let you see me. 

   >> PRATEEK SIBAL:  I'm so sorry, because I was asking my team, where is Eleanor?  Where is she?  We can see you here.  We are glad to finally see you.  The floor is yours.  We would love to learn about multi‑stakeholder approaches, because I know you have been working quite extensively globally on what has worked and what has not worked. 

    Can you give us your hard truth on what works and what doesn't work, and your criticisms of how international organizations engage and whether that is inclusive or not, to make multi‑stakeholder processes work in reality and not just in publications?  Over to you.  

   >> ELEANOR SARPONG:  Oh, wow.  Yes.  So I must say, Prateek, that I'm privileged to be working with the Alliance for Affordable Internet, the broadest technology coalition and the leading advocate for affordable Internet globally.  So even though we are not working directly on AI, we work on connectivity, which is the foundation that you need to be able to get to AI and all the innovations that come with it. 

    And we have been working with a very strong multi‑stakeholder approach.  We believe that you need to be very inclusive, not just in the engagement process; you should also include people in the decision making, in the execution of policies, and in how you co‑create as well.  Within the Alliance for Affordable Internet, the way we work is we have local coalitions in various countries.  In these local coalitions we bring together Government, public sector, academia, private sector, and Civil Society in a very open discussion format where we look at the policy gaps within a country.  We look at the problems that we want to solve. 

    Now, one of the very big successes for us is how open we are, especially with Governments, with policymakers and with private players as well.  We are able to have people sit around and pose leading questions that give us a chance to explore what the gaps are and what the solutions are. 

    And one of the things that we noticed is that a lot of Civil Society groups do not tend to have the resources to be in these sessions.  So when we want to have a very strong multi‑stakeholder engagement, we give them some kind of support.  Especially when we are working in a lot of Developing Countries, we ensure that their transportation and other resources are covered, so that they can participate fully in these engagements. 

One of the things that we also tend to look at, in terms of the research that we do, is that we want to make sure it is very inclusive.  We take a very intersectional approach when it comes to issues like gender.  We want to make sure that if we are looking at women's groups, they are represented.  Not just urban women or educated women; we look at whether women who do not have that educational level are also represented in the various local coalitions that we work in. 

    And so we work very, very closely with a lot of CSOs to be able to give us that kind of representation.  But one of the things that I have also noticed, in a lot of these multi‑stakeholder engagements, especially when we work at a global level, is that we fail to see how to bring in people from regional levels, and especially those in developing areas and the Global South, and make sure that they are part of the conversation.  It's very dangerous to fall into tokenism, where you get one person coming in and that person is supposed to represent an entire continent.  We need a diverse group of people who come in, and we give everyone the chance to actually participate.  And in conversations about AI we need to look beyond English.  A lot of times conversations are in English.  People who do not speak English, or do not speak that level of English, or do not understand the technicalities and the technical jargon, then feel excluded. 

But it is also important to look at policymakers.  And I saw that one or two of the participants spoke about the role of Governments.  Governments are very, very important in these conversations.  If we are going to design policies and legislation that are going to cover ethics and AI, we need to make sure we break down these technical descriptions, make sure they understand, and ensure that they are comfortable and feel safe enough to be able to say when they feel uncomfortable about a lot of these discussions.  And so this is something that UNESCO might want to look at: how to bring a lot of policymakers up to speed on a lot of the technical issues regarding AI, and also make sure that we create the safe space for them to come back to us and ask for iterations in a lot of the discussions we have. 

The experience is very, very important in these spaces.  We should look beyond just international discussions.  We should look at regional and also at local multi‑stakeholder engagements, making sure we are not only in the capital cities, for instance, when we have discussions about AI.  The people who are going to be impacted by the decisions that we make should be in the room.  And they might not be able to speak the language that we have, but we can break it down for them in a very simple format so that they can tell us whether they agree, or how this will impact them eventually.  These are issues about localization of the discussion, and also making sure that we are able to increase participation at various levels. 

    We should also look at connectivity.  I mean, we can't talk about AI without connectivity.  And what I noticed in one of the sessions that we participated in previously, where we were having a discussion about access, was that the people who we wanted to make the most insightful inputs could not connect, because either the connectivity was not good or it was not affordable. 

    So I think that in another way, UNESCO should also look at how to support some of the connectivity efforts that we have started.  I think it is important to join forces with the ITU and others, and with us, to be able to push a lot more Governments to improve connectivity, but also to make it affordable, so that when it comes to discussions on AI and how to evolve from there, we don't leave people behind.  A lot of things for UNESCO to consider. 

   >> PRATEEK SIBAL:  Thank you.  Just don't hand over the mic, because I have a follow‑up.  Thank you for touching on so many points.  We will try to discuss and engage with you on the coalitions that you work on locally, because for us it is important.  One of our field office colleagues mentioned a few days ago that they work with community radios, and asked how some of the information around these topics can be transmitted on community radios. 

    Now the question internally was: if there is no connectivity and it is on community radios, what is the benefit?  But I'm still not convinced.  I think information needs to flow, and information needs to reach.  And I think it will be interesting to engage with you and the Alliance and the communities that you work with to co‑create something.  The Youth Envoy's office is supportive of getting more youth engagement on these topics.  Something that Tim also alluded to in his presentation, when he cited Jake, was that multi‑stakeholder processes should not only be at the stage of creation but also at a later point.  How does that work?  I want to understand: what is this governance mechanism where you make multi‑stakeholderism part of governance?  Can you give some examples of that? 

   >> ELEANOR SARPONG:  I think, if I'm trying to understand your question, what you are saying is: in whatever discussions we have about multi‑stakeholderism and the policies that we come up with, you are talking about the kind of metrics, how we measure success and how we ensure accountability.  And it will come from the kind of co‑creation we have.  For example, we worked in Liberia on the national ICT policy, and we had to develop various metrics, targets that we expect the country to be able to achieve, some of them gender targets. 

So it was important that in this discussion we brought all stakeholders together: the women's groups, the regulator.  Once you are done and have decided these are the targets that we want to attain within one year, two years, three years, we have to find out who will follow up on accountability.  And that's where the Civil Society groups and academia are very important, in terms of the research that needs to be carried out.  We need to support those groups to be able to track the progress that's being made and also be able to show whether this progress is indeed in line with what we had, you know, collectively wanted to achieve. 

    And so I think there is some kind of resource that's required: a financial resource, the skill sets, and also the resource in terms of, you know, monitoring and evaluation. 

    So it starts from us all agreeing: these are the targets that we want to attain, this is what we want to see, and being able to support the groups that will carry this out.  And I think in a multi‑stakeholder setting, Civil Society, and that includes academia and all the other groups, is very well placed to be able to do this. 

    So I don't know if I answered your question. 

   >> PRATEEK SIBAL:  Thank you.  So my question was how to make multi‑stakeholderism a process and not just a starting point.  And from what you said, I understand that it is the accountability and the engagement and the research that Civil Society, academia and the other actors bring in which actually makes the process of governance multi‑stakeholder, through this route of accountability. 

Just to give you a small example.  When I hear Greta Thunberg go on stage at a UN conference and say that all people are doing is blah blah blah, it pushes me to be more accountable when I am writing a speech for my boss.  We can't use the same phrases.  We need to act and we need to do certain things and then talk about them.  We need to walk the talk.  So I think that is how I understood accountability and the impact of youth engagement.  And this is also how multi‑stakeholder engagement impacts policy discussions and governance.  Did I understand well? 

   >> ELEANOR SARPONG:  What I will add to this is that the role of Civil Society here is very important.  I can understand that, yes, we like to have sound bites sometimes.  Policymakers like to sound good in the media.  And, you know, they want to make sure they have the right points, points that will resonate well with the audience and that also go along with what they have in the policy.  Somebody has to be able to check whether there is a disconnect between what's being said and what's being done.  And that's where the criticisms will tend to come in.  At the end of the day, we expect people in the media, we expect research groups and Civil Society, and we expect think tanks to be able to hold the various processes to account.  And they do this through research. 

And that's where I feel groups like UNESCO and others would be able to look at how to empower a lot more Civil Society groups who are independent, to be able to hold a lot of these processes to account as well. 

   >> PRATEEK SIBAL:  Thank you so much.  There are a number of points in the chat.  I will come to Hillary.  Veronica, you had a question around youth engagement.  Would you like to take the floor yourself, or should I read out what you mentioned?  I will wait for a little while. 

   >> ELEANOR SARPONG:  She is right here. 

   >> PRATEEK SIBAL:  Please pass on the mic to Veronica. 

   >> ELEANOR SARPONG:  She is picking up a mic. 

   >> Hi.  Thank you for taking that message.  So I'm here.  The point was particularly on how we engage youth groups as part of the bigger Civil Society so far.  And I'm particularly talking about the work we have done with the Council of Europe Youth Department, where we analyzed all the different processes, UN, OECD, Council of Europe, and youth groups were not actually participating in that part.  Accepting that participating in these processes is an education approach, how do we accept that?  But I'm talking about youth groups as organized groups, yes, not random young people that you happen to meet.  That's the angle. 

   >> PRATEEK SIBAL:  Thank you.  I will direct this question to Hillary. 

   >> HILLARY BAKRIE:  Thank you.  I think clearly this is also something that we often see a lot in the youth spaces as well.  So I completely agree with you.  There is not enough inclusion in this space.  And again, taking an example like the one that Prateek just mentioned in the climate space, we can see how young people are frustrated.  This is why youth engagement, particularly meaningful youth engagement, needs to be mainstreamed in institutions like the UN and the private sector and all of these multi‑stakeholder partners.  For instance, in the UN, I work in the Office of the UN Youth Envoy, which represents the Secretary‑General on a lot of youth programming and coordinates the different interagency work the UN is doing for and with young people. 

One of the things that we're really trying to push is that for every single program you make, it is important to embed the Youth Strategy principles on meaningful youth engagement, which include that youth engagement must be institutionally mandated, rights‑based, transparent and accountable.  We have a tool that helps to ensure that these meaningful youth engagement principles are mainstreamed all across the process, whether it is our work at the Secretariat or our work with the ITU, and so far the process has been very encouraging, to see how it is being mainstreamed. 

We need to engage the group as an organized group.  But I have to point out that young people are not homogeneous.  They are very diverse.  So we need to take into account that there are organized youth groups, for instance, that the UN works a lot with, but grassroots youth are just as important, as you see in the climate movement with Fridays for Future. 

    They have been able to mobilize at a grassroots level.  I think the key to how we can include young people in a more meaningful way is actually listening to them, by active listening.  For example, last year during the G7 Summit, the specific youth summit, the youth Delegates already suggested a call to action that for every single AI policy process there should be a youth council established, so that youth Delegates representing each member country of the G7 could act as a youth voice for AI policymakers. 

Young people are not just sharing their frustration at the exclusion but also trying to provide solutions to this.  However, we would need, I think, strong commitment from policymakers and from institutions like the UN as well, to make sure that these recommendations are actively listened to and taken into account.  For instance, every time Governments and policymakers plan to create a technology Council or oversight board, which is happening a lot, to help shape AI policy processes and the future, we need to make sure that young people, not just young people for the sake that they are young, but also young people of color, young People with Disabilities, indigenous youth, are represented in these Councils and oversight boards in the first place.  Nothing About Us, Without Us. 

It is important to make sure that accessibility to opportunities like this, to participate in a more decision‑making role, is there for young people.  And it goes back to Eleanor's point about creating an enabling environment that allows young people to contribute, whether it is providing financial support that will allow them to take part in digital consultations or offline consultations. 

    There is no straight answer to how we meaningfully engage youth in this.  The key is, one, intentional commitment to youth inclusion from institutions and policymakers, by actually listening to the solutions that young people have proposed, because there are plenty of these already. 

And the second is establishing an enabling environment that makes these processes as transparent and inclusive as possible.  And I would also strongly encourage youth groups, or partners that champion young people, or partners who wish to support young people's participation, to reach out and explore collaboration opportunities with institutions like the UN. 

For instance, our office, the UN Secretary‑General's Envoy on Youth, works with partners like UNESCO or the ITU to see how we could create entry points for young people, so they could be represented as themselves.  Instead of us speaking on their behalf, it is actually us bringing them into the space, to a place where they can speak on behalf of the new generation. 

    One of the latest examples: we hosted a very honest dialogue between the UN leadership, the Deputy Secretary‑General of the UN, and young people, particularly youth activists, most of them young women and young innovators in this space, on how they feel about young people's participation in shaping policy processes, whether it is for AI or other technologies.  All of them said the same thing: they would actually need partners to help create pathways.  And I think it is particularly important to also bring up the fact that we need intergenerational partnership in this, because young people wouldn't be able to break through these barriers without having allies and partners from other generations who currently hold the roles and powers to make decisions.  If I could just close my answer: look at it in a way that these are the generations that will inherit the future and the impact of our current decisions today.  Young people whose lives are affected on a daily basis by AI technology, whether it is predictive algorithms or, in the future, facial recognition.  They should be able to take part in shaping the course of this.  So I hope I answered the question.  Back to you. 

   >> PRATEEK SIBAL:  Thank you.  It would be also very interesting if you could share in the chat some of the tools and resources that you mentioned on how to have this meaningful engagement, because we have a lot of partners around the table, whether it is Esther and the Government, Joanna and academia, or partners from Civil Society, who may find this a useful tool as well, coming from your office, in some of their projects and planning.  I know someone from the audience wanted to take the floor.  If you would like ‑‑ yes.  The floor is yours. 

   >> Thank you very much.  My name is Alga.  I'm from the Netherlands.  And I heard many, many things that I agree with.  I will explain a little bit what we do and how we try to come up with a multi‑stakeholder approach as trusted analytics advisors.  So if a company hires us, we try to come up with a multi‑stakeholder approach on all the different variables that are relevant.  And we try to strip out all the variables that are not relevant. 

    Because a lot of the data is being collected from different organizations.  But not everything is relevant in creating a certain AI system. 

    After that, after it is implemented, we also have a team that does IT audits.  That team takes a look at the algorithm or at the AI system that has been created.  We take a look at what's been created and whether that's still relevant, because you can develop a system at one point in time, but if culture changes, or the whole approach to certain topics changes, your algorithm or AI system also needs to be updated. 

    And we try to keep that loop going, and we try to keep those systems up to date. 
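    One minimal way such an audit loop can be sketched (a hypothetical example, not the speaker's actual tooling; it assumes numpy and scipy) is to periodically compare the data the deployed system sees against the data it was developed on, and flag drift for human review:

```python
# Minimal sketch of a periodic data-drift check for a deployed AI system.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_values: np.ndarray, live_values: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Flag a feature whose live distribution has shifted since training time."""
    result = ks_2samp(train_values, live_values)  # two-sample Kolmogorov-Smirnov test
    drifted = result.pvalue < alpha
    print(f"KS statistic={result.statistic:.3f}, p={result.pvalue:.4f} "
          f"-> {'REVIEW' if drifted else 'ok'}")
    return drifted

rng = np.random.default_rng(1)
ages_at_training = rng.normal(40, 10, 5_000)  # population when the system was built
ages_today = rng.normal(45, 12, 5_000)        # population the system sees now

if check_drift(ages_at_training, ages_today):
    # In a real audit loop this would trigger re-evaluation or retraining.
    print("Distribution shift detected: schedule a model review.")
```

    The same idea extends to monitoring the model's output rates over time; the point is that the audit is a recurring loop, not a one‑time check. 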

    And we try to help our clients in having responsible data science, and what we now see is that it is coming up, that those companies are aware that it matters.  And we certainly try to make this as inclusive as possible. 

    Thank you very much. 

   >> PRATEEK SIBAL:  Thank you for sharing this.  As we talk about multi‑stakeholderism and we talk about responsible AI, it is important to understand the role that the private sector plays in pushing some of these tools and some of these developments.  And this brings me first to Jibu, and then I will come back to Joanna on the question of polarization.  First to you: you are working with a lot of private sector organizations.  How do you see the uptake of responsible AI and the need for multi‑stakeholder participation in the private sector in the Indian context, if you would like to share? 

    And also another point, if you can respond to the two: we have also heard a lot about languages.  We heard Eleanor talk about language and inclusion.  Are there concrete examples from your platform of bringing in linguistic diversity, so that people who speak or write or use different languages can also engage?  So you have two questions.  The floor is yours. 

   >> JIBU ELIAS:  Yes.  Fantastic.  Some of the ideas that were shared here by Hillary and Eleanor were enlightening.  We see companies struggling to come to terms with the gap between what they understand to be their legal right to use AI and their social right, which they don't possess by default.  That's, I feel, the first part.  We see that from time to time, and in order to acquire this sort of social right, what they need, we feel, is the stakeholders' trust in the AI application. 

    And this can be done through dialogue and discourse.  Coming to the whole conversation around responsible AI, to be honest I think we have reached the point where it is a lot of noise.  Everyone is repeating the same thing over and over again, and it has become commoditized.  You are trying to solve problems by creating a technology solution through responsible AI, when many of them are social and moral in nature. 

    Coming to India, the engagement, when you think about multi‑stakeholder engagement, comes through the Government, as we have seen.  In fact, in 2018, when the country started its current AI journey, the preliminary meeting was organized by the Government, where, you know, stakeholders from the IT industry, Government organizations, civic bodies, members of academia, everyone was there.  And the outcome was quite fascinating: the unanimous thought that the first design principle for AI in India has to be AI for all, right? 

    Which means inclusion in many senses.  But again, coming to the private sector, organizations like NASSCOM, which I belong to, have played a crucial role.  What is happening in India is that mostly the Government is what is driving this initiative.  And the fascinating thing is that this has been trickling down, when you look from a policy standpoint, right?  So today we have hundreds of AI tools deployed in the public sector and the private sector which are in the nature of AI for good or AI for empowerment, mostly working with people.  And from time to time we see large consultations happening, whether it is about preparing the AI strategy paper or the responsible AI approach papers and frameworks, and things like that. 

And coming back to the trickling down part, the industry part, what we see is some of the states in India doing a fantastic job.  One good example is a state coming out with an AI policy.  It introduced a framework that can evaluate AI systems; this is enforced by the IT department, and it created procurement Guidelines. 

    So this is the kind of thing I feel we need going forward: frameworks, with, of course, the participation of everyone who is involved, inclusive frameworks that are enforceable.  Rather than what we are seeing right now, which is a lot of chatter: we need responsible AI, and each of them has their own responsible AI principles, which again makes the whole process much more complex.  So I hope I answered the first question. 

   >> PRATEEK SIBAL:  Thank you so much.  Thank you so much on that one.  And let me interject there, because I want to bring in Joanna first, as something you mentioned was interesting.  We heard from you what the toolkit for making these processes is: you produce Guidelines and then work with industry bodies who try to implement them.  One thing you also mentioned was responsible AI being a buzzword, and I think Joanna has some opinions about trustworthy AI, and this whole kind of deterministic view: AI for social good, AI for that, AI for this.  Joanna, would you like to come in on this point?  And don your ethics professor hat and shed some light on this? 

   >> JOANNA BRYSON:  Okay.  Great.  I'm absolutely not sure which way to go, but this is sort of the other question, about polarization. 

   >> PRATEEK SIBAL:  That will come later. 

   >> JOANNA BRYSON:  Yeah.  That's just a basic worry that I have.  First of all, there are two parts to what you just said.  One is the trustworthy AI and responsible AI.  Those kinds of lines talk about the Artificial Intelligence as if it is other.  And the most important thing, if we are trying to hold our societies together, is that we correctly attribute responsibility to the person who has developed the artifact.  It is often the corporation that develops and designs it.  And then there is a question of the owner/operator: who are the people who would be responsible? 

So, for example, with a car, there is the company that built the car and the person who owns the car and the person who is currently driving the car.  And there could possibly be a person who broke into the garage and is driving the car. 

    So I just have a real concern about two pieces.  One of which is that we don't anthropomorphize the AI, and that we are clear who we want to trust.  It is a vector for transparency: we either do or do not trust the people behind it, and we make accountabilities, and we need governance and enforcement to ensure that people follow that good practice. 

But the other side that Prateek just sort of prompted me on is AI for good.  It is, of course, a great idea to do good, and everything we do now we will practically use AI for, if we are lucky enough to have access to digital technology.  But when people talk about AI for good, I always worry.  It is not like, because we planted some trees or something through crowdsourcing, that means it is okay that we try to gather at the same time a database of people we use to surveil and disempower. 

    I don't think we need to mention AI that much unless it helps us bring more resources in.  I guess that's what people are doing.  But I don't want it to be used to sort of muddy the waters around the incredible importance of making sure that we are using Artificial Intelligence appropriately and that we are defending ourselves from harms. 

   >> PRATEEK SIBAL:  Thank you.  This should feed into the procurement Guidelines and the standards that the private sector is developing, and it should not be that we trust AI; we need to trust the process and the people who are designing these technologies.  So I hope this resonates with some of the remarks which Jibu made.  At this point, I think we have about 10 or 12 minutes left, and I would like to bring in Esther.  You mentioned something about startups and youth involvement, and, I'm losing my thought here, but you mentioned something about polarization and jobs, right?  And a lot of the focus in countries' national AI strategies is around the creation of jobs and skills. 

So what are some of the challenges in thinking that you have experienced in this field around how we create jobs?  What are the political pressures and pulls?  How are you implementing this?  What are some of the challenges in terms of, if I were to put it this way, public pressure on you as a Government official?  And then I will get Joanna to respond on polarization and how we can address it.  So I don't know if my question is clear, but it is around jobs and AI: how do you create them, and how do you implement that through your policies? 

    Over to you, Esther.  I think the screen is frozen.  We can wait a few seconds for that.  You are back online. 

   >> ESTHER KUNDA:  Okay.  Every time someone talks about AI, it translates to automation, and that specifically means very big losses of jobs ‑‑ loss of jobs in different areas.  So when you are designing the policy, it is also about being very intentional.  I hope this ‑‑ it is not too much noise ‑‑ (cutting out).  

   >> PRATEEK SIBAL:  Perhaps for some time we can all switch off our videos so that maybe it is better.  Esther, we are not able to hear you very well.  We have switched off your video; if you speak without the video, maybe that helps the connection a bit.  We can hear you and then ‑‑ yeah. 

    So I see that Esther is not on the chat, not on the call anymore.  But Esther, do you hear us?  Yes.  Perfect.  So the floor is yours.  We are listening. 

   >> ESTHER KUNDA:  Yes.  Okay.  Perfect.  Let me try and do this quickly.  So essentially, every time we talk about AI, that translates to automation, and specifically we go to job losses.  So in terms of policy making, or the different interventions we put out, it is about being very intentional in identifying and putting out the message around what other opportunities are there for young people.  So one of the key areas we have ‑‑ (cutting out) identified is around data annotation and similar work that would create more jobs than what we are seeing today.  That intentionality is very important at this level.  That's one of the biggest messages that we are thinking through and trying to understand how to actually bring across. 

    Thank you. 

   >> PRATEEK SIBAL:  Thank you.  Thank you so much, Esther.  So we understand that, while we are talking about policies and polarization, automation is a big issue, with people feeling scared of job losses.  One of the things that you as a Government can do is reassure them, in a methodical, well thought out way, that there is no need to panic: we are providing you the tools, and we are standing behind you to support you, whether it is through incubating a startup ecosystem or through creating new opportunities with data, as you mention. 

    So everyone, we can have our videos back on now.  So Joanna, the question to you: we talked about polarization.  I know you have been researching polarization and income inequality and so on.  What is it that multi‑stakeholder processes can bring to reduce this polarization, to calm the debate down a bit?

   >> JOANNA BRYSON:  A lot of people are worried that they are going to lose their jobs.  People don't generally lose their jobs when we have AI, but the jobs do change.  So, for example, with bank tellers, when automated teller machines came in, or with radiologists, where people said radiologists should retrain because machine learning could do better and in five years we wouldn't have radiologists ‑‑ in fact, there are now more bank tellers.  Why is that?  Using the technology made them more productive, so it is now in more people's interest to hire them, and banks have more branches.  That might be temporary, we'll see, but it has been true for decades now.  There are also more radiologists, because they are able to do so much more with the new technology.  Going back to what I said the first time about the fear you feel when you aren't respected for the amount of time you spent developing skills: people used to talk about the Luddites who broke up the machines.  They were not protesting against there being machines; they were protesting that the people working the machines were not being paid the same amount per part.  It was a threat to their wages.  And political polarization is not correlated with increased Internet access, but it is correlated with increasing inequality. 

It turns out that in our most recent research we found a good explanation for why that would be: as you become more economically precarious, you can't take as many risks, so you are afraid to work with the out group ‑‑ you can predict the in group more easily.  This is not published yet, but last summer we set out to test this across a larger range of countries, looking at the data.  And indeed it seems that not only does polarization increase when you are at greater risk of losing your job or house, when your local economy is declining, but trust also drops, and that is a much larger effect.  Trust is a luxury; you only trust when you can afford to be wrong, basically. 

    And so that's why we need economic support.  And getting back to that contrast: some societies, like China and Germany, have had increasing inequality without increasing polarization, so far, because they were working very hard to make sure that everyone was being carried along with the growing economy. 

    So in terms of inclusion, I guess the most important point to make is that initially, when new technologies come in, everything gets scrambled.  We seem to like to have a little competition.  There is something called the Gini coefficient: if everyone has the same amount of money, the Gini is 0, and if only one person has all the money, the Gini is 1.  It seems that people are happiest when the Gini is around 0.27.  When these new technologies come in, there will be companies that flatten everything ‑‑ Amazon did this when they bought Whole Foods; they worked very hard to make everyone earn the same amount of money.  We want equality, we want equity, but we also want to be special.  That's the part we need to think about when we are thinking broadly about inclusion: think about how people can specialize and be rewarded for what they bring to society. 
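
    For readers unfamiliar with the measure Joanna references, here is a minimal sketch, in Python, of how a Gini coefficient can be computed from a list of incomes.  The income figures below are hypothetical and chosen only for illustration.

def gini(incomes):
    # Sort incomes ascending; the Gini coefficient compares the observed
    # cumulative income distribution against perfect equality.
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form with 1-based rank i over the sorted values:
    # G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Perfect equality: everyone has the same amount, so G = 0.
print(gini([50, 50, 50, 50]))        # 0.0
# One person has everything: G = 0.75 for n = 4, approaching 1 as n grows.
print(gini([0, 0, 0, 100]))          # 0.75
# A hypothetical distribution near the 0.27 level mentioned above.
print(gini([18, 30, 50, 70, 82]))    # ~0.27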

   >> PRATEEK SIBAL:  Thank you.  So I think we are all special in our specialized ways.  And I was watching this Netflix show called Superstore, and there was an episode where the supermarket hires a robot to do the cleaning.  The floor staff say, oh, our hours are being cut, and they take the robot and throw it off the roof, thinking it has been destroyed.  And then the robot goes back up and starts working.  That sense of people not being valued ‑‑ I think at the core of it, policies need to be humane and to value how people feel, and not just talk about the hard numbers, which definitely inform the discussion, as you mentioned with the Gini coefficient of 0.27. 

We have about five minutes left, and the IGF Secretariat is sending me a lot of messages to wrap up.  I will close with one minute each for all our panelists: what is it that you think should come out of this session, out of the conversation we are having today? 

    What is your take‑away, and where would you like us to go?  It will be just one minute.  And I'm prompted that we should take a group photo.  So I will start with, perhaps, on stage, Eleanor, if we can hear from you.  Please go ahead. 

   >> ELEANOR SARPONG:  Right.  So I would say if we are having a discussion about AI and the multi‑stakeholder process we need to make sure that we are looking around the room to see whether the people who are going to be impacted or who benefit from the use of AI are represented.  And whether we are hearing their inputs and concerns and whether that's been captured and whether they have a chance to be able to help in iterations or give their feedback or be able to monitor the impacts that this has on their lives.  If that's not happening then we need to really rethink it. 

Because we can't just have data scientists and technologists decide how things should be.  We should include people from the public sector, especially Governments, who have to take care of the legislation that governs the public good, and Civil Society groups, who tend to be in academia and who are looking at the public interest as well.  This is very critical: making sure we look around the room at who this is going to impact and whether those people are there to give input into how things are designed and how the impact happens. 

   >> PRATEEK SIBAL:  Thank you.  That's a very tweetable quote, and we are going to take that up.  So we look around the room at how people are impacted, involve them in the discussion and in the process, and then have a metric to measure it.  We have very little time, so I request all of you to keep your remarks short. 

   >> ESTHER KUNDA:  Thank you.  I think I will add to what Eleanor was saying.  In addition, not only the people who benefit, but also the people who are ex ‑‑ (cutting out).  They need to be in the room so they can understand, or even in the bigger discussion, put out how they are actually going to be affected.  So I will put it that way, just adding to what Eleanor was saying. 

   >> JIBU ELIAS:  Yes.  I think this should be the beginning of the whole process ‑‑ I mean this discourse regarding AI and its effects, and everything regarding safety.  It is not something open and shut, right?  It is a continuous conversation; it needs further discourse.  And there should be an enforceable framework to hold actors accountable when they move away from whatever the particular model principles are.  So that's my two cents here. 

   >> PRATEEK SIBAL:  Accountability is key in this process. 

   >> JOANNA BRYSON:  Okay, one point in response to what Eleanor said: science doesn't decide things.  Scientists sometimes do, but basically science is about making predictions and providing understanding.  It is the role of Governments and governance to make the decisions, to make the normative calls.  That's so important.  As for what I will say in summary ‑‑ since I accidentally pasted the wrong paper anyway ‑‑ coming back to what we were saying before, it is a kind of exclusion that we don't acknowledge how much great stuff is already being done by parts of the world that are not always identified with AI.  The paper I accidentally pasted into the chat was examining the narrative that AI is coming from China and the U.S., and maybe the EU.  If you exclude all three of those great powers, the rest of the world combined is actually doing more than China and the EU combined.  So it is important to recognize that.  I actually think it is a form of disempowerment when people marginalize the amount of impact that is already being made from countries all over the world. 

   >> PRATEEK SIBAL:  Thank you.  That's super important.  I hate to ‑‑ I have to close, but I just want to add one more thing: the other day I was part of a panel where people were talking about development and democracy as if the two were at odds.  I thought that's super disempowering for people working for Human Rights around the world. 

   >> HILLARY BAKRIE:  I agree with everyone.  Representation matters, because not only can we identify gaps and see how we are complementing each other to fill them, we can also safeguard this process and make sure that Human Rights are at the center of this policy making.  I hope our conversation doesn't end here and translates into stronger and more concrete collaboration, and I look forward to engaging with everyone here, as well as the audience, right after this session. 

   >> PRATEEK SIBAL:  Thank you.  It has been a pleasure talking to you all.  We will get back to you, and we would love to co‑create something and work with you.  If you want to turn on your cameras for a second, our colleagues want to take a screenshot for the news item they would like to produce.  The more faces we have, the better.  So we wait for a second.  Very well.  We see so many people.  Lovely to see you all.  So Steve, we are all here for you. 

   >> Thank you.  I will count to three.  3, 2, 1 smile.  Okay.  We are good to go.  Thank you so much. 

   >> PRATEEK SIBAL:  Thank you so much.  It was lovely talking to you.  Take care.  Bye‑bye.