IGF 2022 Day 4 Town Hall #55 Inclusive AI regulation: perspectives from four continents

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> Hello, everyone.  Thank you so much for being here.  Welcome to you, and welcome also to our online audience.  Welcome to our panelists; thank you so much for accepting our invitation to be here.  This is such an important topic to be discussed at the IGF.  So I would like to open this session, Town Hall #55, Inclusive AI Regulation: Perspectives from Four Continents. 

And now without further ado, I would like to pass the mic to our host and Moderator, Professor Christoph Lutz.  Thank you so much.  Please, the floor is yours, Christoph. 

   >> CHRISTOPH LUTZ:  Thank you so much for the kind introduction.  I will be the online Moderator of this session.  I will say a few words before we start and then give the floor to our four speakers.  The goal is to have a comparative perspective on AI regulation, to get insights from different continents; hence the title, Inclusive AI Regulation: Perspectives from Four Continents.  We will try to get insights from different countries and backgrounds and get the conversation going. 

The way it is organized, we will have four input presentations, each about nine minutes.  After these short input presentations, we will open the floor for discussion.  If you have any questions, please write them down and keep them ready.  You can also post them in the chat, and we will forward them to the speakers. 

    And we hope to keep this very open and conversational.  This session is part of an ongoing research project in collaboration with ITS Rio and the Berkman Klein Center.  It is very much in the spirit of that research project, which is funded by the Research Council of Norway and is about a triple partnership for responsible AI: bringing different backgrounds and perspectives together to think more closely about how we can govern AI. 

I will introduce our first speaker.  We will start with Celina Bottino.  Celina has a master's degree in Human Rights and is an expert on Human Rights and technologies.  She was a researcher at Human Rights Watch in New York, a consultant for the Harvard Human Rights Clinic, and an associate working on children's and adolescents' rights.  She is developing research in the Human Rights field, and she is a project director.  She will give us insights on responsible AI and governance from her perspective.  So I hand over the floor to you, Celina. 

   >> CELINA BOTTINO:  Thank you.  It is a pleasure to be here with all of you, even though it's over Zoom.  It's important for us to have this opportunity to exchange experiences, since the idea is that each of us brings perspectives from different continents.  Being from Brazil, I will try to give a sense of how this issue is developing in the region, and more specifically here in Brazil. 

    I always like to remember that we first started to discuss this issue of AI and inclusion five years ago, at the first event that ITS organized with the NoC and other friends, back in Rio, where we discussed the framing of the problem of AI from an inclusive perspective.  I would say that part of the challenges pointed out at the time still remain. 

They concern access to infrastructure and also access to knowledge, and I will illustrate with one project that we are doing how this is still an issue.  But I would say that from the policy perspective, we saw lots of different developments on this front. 

    So, regarding one of the challenges that we still have to deal with: access to data, and also access to knowledge regarding AI.  As we all know, data is the first and most important element when we are talking about AI; with no data, AI is kind of useless.  But we still have a lot of gaps when we are talking about machine-readable data here in Brazil, for example. 

We are doing a project with the Public Defender's Office here in Rio, in which the idea is that AI would be used to help the public defenders in their work, which means addressing the issues of underprivileged people who cannot pay for a lawyer.  Brazil has this service of free access to justice. 

    The idea of the project is to map all of the cases in which people have to go to the judiciary to ask for access to specific medications. 

    The point of the project is to use AI to map and understand which kinds of medications are most in demand, and whether the claims are directed against the public sector or the private sector.  There is still no clear picture of the situation on this specific topic, and with the use of AI we could more easily identify, for example, which medications are most demanded and who is looking for them. 

This information could help, for example, to inform new public policies and to identify other ways to help these people access their medication without the pain of going through the judicial system. 

    But we are now facing an issue: there is not good data to do this project.  We have the funding.  We have the technology.  But we are still lagging behind on the data needed.  I think this is one example of where we still need to work on having the minimum infrastructure to develop AI tools focused on helping solve public problems. 

Quickly jumping to what changed in the policy realm, I would say that in the past five years we saw some specific changes.  Brazil has approved a national AI strategy, which still has some issues and has faced criticism because it was not as specific as expected; it is just another charter of principles. 

Brazil also has more than 20 bills trying to regulate AI.  And just yesterday, actually, a specific bill was presented as a result of the work of a Commission, and that's something I would like to stress: how the process went. 

    Some representatives presented their AI bill, and then they identified that this was a very technical issue and that they should resort to specialists to help draw up a bill.  So the Senate created a Commission of experts to try to come up with an alternative approach to regulation. 

    This Commission was set up, organized several public hearings, and put some of the topics under public consultation.  As a result of this participatory endeavor, they presented their bill yesterday.  Actually, we still don't have the text; it is not public yet.  But they publicly explained what the topics are.  And one of the drivers we see for regulating is the topic of responsibility, on which there is still a lot of discussion about how it should work and how broad it should be. 

    Just to conclude, I think having the bill come out of this specialist Commission was a nice move.  But still, there were criticisms regarding the composition of the Commission: was it diverse enough, was it inclusive enough?  Maybe some tweaks would have made it as inclusive as expected.  And we still have another important issue that was identified five years ago, which is the knowledge gap.  For example, we had a course yesterday about AI and Human Rights, and we still saw that there is a lot of information people don't know.  It is a gap we need to overcome: trying to bring these topics to people as a whole, to make them participate more in this entire process.  So I think those are my first remarks.  Thank you. 

   >> CHRISTOPH LUTZ:  Thank you so much, Celina.  Perfectly on time.  This was fascinating, with great insights from Brazil and South America.  I will go to our next speaker, Samson Esayas.  Dr. Samson Esayas is an Associate Professor at BI Norwegian Business School.  So Samson, the floor is yours. 

   >> SAMSON ESAYAS:  Thank you for the introduction.  I'm originally from Ethiopia, where the conference is being held.  It is unfortunate that I couldn't be there in person.  Hopefully people are having a great time at the conference and outside.  I'm from the northern part of Ethiopia, where there has been a conflict for the last two years.  It has been under a communication blackout for the last two years, and there has been a great deal of human suffering.  Since we are talking about inclusion, it is my hope that the region gets reconnected and becomes part of the community, part of the conversations that are happening in their region, in Ethiopia, but also across the globe.  Having said that, I want to talk about the main drivers of AI governance. 

    I will start by outlining four broad issues that are driving the discussion and the legal initiatives, and then I will try to explain a little bit about some of the aspects that relate to inclusion and diversity. 

    I would say that there are four main drivers, at least from the way I see it.  The first driver for AI discourse and the initiatives we see is, of course, the protection of fundamental rights, the protection of Human Rights, which takes quite a significant role in the discussions we have in Europe: data protection and privacy, protection of freedom of expression, protection against discrimination, and the protection of vulnerable groups such as people working for these platform-driven businesses.  So that's one category of drivers. 

    The second category is also somehow related to fundamental rights: concerns about electoral integrity and disinformation.  Here there are also quite a lot of initiatives addressing, for example, the way big platforms such as Facebook and Google use AI systems to moderate content, and the risks that come with that.  For those of you familiar with the EU framework, there is a new law, the Digital Services Act, an updated version of an existing law, which imposes obligations on these big platforms to manage systemic risks when they use AI for moderating content. 

The third driver is the liability and safety of AI.  Here, of course, there is a question of liability: what happens if AI systems such as self-driving cars cause harm, and how should we address those concerns?  There is also an initiative here; the European Commission has proposed a law for regulating liability for these kinds of systems. 

The fourth broad driver of the discourse is related to the divide in data control.  Celina alluded to the importance of data for AI, for the development of AI. 

    I think there is a similar concern in Europe, where there exists a divide between, on the one hand, a few companies that are able to create significant value from the data, and, on the other, the users, who are the main generators of the data and should also benefit from it, together with small businesses and public agencies interested in gaining access to that data.  There are legislative initiatives within the European Union, such as the Data Governance Act, that try to facilitate the sharing of data, so that small businesses can gain access to data and develop services. 

    I think these are the main drivers.  And, of course, the panel is specifically about inclusion, so I will say a few words about the focus on fundamental rights, and a little bit about the draft AI Act and how it tries to address some of these concerns in relation to inclusion and diversity. 

    The main aim of the AI Act, as it states, is that AI should serve humans, should be human-centered.  The focus seems to be on identifying specific use cases that create high or unacceptable risks to Human Rights or the safety of individuals.  So you have certain systems that are prohibited, and certain systems that require you to do a lot of things: risk management, a data governance framework.  And if you look at some of these use cases, they seem to be driven by things that have happened in the last few years, some uses of AI that led to discrimination or other violations of Human Rights. 

    In Europe, we have had the use of AI for grading during the pandemic, where AI was used to give grades to students because they couldn't go to school.  That raised lots of concerns in relation to discrimination, because excellent students living in poor neighborhoods would be given lower grades because the schools they went to had performed poorly historically, whereas a mediocre student going to school in an affluent area would be given good grades because the school might have performed well historically. 

This seems to be one of the things taken into account in the AI Act, which says that if you are using AI systems for making admission decisions or for grading, you have to follow strict and detailed requirements. 

    There are similar concerns about the use of AI for making social benefit decisions, which is also highly regulated in the AI Act.  And then we have a different set of regulations focusing on platform workers, people working for Uber or food delivery companies, because there are lots of concerns about surveillance of those workers, but also about how algorithms and AI systems are used to manage those workers. 

    So there are quite detailed obligations in terms of transparency and explainability.  Those are some of the drivers. 

   >> CHRISTOPH LUTZ:  Thank you so much for the insightful presentation and the insights from Europe.  Our third speaker will give the perspective from Africa, and more specifically South Africa.  We have with us Shaun Pather.  Shaun is a professor and Chair of the Department of Information Systems in the Faculty of Economic and Management Sciences at the University of the Western Cape in South Africa.  He is an ICT for Development expert and a National Research Foundation rated researcher who focuses on the Information Society and related issues, especially the digital divide, digital inequality, and uplifting rural communities. 

    So I'll give the floor to you, Shaun. 

   >> SHAUN PATHER:  Good afternoon.  Apologies for not being there in person; I had to go on a bit of unexpected leave.  To share some perspectives from my side, from an African viewpoint and, particularly, Christoph, focusing on South Africa: AI being a phenomenon of digitization, it is important to remind ourselves of the state of digital inequality.  Drawing on ITU's most recent facts and figures, we have a sense of continued penetration of people into the network. 

But where does Africa stand in all of this?  We remind ourselves that, compared to the rest of the world, Africa is sitting at 40% penetration.  Inside this 40%, we must remind ourselves that ITU collects this data through mobile operator data, through the regulators, and inside this 40% there is a lot of discrimination, so that figure in itself is not quite accurate.  A key factor, even as the network infrastructure starts to spread, is affordability.  As you can note here, from a basket perspective, 5% for mobile broadband and 15.4% for fixed broadband, as a percentage of gross national income, is indicative of the expense in Africa to the average African person, making access to the network prohibitive. 

You also see the inequality here as a generational gap.  This is reflecting youth: penetration is 55% among youth against 36% among others, and you can see the vast difference as you look across this infographic compared to the rest of the continent. 

    Skills are the other issue.  This is a slightly older bit of data, from 2021, but again, as you can see here from the shading, there is a low level of basic skills on the African continent.  All of this ITU data talks about penetration and access, et cetera.  But the inequality problem in Africa is multi-faceted: while all of this is around infrastructure and the quality of universal access, there are multiple perspectives to deal with. 

I'm not going to speak in the nine minutes about the cases of inequality, because I take it that we all know them very well; they are documented, and there is a growing body of literature on the inequalities created by Fourth Industrial Revolution technologies.  Fortunately, I came across an ALT Advisory policy brief, concluded around the middle of this year and published in September 2022.  ALT Advisory did an assessment of AI governance in Africa across all countries using six indicators: whether there is dedicated AI legislation, data protection legislation, a national strategy, a draft policy or white paper in relation to AI, an expert commission, and whether AI features as a priority in the country's national development plan.  The findings don't look very good. 

    No country has dedicated AI legislation; Mauritius has partial legislation.  30 countries have data protection legislation, and this is where more effort seems to be happening at the moment, as it addresses automated decision making.  Four countries have a national strategy.  One country has a draft policy or white paper.  13 countries have expert commissions, and six countries include AI as a priority in their national development plan. 

So it is not as if there is no effort.  When I look back to last year, when we discussed this, it seems that slowly some momentum is picking up, not enough, but a lot of the momentum and effort thus far is around data protection in relation to automated decision making. 

    As you can see here, about 55% of the countries had something in place in that regard. 

    So we have to move beyond this.  We are in a state of digital inequality, and we are off to a very slow start on the continent in terms of beginning to create some kind of AI governance framework or policy. 

It is not as if this is absent from the agenda: at the African Union gathering in 2020, they speak of the African digital transformation strategy and speak of leapfrogging.  So there is a sense of leapfrogging, the idea that from a digital perspective the continent is somehow behind, and that with the presence of AI it can jump ahead.  This is where the continental policymakers are seeing the future. 

    But there is a problem and a danger in thinking of leapfrogging without understanding the matter: the average poor person is digitally excluded.  A fortunate proportion might have access to and use basic ICTs.  But technology developers, including in AI and machine learning, are not focused on how these would support social and economic development. 

AI is driven by data.  But the problem is that if the populations are not involved in the creation of the data, if we are not drawing data from the people where the applications are going to be used, they will remain outcasts.  That is fundamentally the source of the problem: you have people on the African continent who are inactive in the digital society, which means that our datasets are not representative.  And for me, that's where the heart of the problem is. 

    So we have to shift focus.  Currently, progress has been made in respect of privacy and data protection, as I pointed out just now.  The discussions and debates suggest to me that while we are looking at how AI is used to develop economies, how you make money and profits, there is no effort on issues of inclusion and diversity.  So that's central for me. 

The leapfrog term is used by the AU without thinking about the impacts on poverty and the entrenching of digital inequality.  Leapfrogging is a wonderful notion, but if we don't think about its ramifications, we might end up with an African policy position that simply takes the economic road. 

    So we have to increase participation, and policy and governance must seek to engender participation in the data infrastructure: a more integrated effort, addressing all of the issues of inequality, to ensure that greater participation means we have people coming into the data infrastructure. 

I'm going to skip this, as I have a minute left, Christoph, other than to say that the digital infrastructure projects are problematic.  I draw these points from recent research done by Research ICT Africa; the citation is there. 

In summary, I think we need a more structured and coordinated response.  We need a common international AI framework.  And we need to keep in mind, and that's my time up, that ethics is not universal: when we think about AI ethics and their application, we should think about them on a regional basis.  We need to do more continental-based and regional-based governance and policy development inside the continent. 

We have to increase transparency.  And, putting my academic hat on, I think we can do more in terms of developing design-based conformance with an internationally agreed set of principles around equality.  We need metrics to assess software problems at the design stage.  We need more research and practical tools to be developed to support software engineering.  And we have to look at independent regulation for software, so that it runs without creating inequalities.  I will stop there. 

   >> CHRISTOPH LUTZ:  Thank you for the interesting presentation and the insights from Africa.  We move to our last speaker, Sandra Cortesi.  She is a senior research and teaching associate at the University of Zurich and an adjunct researcher.  She collaborates closely with talented young people and engages with researchers in the field of youth and media, exploring innovative ways to understand, evaluate, and shape current social challenges and virtual reality, especially in terms of how to engage youth in our digital society.  So Sandra, the floor is yours. 

   >> SANDRA CORTESI:  Thank you so much, Christoph.  Delighted to be part of this panel. 

What I'm going to share is a reflection of what is happening at Berkman Klein in a very collaborative fashion, with many colleagues working on these issues, including Urs Gasser.  I am the spokesperson in some way, but truly excited to be here.  When it comes to Internet Governance, from this U.S. perspective, the big picture suggests that there are many conversations happening among many actors within the private and public sector, all involved in developing many different norms of governance.  From an inclusion perspective this is good news: it means that different stakeholders are involved in governance issues and how to resolve them.  So again, that is certainly good news. 

    If we take the U.S. as an example, we see inclusion of different stakeholders at the following four levels of governance making.  First, we see examples at the city level: take the ban on facial recognition in Cambridge, where we see citizens and local actors engaging with local City Councils, debating and addressing these issues.  Second, we see standard-setting organizations, like the National Institute of Standards and Technology, and new frameworks for risk assessment and debiasing used by doctors and hospitals. 

    Third, we see these conversations happening at the state level, with state legislators active in legislation, for instance when it comes to the regulation of self-driving cars as well as consumer protection issues. 

    Again, here different stakeholders are involved in lawmaking.  The fourth example is at the national level, where national legislators are also active.  We see as an example the Algorithmic Accountability Act that is under discussion, which resembles a little bit the EU's draft AI Act, although it is very uncertain whether it becomes law given the gridlock in Congress. 

    Maybe as an addition to that: unless there is a national law, which is unlikely to happen any time soon, the U.S.' approach to AI governance is different from Europe's with the EU AI Act, for instance.  The U.S. so far has not taken action to regulate AI horizontally, meaning across all sectors and different types of AI.  Rather, at least at the federal level, it leaves much more of this regulatory job to specialized, sector-specific agencies, like the Food and Drug Administration, the FDA, in the case of medical AI, as an example. 

    So there are different approaches to discuss, different ways to go about it.  We see, in essence, a diversity of forums and spaces with different stakeholders.  But, of course, when it comes particularly to inclusion, important participation gaps remain.  One of the issues I very much care about is young people, so take young people as one key stakeholder.  Over the last few years we have seen many countries releasing a range of AI policies or policy initiatives, focusing largely on how to leverage AI systems, mostly for economic growth and national competitiveness.  But it turns out that many of these national AI plans don't mention children or young people.  There is a great mapping by UNICEF, which has documented this extensively; I very much recommend you take a look at it. 

    Also, when we look at AI ethics principles, they often don't mention children or young people specifically.  There we did some really cool work with UNICEF, IEEE and the Web, which I'm happy to share more about in the conversation part. 

    So there is a gap in terms of AI's impact on children and young people, but also, I would say, some initial work and activities relevant in terms of AI governance. 

    I take youth here as an example; of course, similar participation gaps remain for other underserved communities, particularly including People of Color, a very crucial topic currently in the U.S. as well.  Much more work needs to be done to make AI governance an equitable process.  But I remain hopeful that within spaces like the one today we can advance this conversation, and, as I said, there is hopefully much more to come.  So thank you so much for having me here. 

   >> CHRISTOPH LUTZ:  Thank you so much for the insightful presentation.  For the next part we will move into the discussion.  If you have any questions, please ask them or raise your hand, and we can pass the questions to the presenters. 

    And from the onsite participants, are there any questions? 

   >> JANAINA COSTA:  Any questions from the audience?  Not at this point, Christoph.  Yes, one.

   >> AUDIENCE:  Hi.  My name is Chang and I'm a social entrepreneur from Viet Nam.  I agree with you that different regions do not see eye to eye when it comes to AI ethics.  But can you give us an example of how AI ethics from African perspectives might be different from other continents?  Thanks. 

   >> JANAINA COSTA:  Thank you so much for your question.  I will give the floor to Shaun Pather to answer the question, and then to the other panelists. 

   >> SHAUN PATHER:  Thank you for that question.  Regretfully, I don't have a specific answer to that.  There is a body of literature around culture and around ethical practice that one can read which attests to it.  I can't give you a practical example because it is not an area that I'm well read in.  But the point I was making in the presentation is that there is documented research that the notion of ethics, or the interpretation of what might be ethical in one region, might be different in another region. 

    So I don't have a specific one, other than to say that I'm fully aware that it has been documented.  And the point I'm making, just to get back to it, is that if we are to have a set of universally acceptable guidelines, there should be some room within that for regional differentiation.  Thank you. 

   >> CHRISTOPH LUTZ:  Thank you.  Does anyone else from the panel want to add anything? 

   >> SANDRA CORTESI:  I don't have many specifics to share, but maybe just a point or two.  There is a colleague at Berkman Klein who has written a piece called From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence.  I very much recommend it as a reading piece. 

   >> CHRISTOPH LUTZ:  So we have another question from the onsite participants. 

   >> AUDIENCE:  Yes.  

   >> AUDIENCE:  Thank you.  I work for the Egyptian Telecom Regulator, but I attend the IGF and speak on my own behalf.  From a technical point of view we definitely have standards, and it is important that we have frameworks that work together and are interoperable.  Though we can never have all the same rules, because we draw the red lines at different levels, we do still have standards.  We need also to align the terminology that we all use.  It is also important for us to have metrics and measurement tools in order to know where we are.  Thank you. 

   >> JANAINA COSTA:  Thank you.  Christoph. 

   >> CHRISTOPH LUTZ:  Thank you for the question.  This is a really good point.  It actually touches nicely on one of the questions that we raised for this panel: what do you think other regions can learn from the initiatives and responses in your regions?  How can we engage and make sure that mutual learning takes place when it comes to AI governance and regulation?  Since we haven't heard from Samson and Celina, I would suggest that they go next, whoever is ready. 

   >> SAMSON ESAYAS:  Yeah, I can start.  Taking the European initiatives, there might be some good things to take onboard, while also recognizing that not everything the European Union is doing is going to be replicated and be useful for other regions.  And in relation to the comment and the question raised earlier about ethics, whether it is the same thing everywhere depending on the culture: I agree with the comment from our Egyptian colleague that it is possible to create some kind of framework that everyone can agree on.  I think at least we have agreed in terms of Human Rights. 

One thing about the European approach is that a lot of the focus in regulating AI is driven by the protection of Human Rights. 

    So that's something to take into account: how does the use of AI affect our Human Rights, our basic fundamental rights?  Focusing on that aspect would be relevant, while recognizing that regions have other interests; for example, in Africa I think the issues of connectivity, economic development, and basic services would be essential.  That's basically for the local Governments to take into account. 

   >> CELINA BOTTINO:  I would just add that, as we saw, there is not one clear definition, one take, on all of these topics.  And that's why I think it is very important for the conversation on AI governance to be as inclusive as possible.  When I say that, I mean from a geographical perspective, as normally these conversations are led by Global North countries, with implications that affect all countries. 

Global South countries, more specifically, should be heard and should be called to be part of all of these discussions, so that all of these nuances are brought to attention when discussing these protocols and all the topics that we are talking about. 

   >> CHRISTOPH LUTZ:  Great.  Thank you very much.  We actually have a question in the chat here, which I'm going to read out: hello, everyone.  I'm Amir from the Iranian academic community.  What would be the practical approach for AI regulation in AI-based spaces like the Metaverse, to ensure public safety, security, health, data sovereignty and accountability of global AI actors, given the fact that different societies have different ethical and legal frameworks?  And what should be done about AI-related cybercrimes, which are borderless?  Any thoughts? 

   >> SANDRA CORTESI:  It goes back to what Celina said: there is no one silver-bullet solution.  But we need to make sure these conversations happen at the global level, and, as I shared earlier, that different actors are involved, not only the legal actors, but also Civil Society and others.  We may not agree on the solution when it comes to these borderless issues, but through dialogue, through making sure that everyone can participate, we get closer to a solution we might be able to live with.  But that's speaking as a psychologist, so maybe a legal scholar can add to it. 

   >> SAMSON ESAYAS:  A few points.  The question raises quite a few issues and, as mentioned, there are very different strands of things that we have to talk about.  The Metaverse, or any of these kinds of platforms, raises many issues.  The fundamental rights aspect is one thing, so you have to focus on that aspect.  But there are also concerns about safety. 

In Europe, again, as I mentioned, the AI Act deals a little bit with the safety and security of AI systems, although that focus is very specific to some use cases, so the Metaverse is not really covered in that sense.  But you also have content moderation and disinformation concerns that might arise on these systems.  That's another aspect we have to talk about. 

At some point, when this thing becomes concrete, we will have to regulate.  For example, in the EU at least, there is a discussion that the Metaverse could be on the agenda for next year, although I don't really see any concrete proposal or initiative coming out yet. 

   >> SHAUN PATHER:  Yeah, interesting discussion.  To add: the Metaverse already implies a different level of globalization and the like.  The question is what would be a practical approach, and I do think the practical approach is about cooperation and agreement on principles, firstly at the global and then at regional levels. 

    You say that there are different ethical and legal frameworks, and that is true, but I do think it is possible to produce a set of globally accepted principles, within which there could then be some delineation, focusing and tweaking, aligned to what we have already acknowledged as differences between regions and cultures.  Certainly, on an ethical basis, there is a universal set of principles.  And in relation to cybercrimes: globally, most countries, though not all, have found ways to cooperate around cybercrimes. 

But I think this in particular is something that needs far more work, because it is a major issue to figure out how to address and prosecute crimes that are committed from elsewhere.  That's a matter that needs to be elevated in international policy and debate. 

   >> CELINA BOTTINO:  If I may add another reference here, quoting colleagues from Berkman Klein: I will share here on the chat the Principled AI report that Berkman put together, analyzing all of the documents regarding principles and AI.  It tries to show, as Shaun was saying, the convergence on at least five principles, if I'm not mistaken, that are present in almost all documents, which include not only Government documents but also declarations from Civil Society and from industry sectors.  So there is another reference that would be helpful. 

   >> CHRISTOPH LUTZ:  Great.  Thank you very much.  We have about five to six minutes' time, and we want to do a very short last round from each speaker.  Before that, if we have any last questions, either online or on site, please raise them now.  If you ‑‑

   >> JANAINA COSTA:  Do you have another question?  We have one more question. 

   >> CHRISTOPH LUTZ:  Okay. 

   >> AUDIENCE:  Thank you. 

   >> AUDIENCE:  Thank you.  Actually, it is more a kind of reflection, but I would like to hear your thoughts as well, on the possibilities in the near- and medium-term future in terms of AI governance.  There are lots of scholars addressing the concept and the idea of digital imperialism or digital colonization, such as Nick Couldry.  It is related to the fact that most of the users of the platforms that process data are based in the so-called Global South, while we have a huge concentration in the market, mainly located in Silicon Valley in the United States.  And, if I'm not wrong, even OpenAI belongs to Elon Musk, doesn't it?  In Brazil we have a specific case: the Brazilian Government started to send historical Brazilian health care data to be processed by IBM's Watson AI.  And we have found the same process taking place within public education. 

    So my question, to anyone who feels comfortable answering it: is there any room for a global memorandum or coalition through which we can make sure that, at least for education and health purposes, AI infrastructure should be open and not for commercial purposes?  Thank you. 

   >> JANAINA COSTA:  Thank you.  Back to you, Christoph. 

   >> CHRISTOPH LUTZ:  Thank you.  I think we can use this as a quick last round before we close the session.  If all four presenters could give a very short response to the question, then we can wrap up the session afterwards.  Let's maybe go with the order ‑‑

   >> SHAUN PATHER:  I didn't quite hear the question.  I can't respond.  I was having a sound issue. 

   >> SAMSON ESAYAS:  I can start just briefly.  I think it is an important question: is there space for inclusion, for including the Global South, or everyone, in the discussion on building AI but also in the policy making?  I don't know what's happening in many places, but I think there is quite a lot of room for improvement.  Both the development and the policy discourse seem to be driven from one side, maybe from an EU and U.S. perspective.  Perhaps one would like to see more starting bottom-up, engaging the local community.  In Brazil, at least, we have seen some engagements where the local community gets engaged in building their own systems, their own software, for mapping specific areas. 

So that kind of local engagement, and financing local entrepreneurs, would be important.  But also a discussion that brings in perspectives from different regions and people with different backgrounds would be required.  And I think this platform that we have created now, this opportunity, is one thing that we need to continue and get better at. 

   >> CHRISTOPH LUTZ:  Please go ahead. 

   >> CELINA BOTTINO:  Quickly, dropping in to address that comment.  I guess there is still room, and maybe the question is how to make this room and move this agenda forward.  I would cite what UNESCO is doing, as it is trying to be a focal point for developing a framework for regulating platforms, focusing specifically on the use of data.  So some of these international organizations are already trying to stake out a standard position.  And, just as you mentioned education and health, following the example from the United States: not trying to do an overall policy, but going through specific areas, might be an easier way to move forward on such complex discussions.  And, of course, always being inclusive and having everyone who should be there, or at least most of the people, present in the conversation. 

   >> CHRISTOPH LUTZ:  Thank you.  So we are basically out of time, but maybe we can have a very short last statement from Sandra and Shaun. 

   >> SANDRA CORTESI:  Just quickly: the majority of the world needs to be included in these processes and conversations.  It is a complex undertaking; as someone who has directed the AI expert Commission for Colombia under the former President, I know that it is not easy, but I see a lot of good work happening, a colleague of ours working at Cuf, colleagues at ITS.  So there is no way this can be done without the majority world. 

And maybe as a plug again for my favorite community, young people: one in three Internet users is a young person, and yet they rarely get a seat at the table; they are rarely heard.  They have a voice and an opinion in this as well, and we should do better to include them in these processes.  So thank you so much. 

   >> CHRISTOPH LUTZ:  Thank you.  Shaun. 

   >> SHAUN PATHER:  In summary, my highlights from the governance perspective: one, from the African perspective, beyond economic dimensions we need to think about governance in a way that doesn't perpetuate digital inequalities, including in data-related infrastructure.  There is an AU data policy framework, but the jury is still out on it.  Lastly, we need to do much more to inform the policymakers and those in charge of governance, so that they understand the issues around AI and the related inequalities that are being perpetuated.  Thank you. 

   >> CHRISTOPH LUTZ:  Thank you.  I would like to thank all the panelists and participants for the interesting presentations.  I would also like to thank Janaina Costa and Christian Perrone, who were strongly involved in preparing this panel; great thanks to you both.  And just very shortly, if the presenters could stay for a minute or so for a group photo, that would be great. 

   >> JANAINA COSTA:  Thank you, everyone.  Thank you for this session.  Thank you.