IGF 2021 – Day 4 – Town Hall #53 AI for inclusion and diversity - 4 continents perspectives

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JANAINA COSTA: Hi, Christian. I appear as Christian Perrone, but actually it's me, Janaina. I tried to fix it on‑site. We have a colleague, Lucas. I just sent everyone a new link. My only concern now is that our participants, our audience, may experience the same trouble attending. We have more than 30 registered participants, but I don't have their email addresses, so I don't know if they're going to be able to join our session.

If there's anyone from IGF technical support here, my name is not showing correctly. I'm not Christian Perrone. Can someone help me fix that?

>> SHAUN PATHER: It's Shaun here. You asked us for help to do what?

>> SANDRA CORTESI: Hello.

>> CHRISTIAN PERRONE: Hello, folks. There was a bit of a technical glitch all over. Janaina had entered under my name, apparently, but now things are settling in. Thank you for joining us from different corners of the world. I know it's been a bit of a challenge this whole IGF week, but I'm happy that we are here together and will have a very interesting session and panel.

Thank you very much to you all.

I see most of us have joined or are joining. For those in our audience who do not know me, I'm Christian Perrone. I'm Head of Law and GovTech at ITS Rio, and I'll be the moderator today. That's why I'm blabbing a little bit now. It will be fantastic to have our panelists from four different continents representing four different points of view and discussing this very interesting topic of artificial intelligence. Obviously, we're going to focus on inclusion and diversity, of course.

And it will be fantastic to have these different views, this diverse range of people from very interesting and different backgrounds. So it will be an interesting opportunity for us to have an open and broad discussion on the topic.

As far as I can see, I believe most of our panelists have joined.

Let's start our panel for the day and our town hall, which is a bit different. We want participation from the audience as well, on the ground and in our Zoom meeting.

So, as I mentioned, today we'll be joining from different parts of the world, from four different continents. Four different panelists will each give their own view and try to explain things from their own points of view, from their own regions and continents, on this very interesting and important topic of AI and inclusion.

But in order to start, we'll have to wait for the IGF panel to open officially. So thank you very much. Let's wait a little bit longer and we'll go forward shortly.

(Video plays:)

>> We all live in a digital world.

>> We all need it to be open and safe.

>> We all want to trust.

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> BOTH SPEAKERS: We are all united.

(End video.)

>> CHRISTIAN PERRONE: So thank you very much. Now we have officially opened our panel, sorry, our town hall on AI inclusion and diversity from four different continents' perspectives. Joining me and all of us today are four great specialists. This is an event that we proposed as different organizations coming together through an interesting partnership called the 3AI partnership, which includes ITS Rio, but also BI Norway and the Berkman Klein Center from the U.S. So already from the start we have an open‑ended partnership focusing on different aspects of AI. Joining us today we also have a panelist from the University of the Western Cape in South Africa, so we are focusing on those four different continents and trying to propose interesting views for the future and for the whole globe.

The idea of the panel today is to have a frank and very open discussion about the main risks and concerns in terms of AI, but also the main opportunities and ways to mitigate those risks and concerns.

And the idea of having this as a town hall is to provide an opportunity not only for these panelists to share their open‑ended and interesting views, but also, in the second part of our discussion, to open the floor for contributions from different members of the audience.

For these contributions, we have three possibilities. In the second part, we'll have an opportunity for people to open their mics and talk, but they can also contribute continuously in the chat, for which we have our great moderator Christian Fieseler here from BI. And third, we have a mural, a new type of technology we will use so that during the discussion we can capture interesting ideas, concepts, phrases, and words. With that, we'll have the help of Janaina Costa, a researcher at ITS, who will help us with this new technology. Throughout the session, she'll post the link so that we can contribute during our discussion today.

In the first part, we have four panelists with us: Samson Esayas from BI Norway, where he is a professor; Sandra Cortesi, Director of Youth and Media at the Berkman Klein Center; Celina Bottino, Director of Projects at ITS, the Institute for Technology & Society in Brazil; and Professor Shaun Pather from the University of the Western Cape in South Africa. I'll give each of them the floor for seven minutes to share their points of view on the main concerns, the main risks, the main opportunities, the main mitigating factors, and how they see this playing out in the next five to ten years. So seven minutes each.

We can start out with Samson. You have the floor for seven minutes.

>> SAMSON ESAYAS: Okay. Great. Thank you, Christian, for the introduction.

I'm working from Africa, but I'm here to speak really about developments in Europe; Shaun will take care of the African perspective. One of the questions for the panelists is: what are the main concerns? What are the salient features driving the AI discourse in relation to inclusion and diversity in Europe? I think discussions of AI in Europe are often framed around respect for fundamental rights. This conception or framing of fundamental rights often takes center stage. When we talk about fundamental rights, of course, we have several of them. The right to protection against discrimination based on race, gender, or sexual orientation is one important fundamental right recognized under the EU Charter of Fundamental Rights. The right to data protection is another important right recognized under that framework as well. And of course, we have other fundamental rights that are considered to be threatened or impacted by the use and deployment of AI.

I want to highlight two concrete cases which attracted some attention in Europe over the last couple of months. The first case is the use of AI systems for assigning grades to students. I think many of you have heard about this. As a result of the pandemic, many governments and examination‑assessing institutions had to find new ways of assigning grades, because students were not able to sit their regular exams. Many institutions turned to artificial intelligence for providing those grades, both international organizations and governmental institutions.

This led to some public outrage, for example in the UK, after the UK qualifications and examinations office used an AI system to modify or assess the grades given by teachers. Many students took to the streets because they found that the AI system discriminated based on their socioeconomic background. The data, at least as some media reported, showed that the algorithm benefited students from schools in privileged neighborhoods, because the examination office also took account of schools' previous performance in the algorithm.

That led to demonstrations in the UK, and also in other parts of the world.
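To make the mechanism described above concrete, here is a minimal, hypothetical Python sketch of a grade‑moderation rule in which an individual's teacher‑assessed grade is pulled toward the school's historical average. The weighting, numbers, and function name are invented for illustration; this is not the UK examination office's actual model, only a sketch of how such moderation can penalize strong students at historically low‑performing schools.

```python
# Toy moderation rule: blend a teacher-assessed grade with the school's
# historical average. All values, including the 0.6 weight, are illustrative.

def moderated_grade(teacher_grade: float,
                    school_history_mean: float,
                    weight: float = 0.6) -> float:
    """Pull an individual grade toward the school's past performance."""
    return weight * school_history_mean + (1 - weight) * teacher_grade

# Two equally strong students, both assessed at 90/100 by their teachers:
print(moderated_grade(90, school_history_mean=85))  # 87.0 at a high-performing school
print(moderated_grade(90, school_history_mean=55))  # 69.0 at a low-performing school
```

The rule is applied uniformly, yet it produces very different outcomes for identical individual performance, because the school's history acts as a proxy for socioeconomic background.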

This is perhaps one of the few instances where the use of AI systems actually led to public outrage. Another, more recent example is that this year the whole Dutch government was forced to resign because of a scandal involving a child benefits scheme. The scandal emerged after the tax authority implemented an algorithm to detect fraud in child benefit claims. This algorithm mistakenly identified around 20,000 parents as potential fraudsters, and, not surprisingly, many of these parents had an immigration background. The problem was that, because of a few instances of actual fraud cases associated with migrant parents, the authority had generalized immigration background as a factor for potentially fraudulent claims of child benefits. These are some of the concrete cases that show the concerns the EU is dealing with and that are driving the discourse in Europe.
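Again purely as an illustration of the mechanism being described, the sketch below shows a toy rule‑based risk score in which immigration background is treated as a fraud‑risk factor. The features, weights, and threshold are invented for the example; this is not the Dutch tax authority's actual system.

```python
# Toy fraud-risk score. Feature names, weights, and the threshold are
# invented; the point is only to show how including a protected attribute
# as a risk factor produces disparate flagging.

INVESTIGATE_THRESHOLD = 0.5

def fraud_risk_score(claim: dict) -> float:
    score = 0.0
    if claim["income_inconsistency"]:
        score += 0.4
    if claim["missing_documents"]:
        score += 0.3
    # The problematic step: immigration background itself counted as risk.
    if claim["dual_nationality"]:
        score += 0.3
    return score

claim = {"income_inconsistency": False,
         "missing_documents": True,
         "dual_nationality": True}

score = fraud_risk_score(claim)
# 0.6 True: flagged for investigation only because of the nationality feature;
# without it, the same claim scores 0.3 and is not flagged.
print(score, score >= INVESTIGATE_THRESHOLD)
```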

In terms of initiatives, Europe already had existing legislation that provided some protection. For example, the GDPR has some provisions, particularly Article 22, which deals with protection against solely automated processing and the right to contest automated decisions. In the Dutch case, for example, the Dutch Data Protection Authority has already imposed a fine on the Dutch tax authority based on the GDPR.

So we see there is already some existing legislation that provides some solutions to these problems. But there are also major legislative initiatives that many of you are aware of. One in particular is the proposal for the Artificial Intelligence Act, a regulation which was unveiled by the European Commission in April of this year.

This is very ambitious, perhaps the first comprehensive legislation specifically addressing AI systems across different sectors, and it is a complex proposal. We won't have time to look at its details, and I'm sure there are many panels at IGF talking about it. But here I want to focus on some of the provisions that touch upon this issue of inclusion and diversity.

The objective of the proposed AI Act is basically to establish human‑centric AI: AI systems that respect fundamental rights and that are robust, accurate, and trustworthy. So you see that the fundamental rights perspective again is central to the objective of this regulation.

Based on these two factors, the protection of fundamental rights and the security and safety of these systems, the proposal adopts what is referred to as a risk‑based approach and identifies three categories of AI systems. The first category is AI systems that are considered to pose an unacceptable risk to the fundamental rights, safety, and security of individuals. If an AI system falls into this category, it cannot be used or deployed within the European Economic Area.

The second category is AI systems that are considered to pose a high risk to the fundamental rights, safety, and security of individuals. These AI systems have to comply with a detailed set of regulations and specific requirements related to risk management, data governance, transparency, and human oversight.

Apart from those obligations, there are conformity assessment requirements. You also have to produce a third‑party certification or assessment showing that you actually comply with the requirements set out under the regulation.

The third category is low‑risk AI. For this kind of AI, there are only some transparency obligations.

I will briefly touch upon the first two categories, AI systems that pose unacceptable risks and AI systems that pose high risks, and some of the issues related to inclusion and diversity there.

In relation to the first category, AI systems that are considered to pose unacceptable risks, there are four types of AI systems. I won't be able to go through all four, but just to highlight one of those areas: the proposed act prohibits the use of real‑time biometric identification in public spaces. Basically, the regulation says that it is not possible to use facial recognition technology or biometric identification systems in real time in public spaces.

There are, of course, some exceptions, but there is now a general prohibition on the use of real‑time biometric identification in public spaces.

I think many of you are familiar with cases of discrimination based on facial recognition technology, especially the many instances in the U.S. where facial recognition technology disproportionately misidentifies people of color, and the impact that has on people with a minority background. We've seen some states in the U.S. start to introduce bans on the use of facial recognition technology by law enforcement. Here we see the EU also moving towards that kind of prohibition on the use of such biometric identification in public spaces, of course with some exceptions where these systems might be allowed.

In relation to the second category, AI systems that are considered to pose high risks, there is a long list of AI systems that fall within this category, which I'm not able to go through in full. But to look at some of the list: we have AI systems used in education and vocational training. If you are using an AI system to decide access to education, that would be considered high‑risk AI. If you're using an AI system to assess grades or tests required to enter an educational institution, that would also be categorized as high‑risk AI. The case we discussed in relation to the protests in the UK over the use of algorithms would fall into this category; it would be high‑risk AI, and the user or provider of that AI system would have to comply with very specific, detailed obligations under the regulation.

We also have the use of AI systems for employment and recruitment purposes. We have had similar experiences where AI systems used, for example, to sort CVs discriminated against certain groups, in some instances women, people of color, or people with certain names, such as Islamic names. Those systems would be considered high‑risk AI.

And we also have the use of AI systems for deciding public benefits, such as social security benefits. The Dutch case would be covered here: if you're using AI to assess eligibility for social welfare benefits, that would also be considered high‑risk AI.

And there are, of course, further listed AI systems relating to the assessment of asylum claims and the use of AI in the judiciary or law enforcement, for example for assessing the length of sentences, which often also involves very discriminatory practices.

So these are some examples.

If an operator or user is using one of these AI systems, they have to comply with a set of obligations, as I mentioned: obligations in relation to risk management, technical documentation, transparency, security of the system, and accuracy of the system. But I will highlight one particular obligation in relation to data governance and the data policy requirements under it.

This is Article 10 of the proposal, which basically sets out an obligation that, if you're a provider or user of one of these high‑risk AI systems, you need to have a data management framework that ensures the trustworthiness of that system. Among other things, this requires that any dataset used for testing, validating, or developing the system be relevant, complete, representative, and error‑free. We know that many of the discrimination cases we have had, at least the majority of them, are related to a lack of representation in the dataset. So this provision of the AI Act is basically trying to remedy the problem we often see, where certain groups face discrimination because they are not represented in the dataset used to train the AI systems.

So this obligation basically requires you to have a representative dataset that is complete and error‑free. This is also subject to conformity assessment: before putting the AI system on the market, you have to make sure someone has confirmed that you comply with the obligation.
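As a rough illustration of the kind of dataset check Article 10 points toward, here is a small Python sketch that compares subgroup shares in a training set against reference population shares. The tolerance, group labels, and numbers are assumptions made up for the example; the act itself does not prescribe any particular test.

```python
# Hypothetical representativeness check: flag subgroups whose share in the
# training data deviates from a reference population share. The 5% tolerance
# and the group labels below are illustrative assumptions.

from collections import Counter

def representation_gaps(samples: list[str],
                        reference_shares: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return subgroups whose observed share deviates from the reference
    share by more than `tolerance` (positive = over-represented)."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = observed - expected
    return gaps

# Toy training set: 80% group A, 15% group B, 5% group C
data = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
population = {"A": 0.60, "B": 0.25, "C": 0.15}

print(representation_gaps(data, population))
# {'A': 0.20, 'B': -0.10, 'C': -0.10} -> A over-represented, B and C under-represented
```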

It's interesting that noncompliance with this obligation also results in the highest fines under the proposal. The fine can be up to 30 million euros, or 6% of the global annual turnover of the company, if a company does not comply with this rule.
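As a small worked example of that ceiling (assuming, as in the published 2021 proposal, that for companies the applicable maximum is the higher of the two figures):

```python
# Fine ceiling as described above: up to EUR 30 million or 6% of global
# annual turnover; the assumption here, based on the 2021 proposal, is that
# for companies the higher of the two figures applies.

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

print(f"{max_fine_eur(100_000_000):,.0f}")    # 30,000,000 -> flat cap applies
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000 -> 6% of turnover applies
```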

So that is something interesting.

>> CHRISTIAN PERRONE: Fantastic, Samson. You have given a very broad overview of the main questions that concern the EU, and we can come back to the specifics of the solutions that have appeared in this proposed EU AI Act. Thank you very much for that.

So that we can move continents and broaden the overview a little, I will call upon Celina Bottino, the Director of Projects at ITS, to give us a point of view from Latin America, and particularly from Brazil.

>> CELINA BOTTINO: Hi, Chris. Thank you, and hi to all of you on the panel. I'll try to be quick so that we can have more time for conversation. Since we're talking about AI inclusion and perspectives from the Global South, I would like to take a few big steps back before focusing on AI specifically.

It's always nice to remember that ITS, together with the NoC, the Network of Centers, and the Berkman Klein Center, has been discussing this topic of AI and inclusion since 2019, when we had the opportunity to hold a big event down in Rio.

I think now it's maybe time to have a second edition, right?

That was when we started to try and frame AI discussions through this lens of inclusion. Back then we had just some research questions that were emerging, but now I think we have advanced a lot on the uses and we have more to discuss.

But I would like to focus on some points here. As I mentioned, when we talk about AI, and I'm looking at Brazil specifically, there are some issues regarding connectivity, infrastructure, the data ecosystem, and education that I think are very important to address.

There's still a global divide in the quality of connectivity, right? Considering that most, or I guess all, AI applications will depend on a certain type of connection, having good internet access is crucial for any use of these applications.

And we're still, unfortunately, lagging behind. For example, Brazil has a considerable number of people who are not yet connected, or not with a good connection. One point that was raised is the fact that our public schools are still not connected to the internet, so they do not have access to any of the technological solutions that are used in schools.

And in the pandemic, Brazil was a place where I think schools were closed for the longest time, more than one year. The only way to learn was through online education, and students from public schools, unfortunately, were in practice left without access to any kind of education, because they simply had no way to get to the content.

So I think this shows how much we need to fix some crucial foundations, let's say, for the use of technology and AI as a whole.

Another important point is the data infrastructure that is necessary to feed AI applications. For example, when you use self‑driving cars, they will be using maps, Google Maps or some other kind, and there are still many regions that are not mapped. And I'm not talking about remote regions. I'm talking about places right in the middle of Copacabana, one of the most famous neighborhoods in Rio, where the low‑income community that exists right next to those big, high buildings does not appear on the map. So you can see how important it is not to replicate the exclusion that still happens in the offline world, let's say.

Another point is the quality of data. There's a lot of work to be done to transform our data into machine‑readable data. I think the health sector is a good example where you could use a lot of AI applications to help better organize services. We were talking with some developers who were trying to build solutions in this sector, and they just said they didn't have data to work with. So unfortunately, they were leaving the country to try and develop their ideas somewhere else, because they just didn't have the raw data to work with.

Another point on data: until two years ago, Brazil didn't have a data protection law. Now we have one, but it's still very much at the beginning of its application. So the importance of, and concern with, data protection is also something that should be noted.

The other point is the issue of education, right? And the need for reskilling. As we saw, a lot of national AI strategies focus on this topic, which is crucial, especially when we're talking about countries in the Global South, where education is the gateway for people to be included and connected to these new possibilities. So what is the plan a country should have for preparing its population to use, to develop, and to be producers and not just consumers of this technology? Unfortunately, we do not see much of any of that in the national AI strategy. And just before wrapping up, a very brief overview of the Brazilian AI National Strategy, which was published early this year and which left people frustrated. It was the result of a public consultation that was very nicely done, let's say, because it was very much multistakeholder: it was open for people in all sectors to contribute, researchers, industry. But the result did not reflect all the contributions that were made in the context of the public consultation.

So the result was more like a mapping, let's say, of the AI principles that we're seeing everywhere.

But it lacked more strategic points or objectives, like what we would like as a country and how we would like to position ourselves; that was not there.

But I know there's going to be a second round of discussions to produce a second version of this AI strategy, which we hope will be a bit better than the first one.

And lastly, regarding regulation in Brazil: even though we have all these problems I mentioned, our congress is very excited and thinks it really should regulate AI, like, by yesterday, as if it were a very serious threat and a truly urgent matter. A bill was already approved in one of the houses, and this happened in the context of the pandemic, when not much conversation was possible. So, differently from the process of the strategy, the discussions regarding the bill regulating AI were not multistakeholder. The process was not very open, and there was not even time to discuss, especially on an issue which really needs a lot of discussion. We see how Europe is taking its time, really looking into it and suggesting frameworks, while we're just rushing through. Thinking about what countries should learn, that's something that should not be repeated elsewhere, let's say. But I would finish on a positive note: any initiative that tries to regulate these topics should be truly multistakeholder, and especially should have enough time for discussion, because any rushed regulation could also impose barriers to innovation and then create other problems. And there are other possibilities, like regulatory sandboxes or prototyping legislation. Meta has a project like this, Allude? I forgot the name. It tries and tests specific draft legislation on a real‑world AI application, observes how it would work, and then goes back to discuss whether it makes sense and is having the intended results, and if not, how to adjust it.

I think I'll leave it here. I'm happy to be here and happy to hear from the rest of the panelists.

>> CHRISTIAN PERRONE: Thank you. So if we have an issue there, it's about diversity and discrimination. It's always interesting how you bring everybody together, and probably a step forward is having these multistakeholder points of view. That's very interesting as well. Now, let's go to Professor Shaun Pather to give his views from South Africa and Africa as a whole. Thank you very much, Professor.

>> SHAUN PATHER: Yeah, thank you very much. I have a few slides; is it better if I present in a more structured way? Could the moderator enable screen‑sharing?

>> CHRISTIAN PERRONE: Let's see if I'm allowed to do so. Give me one second.

>> SHAUN PATHER: If not, I'm just going to talk through them.

>> CHRISTIAN PERRONE: I'll see if I can. I think you have cohost status. So probably you will be able to share your screen.

>> SHAUN PATHER: Yes. Thank you.

>> CHRISTIAN PERRONE: Fantastic.

>> SHAUN PATHER: Okay. Yeah. Apologies for that. I thought this was better, and hopefully I can keep to the time allocated to me. The many arguments we're already familiar with, the arguments that have driven us towards having this session around the impacts on inequality and diversity, are well documented, and we continue to document them. But in thinking about Africa, I must quickly reinforce the state of digital inequality on the continent.

I have extracted a few bits of data from the ITU's 2021 Facts and Figures publication, which, for those who are familiar with it, is basically a comparison of data from across various continents.

So firstly, from the perspective of simple use: one cannot talk about AI and its applications without considering them within the context of the internet, because the internet is essentially what's driving much of the inequalities and other issues we talk about.

Here, if I can just pick up the laser pointer, we can see where Africa is sitting relative to the rest of the world.

Again, in terms of location, urban versus rural, and especially given the substantial rural populations in Africa, we can see that compared to the rest of the world the continent is not well positioned.

If we think about mobile networks, because that's what serves the greater number of people on the continent, again you can see, compared to the rest of the world, that more than half still have only 3G or even 2G connectivity.

The issue of affordability is a serious matter. You can look at all the reds and pinks here, where costs are between 2 and 5 percent, and between 5 and 10 percent, of GNI per capita on this continent.

And I suppose, compared to the rest of the world, that's not that different from South America.

And then, lastly, the issue of skills. The ITU measures basic, intermediate, and more advanced skills; this is just basic. As you can see from the coloring, we're at zero to 20 percent, and there's not much happening across the rest of the continent.

So as a starting point, the issue is that digital inequality is already very prevalent on the continent. Some of the key pillars that underpin it are skills, infrastructure, affordability, and universal access.

It's interesting, because I've been working in this area for many years, and by and large market forces seem to drive operator and for‑profit models.

And those whom we most need to bring into the digital era continue to lag behind.

So I think the opening point around the issue of inequality is that there is already a very severe problem of digital inequality. And then, looking a little at what I could find around the continent on policy ‑‑ oops. Well, I don't know what happened there.

Okay. On policy discourses, there's nothing much. I tried to see whether the AU has produced anything. I have some comments from an event that took place last year, where the Fourth Industrial Revolution was very much on the agenda. I extracted two quotations from a press write‑up of that event, because they talk about the African Digital Transformation Strategy, and the focus of this strategy, from an African perspective, is being seen as a leapfrogging opportunity.

In other words, we are down here in terms of the digital stakes, and with the advent of 4IR technologies there's a sense that, well, perhaps Africa can leapfrog and put itself up front in terms of where all of these emerging technologies are going.

Because I'm more familiar with South Africa, I asked: is there any part of our policies that even begins to examine issues of inequality and the rest? Of course, the Constitution of South Africa, which is well known for the way it was developed, does give the underlying protection. And then we have a National Development Plan which focuses on smart technologies, with nothing about inequalities coming through.

Urban development plans, again, focus on efficiency and service delivery. We developed an e‑strategy in 2017 which, again, looks at smart cities.

The National Integrated ICT Policy White Paper, our overarching ICT policy document, speaks of a digital society and emphasizes privacy and security, but that's about where the policy discourse stops.

So, looking at where policy stands and thinking of government, the issue for me is that we're setting up a perpetuation of digital inequality. The average poor person is already digitally excluded; there is a fortunate proportion who have access to and use of basic ICTs.

And I say this at large because, for those of us who operate in the mainstream metropolitan areas of the continent, we very often forget the reality of the large swathes of the population who are outside the metros. By and large in Africa, most of the key metros are not that badly off in terms of infrastructure.

So Fourth Industrial Revolution technology developments, including AI and machine learning, don't focus on how we would support socioeconomic development. That's the first problem, and it's going to perpetuate digital inequality.

And the fact that AI is driven by data, and that large numbers of people are not even active on digital platforms, means they're going to remain outcasts. That underscores the inequality that's going to continue.

So the current focus has to shift. We've made a lot of progress in terms of privacy and protection; in this country we have a very well‑conceived act. But the discussions, debates, and policy frameworks focus on how we build the economy, how we make new industries, new manufacturing, et cetera, and how we leapfrog. Without thinking about how we overcome issues of poverty, we're going to entrench those social and economic problems, and they will loom large. That's the concern as we move forward: the focus is going off in one direction only.

So to finish off, I've listed a few inclusion and diversity concerns. Some of my colleagues who have spoken just now might overlap a bit, especially the Latin American presentation. I've made this point about the digitally marginalized becoming more marginalized, and I'm not going to repeat these statistics because I've made the point.

Then there is the divide between big and small business. Most countries on the continent have micro‑entrepreneurial activity as the cornerstone of economic development. But when you look around the environment, AI technology is being driven by big business, by multinationals, in terms of where the objectives of applying it are focused.

And the potential for inequality exists here, because if the economy becomes skewed towards the bigger players, we're going to create a separation between big business and the smaller players who need to entrench themselves to drive economic growth. That's one concern.

Now, the issue of skills. There is some documentation and work that suggests the new technologies are going to create more jobs than will be lost, and the expectation is that we're going to reskill and retool. But my personal view is that, given the state of high unemployment rates and the levels of poverty, I don't think the currently unemployed are going to have better prospects in a world dominated by these emerging technologies. I'm not sure; I don't think the evidence actually tells us that.

So the potential for social inequality to become entrenched, which I think colleagues have already spoken about, is quite clear, because more and more decision‑making relies on AI technologies in the background, sometimes in ways that we don't even realize.

I think this is the last one. I will stop here. Maybe in the discussion we'll pick up others because I'm out of my time.

Here is a good example of how AI was used for political manipulation in South Africa, where Twitter bots were used to drive a particular agenda and push the notion of a concept called white monopoly capital, which fueled massive racial discord in South Africa just a few years ago and went unchecked until it was discovered.

So these are all potential concerns. I have ideas about how we mitigate them, but we're out of time, so I'm going to leave my looking‑ahead thoughts to the open discussion, Chair. I've exceeded my time by one minute. Thank you.

>> CHRISTIAN PERRONE: Thank you very much, Professor. That was a great overview. I really see a lot of overlap between the different regions, particularly Latin America, you're quite right, on the issues of digital infrastructure and the digital divide, and how they will impact not only AI development but also its implementation in the future. So thank you very much for that.

And now we can have our fourth panelist, looking from the standpoint of youth and media: Sandra Cortesi, from the Berkman Klein Center in the U.S.

>> SANDRA CORTESI: Thank you very much. Please make me a cohost. Even though you confirmed that already, please make me a cohost. Thank you.

>> CHRISTIAN PERRONE: Sorry? Oh. Fantastic. Thank you very much.

>> SANDRA CORTESI: Okay. You can hear me and you can see my slides, yes? Okay. So let's try to be brief. After Europe, Latin America, Africa, I'm here to share some perspectives from the U.S. My name is Sandra Cortesi. I work at the Berkman Klein Center for Internet and Society at Harvard University based in Cambridge, Massachusetts.

Maybe as a footnote or a caveat: because my background is in psychology, I tend not to represent views related to law, legislation, or even regulation.

I asked my colleagues at Berkman Klein for some help in this regard, and so here are some inputs from them that may set up my conversation for my seven minutes.

I asked them: what's going on in the United States related to artificial intelligence? Here are some observations. One, things are happening, but mostly at the state level rather than the federal level. Two, AI is most often regulated on a sectoral basis, with the emphasis on autonomous vehicles, AI in the judicial system, policing, and creditworthiness.

And then at the federal level, emphasis is placed on research and development, rather than regulation.

Maybe one noteworthy development from earlier this year: the National Defense Authorization Act was approved by Congress.

One of the AI‑related elements within that act is the National Artificial Intelligence Initiative Act, which mainly focuses on improving national competitiveness in AI via research and development, investment, improved interagency coordination, education, a topic very dear to me, and standards development.

The law facilitates soft law guidance and may instigate enforcement action.

What is truly interesting to me about this act is two specific things. One is related to the lack of AI talent: it talks about the development of an artificial intelligence science and technology workforce pipeline. The second is a required activity that is very much of interest to me: the implementation of education programs or curricula, not only at the postsecondary level but also in the K‑12 realm, which is usually where I connect to this, because my focus is on young people ages 12 to 18.

But if you have specific questions about the National Artificial Intelligence Initiative Act, please don't hesitate to contact my two colleagues listed here on this slide.

Okay. So inclusion and diversity in the public discourse, something I follow very closely, tend to focus on uneven access and the possible biases or discriminatory impacts of AI‑based technologies on different populations. We heard this a couple of times already, with the focus on the disturbing risk of amplifying digital inequalities.

In the U.S., when we talk about populations, we tend to include, at least, urban and rural poor communities, women, youth, LGBTQ individuals, ethnic and racial groups, and people with disabilities. In the public discourse in the United States, I would say women and ethnic and racial groups are most often centered. But the one community I am most interested in is, of course, the youth community.

So I thought I would use my couple of minutes to talk a little bit about what's happening in this youth realm. For sure, we can say we're at the very beginning of understanding the impacts of artificial intelligence on young people, and not surprisingly this lack of evidence and research is also reflected in the various policy documents.

If you look at all the national AI plans, it has become a little bit better, but this visual is from a mapping that UNICEF did, I think in 2019, where you can see that, also in the U.S., youth issues or youth topics are never mentioned in the national AI plans.

A little bit more has happened since then, but I think there's still a lot to catch up on.

To end, I wanted to share two observations from conversations with young people in the United States that may affect how we think about law or regulation in the United States and beyond. The first observation is that we as adults and young people often use the same terms but refer to very different things. That is important, because if you write something into an official document, it is likely written in a form reflecting how adults think about it. A very prominent example of this, also in the AI debates and discourses, is privacy: for a young person, privacy means something very much connected to social and interpersonal elements, so they think about parents, friends, teachers. Adults, by contrast, have an institutional or commercial concept of privacy; we think about governments and companies. So it is important, as you write privacy protections or things like that into a document, to be aware that perspectives may clash on what that concept even means.

Another element related to privacy is the concept of surveillance, where young people again have a different perspective: for them it's much more related to surveillance by adults. Another concept is the question of autonomous vehicles: young people are not yet able to drive, so autonomous vehicles may mean something very different to them than to an 18‑plus or 16‑plus‑year‑old individual.

And the last concept related to AI is this question of personalization. I just had a really interesting conversation with young people two days ago: when they think about personalization, their biggest fear is manipulation and landing in a kind of filter bubble. In their minds, they would like an on/off switch for personalization, where they can see information tailored to their interests but are also able to switch it off. I would be curious to see what adults think about that.

To end, the second observation from young people is that, in order to be relevant to young people, it's important to consider different approaches. Obviously, law and regulation is one way to go about it, but another way would be to think more broadly about education, which connects back to where I started. Particularly as things move relatively quickly and young people learn as they go, I think law and interventions in that space lag a little bit behind. So education is something important to consider.

So, Youth and Digital Citizenship+: I very much recommend this report, which we wrote and published in 2020. It talks about education broadly, including artificial intelligence, and we did a mapping looking at who is actually talking about education and AI; in short, very few. So we're trying to overcome that by developing tools and lessons, including on AI, that anyone can access, in 35 languages, under a Creative Commons license.

Last but not least, just to flag the new policy guidance on AI for children that came out last week, developed by UNICEF and other institutions. I highly recommend it as a reading resource.

Okay. Thank you very much.

>> CHRISTIAN PERRONE: Thank you very much, Sandra. It was fantastic.

One of the most interesting things about this kind of discussion is that we have not only four different views from four different continents, but four different views from different genders, different people, different understandings, and obviously different backgrounds as well. So your background in psychology and youth is fantastic.

Your point about the on‑and‑off switch for personalization and manipulation fascinated me. This is something interesting not only for youth, but particularly for them.

So now we have about seven minutes. What we can do first is open the floor, if there's anyone who wants to speak. If not, with Janaina's help, we can showcase a little of the mural that we have been building throughout your four contributions to this discussion.

It was quite interesting. If you look at the mural now, you will see lots of the things that we have discussed during these 50 minutes of our town hall discussion.

You see that some of the risks, potentials, and concerns are to an extent similar, but they also refer to different situations: to technical aspects, to rights aspects, to specificities concerning infrastructure, and to which groups might be affected by AI, for instance when we talk about skills or surveillance. I find it quite interesting that, as Sandra mentioned, surveillance from the standpoint of youth does not carry the same sense as when Samson was talking at the beginning about facial recognition in the public sphere. Surveillance is not only that, but also surveillance by their parents. This is also an issue that can be discussed.

So let's open the floor for the next five minutes to discuss a little what you think about the next ten years. If you can wrap up your views in about a minute each, that would be fantastic, and we can finish on a high note thinking about the panorama over the next ten years. I know it's going to be hard to do that in one minute, but let's try and share our views.

So let's start from a different point of view. Let's start with Professor Shaun Pather and then move around a little bit on that. Thank you. You have the floor for final remarks.

>> SHAUN PATHER: Thank you. I think I'll just focus on one issue. There are a couple of ideas, but the main one is that, from the African perspective, we need a more structured and coordinated response in terms of AI governance. Among international frameworks, we have the OECD principles on trustworthy AI, and there's a global index on responsible AI being developed. I think we need a universal framework, under which we then deal with country‑level tweaking and implementation, because ethics differ from continent to continent. That way, regardless of where technologies are developed, they will be subject to a common set of AI‑related ethical standards that we can develop in concurrence under the banner of the UN. Thank you.

>> CHRISTIAN FIESELER: And thank you very much. A fantastic point about coordination.

So, Celina, if I can call on you for one minute to give your final remarks: looking ten years into the future, what do you think is necessary?

>> CELINA BOTTINO: Thank you. Well, I think what is necessary for these next years would be a massive investment in trying to mitigate all these connectivity issues that we mentioned, which are similar to those in Africa, too. So, get at least the minimum infrastructure in place, because there are also a lot of positive examples. I just learned about a Brazilian AI application used to help the accessibility of people who cannot write or move, which they can use just by blinking their eyes. I'll share it here. So I think there are many positive things to come, and I'm optimistic.

>> CHRISTIAN PERRONE: Thank you very much. Investment is also a very important point. Sandra, you have one minute.

>> SANDRA CORTESI: My hope for the next ten years would be that we increase youth participation in these debates; not only that they take part in the debates, but that they get a seat at the table and become designers and developers of AI‑based systems. How to do that, I will post in the chat in a second. Thank you very much for having me.

>> CHRISTIAN PERRONE: Fantastic. That's a very interesting point. We were talking about multistakeholder participation, and this adds youth as participants as well. It's quite an interesting point of view.

So now you can finish up our discussion, Samson. You have the floor for the next minute.

Thank you very much.

>> SAMSON ESAYAS: Yeah, I think in Europe the main focus would be on advancing the legislation that is now at the proposal stage and passing it into law.

And I think one important thing to emphasize in the legislative proposals is this focus on the fundamental rights of individuals: the right to privacy, the right to protection against discrimination. The focus on those aspects is something we need elsewhere too, not only in Europe but also in other parts of the world. So I hope that we will see similar protective legislation coming out with a focus on that aspect. Yeah, I think I will stop there.

>> CHRISTIAN PERRONE: Thank you very much, Samson. I think that's the main point to finish on, a very interesting high note of multistakeholder participation, of looking at financing and organizing for the future, and obviously of passing legislation that focuses on the rights of the people most affected by this. So I think that was a great town hall discussion. Thank you all very much. Thank you to our group, our 3AI partnership, and thank you, Professor Shaun Pather, for participating with us. And thank you to the people in the background and the IGF people who have helped us so much; I could not mention every single person, but I know they deserve our thanks as well.

So, cheers. I hope we continue this discussion further at a different event and in a different forum. Cheers. Bye.