The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
>> Good morning, everyone. We're starting in a few minutes. I want to encourage everyone to come to the front. Let's make it like a roundtable conversation. We're just starting.
>> Good morning. Welcome, everyone. Welcome, also, those in the back. We have many seats around the table. Please, if you want to join, it will make it easier for us to have an interesting conversation. We have 60 minutes. I think they can do very good television shows in 60 minutes, so we have no reason not to do a good debate in 60 minutes. It's my pleasure to welcome you to this UNESCO forum, Business and Human Rights in Technology Project: Applying the UN Guiding Principles on Business and Human Rights to Digital Technologies, based on a concrete text that UNESCO is launching.
We also have all sorts of contributors, other series of reports of UNESCO connected to this issue. So I think we'll have an interesting kickoff. And then, of course, the ideas that all of you can contribute to this central question, that is how we can enhance and foster policy. You will see that my colleagues who prepared this are very ambitious with the questions.
Of course, we'll need an entire year to discuss the questions, but, if we can offer some inputs to these ideas of building policy, I think we'll have achieved an interesting result for the reports.
It's my pleasure to introduce my colleague who is the head of UNESCO delegation here in this IGF but also the head in the area on ICTs in connection with several UNESCO areas and our headquarters.
All the speakers, by the way, have very, very interesting CVs. In the interest of saving time, I'm not going to read all of them, but I will ask you a specific question so you can start thinking ‑‑ to describe yourselves in no more than seven words. So you have the challenge first.
>> Thank you so much. So I head the science and agriculture section ‑‑ seven words? ‑‑ and I thank you for your introduction. I extend a very warm welcome to all of you on behalf of the Assistant Director-General for Communication and Information, who cannot be with us today. In UNESCO, the General Conference, which decides on programs and budgets, meets every two years, and this is an important week because one of its commissions last Thursday decided ‑‑ and this relates directly to our work here ‑‑ to develop a standard-setting instrument on the ethics of AI: a human rights based, human-centered, ethical recommendation. That decision will be confirmed this week. The process will then be launched; it will take two years and be a very inclusive one.
This is also why our Assistant Director-General could not be with us today. We're very happy about this.
Ladies and gentlemen, we can think about alternative futures and make choices.
Even as there is a broad consensus that AI is being used to transform different spheres of human activity, we remain in a state of flux when it comes to understanding the implications of this transformation for individuals and societies.
We have brought with us today some leaflets ‑‑ you will find them here ‑‑ which are teasers for the report we'll launch in a few weeks and which benefited from review by some of the panelists here with us.
This report recognizes that AI is an opportunity to achieve the 2030 sustainable development goals through its contribution to building what UNESCO calls inclusive knowledge societies. These are inclusive societies based on free expression, access to information, quality education, and respect for linguistic diversity. We need to realize the dynamic distribution of AI across multiple and dispersed centers within government, the private sector, the technical community, civil society, and other stakeholders worldwide.
It is for this reason that this multistakeholder gathering is vital. Based on UNESCO's Internet universality framework adopted in 2015, this study analyzes how AI will impact human rights in terms of freedom of expression, privacy, journalism, and nondiscrimination; how openness needs to inform the challenges related to AI; how access to AI hinges on access to algorithms, hardware, human resources, and data; and how a multistakeholder approach can govern the challenges for the benefit of humanity.
In addition, it advances AI for gender equality and for Africa, two priorities of UNESCO. The report, to be released soon, offers policy options that can serve as inspiration for the development of a new policy framework and for re‑examination of existing policies. I sincerely hope this report will assist stakeholders in understanding the transitions under way in our societies and help steer AI toward creating inclusive knowledge societies.
I hope we'll have a fruitful discussion today. Thank you.
>> Thank you so much, Cedric. So, some keywords from his opening remarks: UNESCO is a standard-setting organization, and a new mandate on this issue is coming from the General Conference. So during our debates, remember what setting standards in this area means, and the idea that we are an organization that can be called a laboratory of ideas. What new ideas should come from this discussion? It is also very important to underline the importance of building human rights based, open, accessible, and multistakeholder policies toward these issues ‑‑ that is the Internet universality concept approved by our General Conference. Thanks a lot. I think that helped to set the stage for the next speaker. The study is going to be presented in an incredible eight minutes, but you also have your seven words to present yourself.
>> Thank you for the nice introduction. Seven words about me: I think I am an advocate of the ROAM principles. This research was conducted over a year by a team of six colleagues. Unfortunately, the rest of the team is not here; I want to say thanks to them. We have several reviewers here, and we had a global peer review by 20 international experts. It's a 200‑page book. I have condensed it into 16 slides, and I have eight minutes. I think I need the help of AI to finish this job.
So I will be very brief in unpacking the major issues we have tackled, but the report gives precise examples and details.
We have heard so many discussions and research reports about artificial intelligence. What is special about this UNESCO research? We have listened to all kinds of utopianism, but we also try to examine the potential risks and challenges while we harness this powerful technology.
So, as Cedric framed very well, the second special feature of this book is that it uses the UNESCO ROAM framework to look at how AI is impacting different rights, how the principle of openness is reflected in the development of AI, what the challenges and potential are for inclusive access to developing AI, and also how to operationalize a multistakeholder approach in governing AI.
As UNESCO's contribution to the global discussion, we have highlighted two cross‑cutting issues: how AI can help us fight for gender equality, and how we can use AI to empower Africa and developing countries.
If you happened to attend our event on the indicators, you must also have heard that we have developed 303 indicators to measure ROAM‑X at the national level. We already have 12 country assessments. Those assessments are very conducive to AI development, as the researchers have recommended, because AI needs a very robust Internet environment in place to support this sophisticated technology developing in a country.
So, on human rights, for UNESCO we have identified four crucial areas. Freedom of expression and access to information have no doubt been expanded by artificial intelligence, but at the same time we see a trend that increasing online moderation to remove hate speech and violent content risks removing legitimate speech if there is no due process in place.
Artificial intelligence is also contributing to the amplification of disinformation even while it is helping us to access information.
The right to privacy is another crucial right at stake. AI can deanonymize data ‑‑ no data can remain completely anonymous ‑‑ and facial recognition is widely used in many countries. Privacy is something to which we need to pay more attention.
On journalism and media development, we have identified in the research quite a few good examples of journalists really benefitting from AI technology, which helps them scan millions of records for investigative journalism and saves their time and energy to produce good reports.
But, on the other hand, we also see a trend that digital attacks against journalists can be automated by AI in a more threatening manner.
The last one is the right to equality. We have to face the fact that it is never easy to eliminate the bias embedded in data and in automated decision‑making processes. So the challenge of ensuring fairness in decisions will be crucial.
We also observe the challenge of AI influencing voters' opinions and decision‑making in elections. I believe my colleague will say more about that. And ‑‑ such a cruel decision ‑‑ I only have two minutes, so I will be super, super quick, like AI.
We have multistakeholder recommendations for the different stakeholders, particularly the media, which is going to play an equally important role. On openness, we tackle the issues of the black box, open data, and the monopolization of markets, and we provide recommendations and actions. On access, we address the AI divide both within countries and between countries.
In Internet governance, we see a need for more participation in governing AI.
And on gender, again, I must say that we need AI as an ally to help us achieve better gender equality. We also recognize the huge dominance of men in the AI development industry and the ubiquitous existence of discrimination against women.
Africa is not only lagging behind in the development of AI at the level of capacity, infrastructure, and governance; the AI industry also needs global data to train its algorithms, and Africa can become a victim through the selling of citizens' personal data to Internet and AI companies, which can jeopardize the privacy of African people.
I will more or less stop here. That's two minutes. We have a few copies in the room. The full report, a 200‑page brick, will be on the UNESCO website in the following weeks. I'm still around until tomorrow morning. Feel free to talk to me if you need more information.
>> Thank you very much. So this is like when you go to those restaurants with very long menus: you got a first look in eight minutes.
Now we're going to choose which dishes to discuss, but a few elements first. 2030 is just around the corner. So how can this discussion on AI and policy really help to achieve all 17 goals of the 2030 Agenda, and the crucial issue of human rights? We have such huge challenges for health, for labor, for other key human rights. Let's not forget that freedom of expression and access to information are enablers for achieving the other rights, more than rights in themselves ‑‑ though they are that too. So freedom of expression and access to information are cross‑cutting, and the gender perspective is very, very relevant.
So let's talk about how we can improve the discussion in those different areas.
So now we're going to the discussion phase with very, very interesting speakers who are invited to debate this issue. These are all inputs to it. We'll start with Mr. Alex Comninos, who has five minutes and seven words to describe himself.
>> ALEX COMNINOS: Hi. I'm Alex Comninos. In seven words, I would say I'm an ICT policy researcher ‑‑ put a forward slash ‑‑ an AI/policy researcher. I've been researching ICT policy for quite some time. There is an overlap between the two; I think we're going to discuss that later. I work for an organization called Research ICT Africa. We look at usage and evidence‑based policy in Africa, and AI for us is ICT. Although AI is 60 years old ‑‑ in a way, it's not new ‑‑ it is prevalent: your news feed was created by algorithms or AI‑based systems; your photos were maybe cleaned up and sent back to family; there was perhaps artificial intelligence as well in the efficiency of the flights or the electronic systems that got you here.
So AI is not new, but it is very prevalent, and in that way it affects our core rights. Your access to information is somewhat mediated by AI because it acts as a filter selecting the news that you get. If you live in countries or situations where there is violence or ethnic conflict, your rights to safety and security are also somehow affected ‑‑ the AI algorithm somehow determines whether fake news is going to cause conflicts in communities.
So, addressing the human rights concerns ‑‑ I would say all human rights are affected by AI ‑‑ an approach that's quite big now is AI ethics. I think the ROAM framework ‑‑ Rights, Openness, Accessibility, and Multistakeholder participation, plus the cross‑cutting issues ‑‑ is a lot better, because we've had human rights law for a very long time, and we have had practice in being open societies through the Internet and through the ROAM principles.
In the ethics discourse, I think there's a tendency among the early actors or disrupters to say we only have to think about the ethics of AI, but we also have to think about human rights law, conventions, et cetera.
So the issues around AI, which I think we're going to discuss in this session, are that algorithms are affecting decision making. There's algorithm‑assisted decision making, and we can make that more diverse, and we can address bias within it, but we can't just look at the algorithm. We have to look at the application: what are you actually doing with the technology or the application involved?
And then, lastly, all new technologies have tended to increase digital inequalities. So as more people come online or as the Internet rolls out, equality of access becomes an issue. Many people are online, but they do not have adequate computing devices in order to be able to take advantage of being in a knowledge society. So we also really have to watch out that AI is not creating new digital inequalities.
My other concern, talking about ICT for development and the digital divide: we've been doing this for four decades. We don't want attention deflected away from it.
And I think my time is up.
>> Thank you very much. Right on time. A few flags from those remarks: rights mediated by algorithms ‑‑ let's think about that ‑‑ and how to include ethical conversations and, again, human rights law considerations in the decision‑making process of building AI technology, but also in the policies related to this area. And he finished with a very important element of the 2030 Agenda: the idea of leaving no one behind. So how do we stop increasing inequalities when we're building and implementing these technologies and the policies related to them?
So now we're going to move to Ms. Jai Vipra ‑‑ I hope I pronounced it correctly. You have five minutes and seven words to describe yourself.
>> JAI VIPRA: Thank you. In seven words: a researcher and advocate for technology. I really have to congratulate you on this report. It situates the question of human rights in the larger global political economy. I want to talk about some of the principles you've outlined, and I will also present some examples to illustrate how those principles might actually work, based on the research we've seen until now.
So, in terms of rights and the right to equality, I want to talk about the use of AI in education. A lot of AI solutions are sold in education with the promise that you can now give personalized learning to your children, but we see that it is also a matter of reducing public spending on education and increasing the privatization of education. So you're almost in a situation where human contact is for the rich kids, and everyone else gets to be taught by a computer.
So we have to think about the right to equality in that way.
Also, we've done some work on automation and the use of AI in the ports and logistics sector in India. We know that AI there is used to track how the workers ‑‑ the truck drivers ‑‑ drive along the ports, tracking every single moment of their lives. It's about worker surveillance. It's about not being able to take a bathroom break. It's about the computer not telling you, two steps ahead, what you're supposed to do. These are important questions that we think can be solved by reclaiming standard setting as a public function ‑‑ which I think is also part of the report: UNESCO and other public bodies are now recognizing that we have left the setting of standards to the private sector for far too long.
Then I want to talk about openness. I'm really happy you've talked about open markets and open datasets, both of these together. We've seen there's a demand for open data for government data and data that is publicly collected but also simultaneously a demand for the exclusive control of data by the technology giants.
And this is a situation that doesn't make sense. I'm glad that you've asked for open datasets in general as well, and for data commons under the next principle, access. Data commons, we think, would be extremely useful for people to determine what they want to do with their own data in a democratic way. There's a lot of data that is not personal data ‑‑ soil data, things like that ‑‑ that needs to be used under a commons framework and governed through democratic channels.
In agriculture, for example, this would be extremely useful. We see that big data is used in agriculture primarily for business process reorganization and for speculation on agricultural commodity futures, but not for predicting droughts or telling the individual farmer what crops to grow.
We think if these datasets could be used more democratically, you would be solving more problems and more problems that make more sense to the people.
So one of the questions that was asked of the panel is: How do we actually regulate artificial intelligence? How do we make sure there is no bias? Of course, there are problems with existing datasets and with the inputs, but there are many ways to regulate the outputs of algorithms. You can have minimum standards. You can ask for transparency of the algorithms, which is not always possible, but with minimum transparency you can also reverse engineer some of the things that are happening. We think it's important for governments not to sign away the right to access source code, especially through international trade agreements.
So I would like to end, because my time is running out, by saying that I really like the fact that you've said technological determinism is not the way to go anymore, recognizing how things are changing, and that it is important for people to determine their own technological future.
>> Highlights on education; the complex relationship with the private sector, not only in education but also the private sector as a standard setter in this area; the issue of surveillance; open data; and data commons as a public good and how we can regulate it to be effective and protective of human rights. That, if I understood you correctly, is a summary in a few words of what you just said.
We'll go to the next speaker, who will offer a complementary view in this discussion.
Professor Robert Krimmer, you have seven words and eight minutes.
>> ROBERT KRIMMER: Thank you for having me in this room. It's good to see former colleagues and friends here among the participants. It's very nice to be speaking here. I'm also speaking on behalf of my two co‑authors, as we were commissioned to write a paper together. Maybe we can put my slides on quickly. Great.
We're working on the intersection of elections and technologies such as AI ‑‑ a guide that we have been commissioned by UNESCO and UNDP to take forward.
What we're dealing with, being former and active election observers, is a commitment to observing in a context that provides for genuine elections. We see technology working on voters' minds and leading to voting under the influence. This influence can manifest in deceptive claims about candidates, in AI‑generated videos ‑‑ the deep fakes we heard about before ‑‑ in misleading messages put forward on social media, or even in voter suppression: information that leads people to stay away from elections. All of those effects undermine our elections and the way we're going forward.
So we have been commissioned to provide insights and understanding of how technology can influence elections, how to counter that, and how to overcome those barriers to a genuine election.
We are actually also building on a media and elections guide previously published by UNESCO. Our undertaking is complemented by working with media, international election assistance providers, civil society organizations, and service providers. You see the holistic approach that's driving this Internet Governance Forum; we aim to include that in the guidebook we're working on.
So how do we actually do it? We have established an expert advisory group of 25 members that is well represented across the board ‑‑ not only white males from Europe but the whole world. That was important. They will review with us all the paragraphs of the 150‑page guidebook as we discuss them, and we're using technology to make sure we have a consensus‑based approach in our handbook.
So what are we actually working on? In one strand, we work on the intersection of social media and AI. We are working on the international standards that provide for a genuine election. We're looking at issues around AI, social media, and the actual use of the Internet. We show what role practitioners can play in tackling the emerging issues. Last but not least, we'll give recommendations to stakeholders on ways of dealing with the Internet.
With that, I'm already about to close. I will just say we're looking forward to any comments you might have or pointers that you would like us to include in the paper. Here are our emails. Thank you very much for giving us the time. Looking forward to the discussion.
>> MODERATOR: Thank you very much. Again, a key element here is the impact of all of this on a core element of our representative democracies: our elections. It's very interesting that election observation missions, for instance in Latin America, now include special reporting on these issues of genuine elections, beyond the merely technical conduct of elections. Thank you so much. I think this is definitely another element we should bring to our discussions when we open the floor to all of us in this room.
Now I'm going to hand the floor to Ms. Izzy Fernandez, who will have five minutes and her seven words to present herself.
>> IZZY FERNANDEZ: My words would be teen Ambassador and youth perspective. So I got into AI last summer, actually, when I went to a two‑week accelerator. I went for climate change, as I feel quite passionate about that. We made a web extension that takes what you buy normally every week and gives you alternatives that are more ethical and eco‑friendly.
After I left the accelerator and went back to school, I realized that what I was taught there was really different from what we were being taught at school. I realized that we need to make changes. Obviously AI is the future ‑‑ it will be my future, my generation's future. And, well, we need to start teaching ethics in AI in school, and computational thinking to younger children, so that they can go into the workplace understanding what they're doing. I also looked at democracy. Obviously, I can't vote yet, but I really hope that by the time I can, we've made sure that voting under the influence isn't a thing anymore, and that I can trust that the news I'm getting from social media is true. It's not made up. It's not a deep fake. It's what is really happening in our world.
Also, I'm hoping that if we educate young people on AI, on ethics, that this will enable social mobility for everyone so they can go to school and have access to these things. They will be able to say, I understand what's happening in the technological industry, and hopefully that will make big changes in our future.
>> MODERATOR: That was very good. You saved us time. I would like to underline this point on information literacy in this discussion: it's not only for children and young people. It's for all of us. We all need to learn how to navigate this new ocean. Izzy also underlined issues related to the environment and sustainable development ‑‑ ecosystems, water. Thank you for pointing out those new elements for this discussion.
Last but not least, our final speaker in this round of initial remarks. I would ask my colleague to pronounce correctly the name of our next speaker.
( Speaking in non‑English language )
>> MODERATOR: Thank you. So you have five minutes and seven words to describe yourself.
>> Thank you, everyone. I'm glad to be here. I actually prepared something, but I want to say something else. I'm thinking about why we're here, because we're people from different backgrounds. Why are we here? I think we recognize that AI is not just a technology, not just one industry. It's something far‑reaching. In the past we talked about Internet plus; in the future, will AI bring us to an intelligent society ‑‑ AI plus? I think this is a good challenge. It's very important to have a strategy for what we are facing.
So, actually, I'm glad we notice that importance. Yesterday, I think, we talked about principles for AI ‑‑ something very similar: reliable, agile, responsible, and so on. Especially, I think we all agree that it should be human centric. That underlines its importance.
Second, on AI, we are talking a lot about risk management, but development is also very important, because AI is a core power of the digital economy, which is a big part of the new economy. We can act in different fields ‑‑ especially, for example, in government, electronic government ‑‑ where I think there are at least three things.
The first is to use AI to improve livelihoods: to reduce poverty and to help medical care and education. We have mobile services ‑‑ smartphones for everything; I use my smartphone.
Third, there should be a call for open data and a healthy ecosystem.
Finally, there is a call for cross‑border international cooperation. We should rethink the roles of machines and people. I'm glad to see we have young people ‑‑ we have a child here. The time is for them, not just for us. It's the young generations that are most active in cyberspace. We should pay attention to what's happening.
>> MODERATOR: Thank you very much. With the previous panelists, we talked about many of the human rights and social elements of the 2030 Agenda, and also the environmental elements; but it was important to raise as well the economic elements ‑‑ economic development and the digital economy. AI has a large role to play in this area too.
Thank you very much.
Due to the kindness and sharp timing of all of my speakers, we have 17 minutes for discussion. This is a luxury in these IGF meetings. I hope you all take advantage of this opportunity to raise your questions, concerns, and comments ‑‑ briefly, if possible, so we have space for more people.
So let's collect a first set of questions and comments, and then go back to the panelists if they want to respond. And, if we have time, another set. If people want to send written comments, our colleagues can help with them. You're free to use the mics as well.
Who wants to break the ice?
Please, sir, identify yourself.
>> AUDIENCE MEMBER: I'm a faculty member, and I work on AI policy. My question is something specific. It's a scenario that's been observed in certain American and European jurisdictions: we often have this curious case where AI development and code precede policy. Certain AI artifacts get made. They inundate the market. They reach governments and policymakers in various forms, then they get implemented, and policy comes later to sort of explain the usage of these artifacts.
One famous one is, of course, facial recognition technology. I'm not talking about privacy and the other aspects of facial recognition technology. One aspect that doesn't get talked about much is that it is acquiring a policy function: it gets used, as it has been in certain places, in such a way that law enforcement officials eventually say it was not they who were responsible for certain decisions ‑‑ it was the artifact that told them so.
So this is very much a human rights issue as well, I think, aside from being a policy issue. Certain jurisdictions have responded by abolishing certain collections of data or certain AI artifacts; these have been made illegal in certain jurisdictions. I would like to hear the thoughts of the panel on this specific problem of technology solutionism and capitalism entering into the policy sphere and leading policy development rather than the other way around.
>> MODERATOR: Thank you. We had a question.
>> AUDIENCE MEMBER: I'm from Berlin. I work for the Africanized City Foundation. We are talking about literacy. One key issue is surveillance. Is this true or false?
>> MODERATOR: Thank you.
>> AUDIENCE MEMBER: I am an MP from TLC. My question is about the limits we can put on AI, because we know sometimes we tend to give the machine priority to decide, but sometimes machine and human can be in conflict. An example is in aviation: authority tends to be given to the autopilot because insurance companies insist that landing and takeoff be done by the automatic pilot. When there is a conflict between human and machine, I think we should give priority to the human ‑‑ a principle that should be part of our fight within UNESCO.
Another thing I saw in the document is the problem of our African countries. We are not yet on the road to getting access to AI and big data, but we will have most of the Internet users ‑‑ African users will be more than America and Europe combined. So the data will be coming from our continent, but we are not able to access it.
I think the document is very specific ‑‑ I saw it. We should also give some rights of access to data, local data, to our researchers and our companies. We are not yet ready to use it, but we should at least have the option: our data, which is stored in the U.S. ‑‑ we should have the right to retrieve it once our researchers and companies are ready to use it.
Another point is on the Internet in general, not only AI, but on the UNESCO side. In the opening ceremony, there was talk of making the Internet a world public good. We should see how to make that happen. In our countries, we face pressure from NGOs and Europe and the U.S. because we tend to shut down the Internet sometimes. It happens when there is some trouble; we shut it down against our own people. But what will happen if some day a country decides to shut down the Internet against another country? The Internet depends on U.S. infrastructure. The U.S. can make a decision to shut it down, like a sanction. It has already put sanctions on China to prevent the use of Android, but in the future it could shut down the Internet for a country. How can UNESCO protect against that? Can we manage to make the Internet a world public good, as has been done before?
>> MODERATOR: Thank you. Another question and then back to the panel.
>> AUDIENCE MEMBER: Thank you very much. I'm with Slovakia. I wanted to elaborate more on the use of AI during elections. It's been mentioned that AI has been used for the dissemination of disinformation. This is currently one of the biggest challenges. The spread of disinformation undermines trust and credibility in the system and enables interference in elections.
This is why I think it's important to continue developing attention to this area; like you mentioned in Latin America, there is more attention to freedom of expression during elections. Other institutions, such as the European Union and the OECD, have observers who pay special attention to social media during elections. It's always important, when ICTs are introduced for elections, that no particular group is disadvantaged and that the overall use is not for malign purposes, which would undermine trust in the system.
>> MODERATOR: Thank you. Just to refresh the memory of the panelists: obviously, you don't each need to reply to all the questions. Divide the questions among yourselves.
But the first one was the time gap between technology advancements and policy development.
The second one raised the issue of surveillance.
Then we had different questions from the lawmaker here, and it's very important that in this IGF we have several Congressmen, congresswomen, and members of parliament from different parts of the world, because you are essential stakeholders in tackling this issue. So: the limits of AI when there are conflicts of rights; then a question on data rights, sovereign rights related to data, and access to that data; and perhaps a question for Cedric on the Internet as a public good.
And the issue of the overall impacts of the use of ICTs during elections, not only fake news, if I understood you correctly, but also the use of voting machines and how we protect democracy.
Who wants to start? We have seven minutes for some of those questions.
>> Thank you. Thank you. Thanks for all the excellent questions, which give me an opportunity to share more findings from our research.
First, regarding the time gap between policy making on AI and existing AI applications in society: that is a big challenge. The impact of an AI product can be irreversible if it is harmful. That's why we are calling for a human rights mainstreaming strategy among all the stakeholders. Human rights are the normative framework that should shape national policies. We're calling on national states to make sure all regulations in place are conducive to human rights, and to conduct human rights assessments when they assess a new product. The same applies equally to the private sector: shouldn't companies' terms of service and technology guidelines go through a human rights assessment before they put their products on the market, before they develop an AI product? Shouldn't a human rights check be in place?
The same goes for the technical community developing products and applications. I think this is not something up in the air; it is something very substantial.
On the issue of surveillance, it is a reality, whether it is mass surveillance used by law enforcement in many countries or targeted surveillance used by different agencies; it really affects everyone. To what extent is this AI-based massive data collection being conducted and shared under due process with certain parties, third parties, multiple parties? That's why we're here: to develop a framework to ensure that surveillance is legitimate on the one hand, to protect society, but on the other hand does not compromise the rights and dignity of humans.
Maybe I talk too much, but the last thing I want to mention is that the crucial challenge across all these issues is the lack of transparency and of accountability processes in place to check the development stages of AI at all levels. That's why the issue is urgent, and why we should keep this discussion, exploration, dialogue, and collaboration ongoing.
>> MODERATOR: Thank you. This very last point is highly important. We need more effort and more thought on how to develop better transparency and accountability mechanisms for all of this.
Cedric, perhaps you can offer insight on the questions directed to UNESCO.
>> Thank you. Of course, for UNESCO, the Internet is a public good. It is central in terms of enhancing human rights and overcoming threats to human rights. A number of the issues raised will be addressed in the framework we'll be developing, because the questions about accountability, privacy, human control of technology, fairness, and discrimination are all categories which need to be addressed within a human rights-based, human-centered, ethical framework.
And, of course, it's not new that policy runs behind technology. It will continue to be like that. For that reason, it is precisely important to have a framework which guides, from the outset, the different communities. The technical community has established frameworks in part through the IEEE, but all the different stakeholder groups are coming together.
This is, for me, the only way: having a human rights-based approach and an ethical framework together to guide and to stay ahead of this technology. I think that addresses a number of the questions raised.
>> I will just be very quick. I think there's a definite gap between technology and policy. I think surveillance is a huge problem, especially facial recognition. We're talking about it now, yet it's being rolled out very quickly. In South Africa, when they roll out fiber, the Internet connectivity, they roll out surveillance. We now have facial recognition and machine learning in the city of Johannesburg, on campuses, and in police cars, all in one year. So I think it's unrealistic to say we can ban facial recognition, but I think we should at least push back very rapidly, with perhaps some kind of moratorium on facial recognition.
>> MODERATOR: Thank you.
>> Thanks. I would like to respond to the gentleman over here. I fully agree with you on data sovereignty and network sovereignty. This is especially important for African economies. That's why, as we last saw, it was the Africa group that actually helped stop a terrible eCommerce agreement that would have undermined data and network sovereignty rights. I think that's important to raise every time we talk about AI. There may be no data held or regulated yet, but that doesn't mean there cannot be in the future, and those rights need to be protected.
>> MODERATOR: Thank you so much. We have exhausted our time. I don't know if our speakers want to offer some tweets.
>> Just very briefly. I think this relationship between technology and policy is so important. There have been plenty of papers written; I just want to recommend those that see law as a leader of new developments.
The other thing, about elections and technology: elections are like the celebration of democracy, so all the issues we have with technology in everyday life also come into play at elections, and vice versa. We also need to take into account the 51% challenge: with the wrong data, the majority might dominate all the minorities, with all the issues that surround that. That, we know, is a problem of democracy. We need to take fully into account how we can make AI represent the whole of the world.
>> MODERATOR: Super. Doctor, you wanted to add something? It's okay.
>> It's not technology that decides everything, but people.
>> MODERATOR: Thank you.
>> I feel like policy would be helpful if it tried to move AI and the algorithms from the private sector toward AI for good, because then, like with the surveillance issue, it won't infringe on people's rights. It will be for their safety.
>> MODERATOR: You have 20 seconds.
>> Just to pick up the question about open data: yes, there is a gap of knowledge and research among countries. That's why UNESCO has been standing strong to promote open access to technology and resources, which can help to bridge these gaps.
>> MODERATOR: Thank you. I'm already seeing the panelists of the next session here. If you have a very quick tweet, please.
>> Thank you. Sorry. Very interesting session. I'm aware that the focus of the session is on discussing policy questions. I have a sense that we're all sharing similar concerns and worried about the same things. Perhaps a friendly criticism is that it would have been very interesting to hear the views of those who actually develop the tools on the policy framework, and what their vision is of what we're discussing here. I came in late, so perhaps I missed that bit, or it was covered here in the room. It would be interesting to see the linkages.
>> MODERATOR: I guess that is in the publication we launched at the beginning of this session. Obviously, we need to keep fostering and enhancing these discussions.
A final announcement: On many of the things we have discussed here today, we at UNESCO have developed a massive online course called "Tech For Good." Alexander and Fabio, at the back of the table, are the people involved in producing this 10-week course, which is available and open at this very moment in English and Spanish, and very soon in Portuguese and Russian. If you want to enroll, you can see Alexander and Fabio at the end of the session to find out how.
At this moment, we have almost 2,000 people from 134 countries enrolled in this course. So perhaps it can help you to keep the discussion and the ball rolling.
Thanks a lot. It went by in a flash. A round of applause for our panelists and discussants. We'll see you around at the IGF. Thanks. Enjoy your day.