
IGF 2018 - Day 3 - Salle IV - WS182 Artificial Intelligence for Human Rights and SDGs

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

>> GUY BERGER:  Good morning, everybody.  We are very pleased to have you at this UNESCO session on such an important topic and with such a great panel, a multistakeholder panel.  I'm Guy Berger, I'm director of freedom of expression and media development here at UNESCO.  Here is my colleague, Indrajit Banerjee.  We're co‑chairing this session.  Indrajit, are you excited? 

>> INDRAJIT BANERJEE:  Very much so, Mr. Berger.  It's a very timely and relevant session.  We're delighted to have all of you here with us.  Hopefully as we go along, the session will become more and more interesting, and the room will get extremely packed.  With that hope, we're going to start.  Guy, I'll let you do the introductions.

>> GUY BERGER:  Good.  We're going to try to keep the pace moving fast because artificial intelligence is moving fast and we have very interesting people here. 
      So the speakers I'll tell you briefly who they are.  We have His Excellency Mr. Federico Salas Lotfe, permanent delegate of Mexico.  So we have a member state involved. 

Then we have Nnenna Nwakanma.  People may know her well from the IGF.  She's interim policy director at the World Wide Web Foundation. 

Then we have Marko Grobelnik; he's right on the far side over there.  He is head of AI at an institute in Slovenia, he's the country's digital champion, and he sits on the OECD's AI expert group (AIGO). 

Then we have Sylvia Grundmann, who is head of the media and Internet division at the Council of Europe.  Thank you again for coming onto the UNESCO panel. 

Then we have Thomas Hughes, executive director at ARTICLE 19.  ARTICLE 19 is probably the world's leading NGO working on freedom of expression and technology, online and offline. 
      Then we also have a video input from Mila Romanoff from Global Pulse which is the big data section of the U.N. Secretary‑General's office. 

I'll hand over to Indrajit shortly, and he can tell you what we will be focusing on. 

Who has heard of the UNESCO concept of Internet universality?  Could you raise your hands?  Have you heard of Internet universality?  For those of you who don't know, I will tell you.  UNESCO member states agreed on Internet universality.  They want one Internet to serve everybody and be available to everybody.  That's as simple as it is: Internet for everybody, everywhere. 

What kind of Internet, and why?  It's an Internet that could help achieve the sustainable development goals.  And to have that you need these four principles.  This is what the UNESCO member states agreed.  The four principles are very easy to remember: ROAM, R‑O‑A‑M, which stands for rights, openness, accessibility, and multistakeholder participation.  If we want the Internet to contribute maximally to sustainable development, we want it to be aligned with human rights, openness, accessibility, and multistakeholder participation. 

That's the way UNESCO approaches all Internet questions, whether it's a question of radicalization of youth online, whether it's a question of so‑called fake news or whether it's a question of artificial intelligence. 
      We know that, in the same way that fake news can mean many things to many people, artificial intelligence can mean many things to different people.  We're not aiming to define it definitively.  We're talking about a broad range of advanced technologies and the issues around them. 
      These are technologies which deal with big data.  These are the things that we're trying to address now.  UNESCO is particularly concerned with human rights, openness, and accessibility questions about AI.  And we are also very concerned that these debates should be multistakeholder.  Hence this panel is multistakeholder in its nature: intergovernmental organizations, Civil Society, member states.  We have freedom of expression organizations. 
      So we're ready to go, I think.  And we would like this discussion to pay particular attention to rights, openness, accessibility, and multistakeholder participation. 

There's one person I didn't introduce.  One more panelist: her name is Elodie Vialle.  She's the head of journalism at Reporters Without Borders (Reporters Sans Frontières).  That's it.  Indrajit, let's go. 

>> INDRAJIT BANERJEE:  Thank you, Guy, and let me join you in welcoming all the panel members and the participants for being here with us.  I would like to acknowledge the presence of assistant director general Mr. Morris, who is sitting here with us.  He's spearheading our work in the area of artificial intelligence.  His presence here is extremely important. 
      Now as my colleague Guy Berger just said, we want this to be an interactive session, no long speeches.  We are the CI sector.  We believe in action, not too many words.  I think it was very well framed by my colleague, Mr. Berger.  The ROAM principles and Internet universality as a concept should underpin all our work in this area, not only in artificial intelligence but in everything else we do in the field of new technologies and societies. 
      As all of us know, and as we have seen over the last few days and weeks and months, artificial intelligence has great potential to foster open and inclusive societies and promote education, scientific processes, and cultural diversity.  These contribute to strengthening democracy and peace, and help achieve the sustainable development goals.  Let me reiterate the fact that, as the title suggests, this session is going to focus not on artificial intelligence per se, but on how artificial intelligence can be leveraged and harnessed for promoting human rights and the sustainable development goals with a multistakeholder, inclusive, and open approach. 
      Having said this, artificial intelligence could also exacerbate inequalities and increase digital divides.  We've already seen a clear polarization in the world: there are countries which are very advanced, which have not only the know‑how but also the resources and funding to invest very heavily in artificial intelligence.  And I guess the race is on. 
      That is why I think this session is extremely timely.  Unlike in the past with other technologies, it is now that we can put things right and set things in the right context, keeping in mind human rights and the sustainable development goals, before we get this one wrong too.  We have missed many of these technological advances, and I think our discussions on ethics and other important dimensions have often come far too late in the day, or in the game. 
      This is an opportunity for us, and this is where UNESCO, and the communication and information sector in particular, dealing with new information and communication technologies, comes into the picture: to bring to the fore, in this debate on artificial intelligence and on the whole range of new technologies which are emerging and will continue to emerge, the question of how we put them in the right framework, so that these crucial debates can take place at the outset and not once it is too late.  Because by then the race will have long started, directions will have been set, and it would be very difficult to turn back the clock and set things right. 
      So I think, as my colleague Guy Berger rightfully pointed out, this is a great opportunity for us at UNESCO and for the world in general.  Big data is influencing development processes in different countries, increasing access to new technologies, and producing knowledge and information which has the power to transform national economies, positively influence social progress, and help build more inclusive knowledge societies.  This begs the question: how are governments responding to the changes brought by advanced ICTs like artificial intelligence?  What is the current understanding of the ensemble of these technologies? 

Without further ado, I would like to invite the permanent delegate of Mexico to UNESCO to give us his opening remarks.  Mr. Ambassador, the floor is yours. 

>> H.E. FEDERICO SALAS LOTFE:  Thank you very much.  I want to thank our two moderators, first of all.  And I want to thank UNESCO for inviting me and more than inviting me but inviting Mexico to be present at this panel. 

In my country we think it's particularly important to have a forum like this, which we understand is probably the first time that this link has been made between artificial intelligence and the questions of human rights and sustainable development.  That is a crucial issue we really need to address and focus on, as you correctly said, before it's too late. 
      I would also like to point out that Mexico has been very active in promoting the discussion of these issues at the U.N. in New York.  We're chairing the forum for the sustainable development goals for 2018‑2019 at U.N. headquarters in New York. 
      Let me start by saying we think it's very necessary to recognize that the development of emerging technologies such as artificial intelligence has accelerated exponentially in the last few years due to current technological and financial trends.  These emergent technologies will necessarily impact areas of our common life: agriculture, medicine, education, entertainment. 
      But they also have implications of a social, economic, ethical, and legal nature which must be discussed now in order to maximize the benefits and mitigate the risks. 
      Indeed, new technologies can be used positively to accelerate solutions to accomplish the 17 sustainable development goals of the 2030 agenda and its 169 targets.  But they can also increase inequality among countries, replace labor forces, affect vulnerable groups, concentrate knowledge and wealth, and even pose significant challenges to human rights. 

The demand for specialized human resources with greater technical training is increasing.  This includes technical jobs and other professionals who are capable of implementing the new technologies in their own areas of expertise.  This impacts the universal right to higher and specialized education. 
      Likewise, the use of intelligent systems will have a profound effect on the labor market, eliminating jobs, especially those with a greater component of mechanical skills.  And these jobs, as you're quite aware, are concentrated in developing countries.  On the other hand, new, better paying jobs will be created. 

The question is whether the population in general will have the skills to obtain those jobs, or whether it will increase the inequality gap and marginalize sectors of society and their right to a decent job.  That is why we think education at all levels is key, both to increase artificial intelligence expertise and to ensure that the benefits of AI development are shared.  It is also necessary to develop and support systems that allow easier access to lifelong learning, including retraining. 
      Regarding the ethical aspect, it is urgent that we begin analyzing the requirements that intelligent systems must meet to prevent prejudices and negative cultural and political reactions such as hate speech through social media.  In particular there's a growing need to increase transparency and reliability. 
      The design of an ethical framework to help guide good decision making by those who are finding new uses for artificial intelligence technologies will help it to be used thoughtfully, inclusively, and ethically in all countries to achieve greater benefits for society. 
      Technology is neutral, and AI may not be capable of prejudices, but the people programming it can be.  In this sense it will be necessary to promote more representative systems that take into consideration the diverse and open views of the population.  This can be achieved by strengthening and promoting alliances between all relevant stakeholders.  Some artificial intelligence applications are already questionable: bioethics, data collection that intrudes on privacy, facial recognition, algorithms that are supposed to identify hostile behavior or racial identity, military drones, and autonomous lethal weapons.  At the same time, AI can help with important challenges for humanity, such as aging societies, environmental threats and global conflicts, and refugee support, as well as help achieve the 2030 agenda for sustainable development. 
      While research is moving full speed ahead on the technical side of artificial intelligence, not much headway has been made on the ethical front.  The World Economic Forum and Harvard University have started this discussion regarding the use of AI for inclusion, as has the International Telecommunication Union through its AI for Good Summit, which is why it is imperative to start an ethical dialogue on the unprecedented human rights challenges that arise.  UNESCO is the perfect forum for this. 
      Regarding the use of information and communication technologies, the government of Mexico has created an office of national digital strategy which is attached to the office of the President.  It has established the foundations for the development of a national digital policy that will allow us to use technology to transform our government into an open, modern, innovative one and turn it into a platform that triggers innovation and inclusion. 

On the basis of this national strategy, we provide a framework of reference for how technologies work and how they interact in a broad context that includes the technological ecosystems and their positive and negative impacts and implications. 
      As a result, taking into account the capabilities of computational thinking and of disruptive technologies, partnerships, and services for users, Mexico is promoting many initiatives regarding the utilization of emerging technologies.  One is for AI and another for blockchain. 

We are also strengthening the promotion of multistakeholder dialogue to help create better processes as well as public policies to achieve the SDGs. 
     This is why Mexico considers it vitally important to establish alliances between governments, academia, and Civil Society to take advantage of these technologies, while technological companies and industries facilitate their distribution and allow access and training for the population for their most optimal application. 
      Furthermore, in supporting AI development we need to support the infrastructure that underpins it, especially good quality data, Internet connectivity, and modern intellectual property law and privacy protection. 
      This is why, and to conclude, it is paramount to analyze the impact of the rapid technological change within regional and international organizations and forums including UNESCO in order to address its challenges and opportunities in the effective implementation of the 2030 agenda leaving no one behind. 

Thank you. 

(Applause)

>> GUY BERGER:  Thank you very much, Ambassador.  It is wonderful to see a member state engaging in issues like that.  You highlighted a lot of the global issues.  I was pleased you described the national level with Mexico itself, and you described the regional level and UNESCO's role.  This is super. 
      Now we come to our next speaker, who will also make introductory remarks: Nnenna from the World Wide Web Foundation.  If you want to see artificial intelligence personified in a self‑learning organism, I present you Nnenna. 

>> NNENNA NWAKANMA:  Oh, my goodness.  The only thing I can accept is that I'm tweeting while I'm here including video tweets.  Let me not accept everything he says. 
      Good morning, everyone.  And thank you for coming here.  So the last video tweet was like people are still flocking in for artificial intelligence.  I was in the last session in this room.  It was full and this session is full again.  What is it with artificial intelligence that is making everyone excited?  I don't know. 
      I was asked to speak on something less glamorous which is policy implications at the global level at this time. 
      My name is Nnenna.  I come from the Internet.  I work with the World Wide Web Foundation as its policy director.  I've had good advice: don't go beyond three points, because people won't remember the fourth one.  I'll keep to three points, and each of the three points has three points.  So it's going to be three times three. 
      The first implication I would like to put on the table is that of access, right?  We're gently reaching the 50/50 tipping point, in which 50% of the global population has online access and the other 50% is offline. 

The question around access is: how can we be developing technologies when we have only half of the population online?  Can we move ahead when we are leaving 50% behind, Your Excellency?  I don't know.  Under access I've noted the things that are challenges to access.  Of course, the first is affordability. 
      We at the World Wide Web Foundation host the Alliance for Affordable Internet, because across the world, for those who are offline, the first barrier to access is the price, the cost of Internet access itself, before you get to anything else.  Affordability is something we have to be worried about when we're dealing with policy. 
      The other one is meaningful access.  When I finally put my $10 to get connected, what exactly am I being connected to?  What's the content I'm getting?  What's the feedback?  What's the return on investment on someone in Sudan who will put $10 which is basically maybe 15 to 20% of that person's monthly income to get one gigabyte of data, what is he or she getting in return?  So the meaningfulness of the access is very important. 
      I cannot leave out the humane part of it, the humanity part.  I'm sure you must have heard about the principles of the contract for the web that Tim launched last week during the Web Summit. 

One of the principles is that we should be developing technologies that promote the best in humanity and challenge the worst in humanity.  For those who created the Internet and the World Wide Web, the initial motivation was that when we bring technology to human beings, human beings will use technology for good.  But that's not what we're seeing at the moment. 
      So the first policy consideration I would like to put on the table, as we go ahead with artificial intelligence, is that we should use artificial intelligence for the best in humanity and also use it to challenge the worst in humanity.  So that was access. 
      The second thing I want to talk about is trust.  Trust in technology, trust in the Internet.  And UNESCO, you are hosting us here, and we trust you.  That's why we came here. 

(Laughter)

>> NNENNA NWAKANMA:  You're hosting the Internet of trust.  I think it's become bigger than we imagined it to be especially in the last few months when things have come up.  So trust is very important. 

And under trust I've noted data governance, because everywhere you go, they want you to put down your data.  They want you to put down your biometric data.  I think we should be able, as users, to ask the question: where is it going?  Who else are you going to share my data with?  Transparency in governance is really very important to us at the Web Foundation.  And I think that we as stakeholders need to take this responsibility very seriously. 
      Of course, rights.  Technology for education, for health, for agriculture, for sustainable development: it's our right.  I'm glad that this time around the U.N. is not looking at the SDGs as something for the north to give to the south.  It's something for everyone; we need to get behind the hashtag, leave no one behind.  I'm certainly not going to be left behind. 
      The other thing is online safety.  A big shoutout to everyone who is working on Me Too and gender‑based violence, online and offline.  What happens is that the same violence, the same gender‑based discrimination, the same inequalities that we have offline have come online.  And when women come online they are threatened, they are abused.  They are shut down, physically shut down, verbally shut down, and they withdraw. 

I'm just asking whether these middle‑aged white males who are developing AI technologies are aware of the black older women or younger women who are coming online.  And how do we engage on maintaining safety and trust around the whole of the technology?  Because if you try it once and it doesn't work, then you're afraid.  You lose trust and you go back.  So trust is one of those things we want to talk about and take into consideration.  And online safety should be part of it. 
      Finally, data.  Data is my third point.  My first was access.  My second was trust.  My last one is data.  For those who know me, I'm working on opening up data.  On one hand we have open data.  We are saying make it open by default.  Public data should be publicly available so we can use it in transport, agriculture, in schools, in everything.  And we cannot have effective artificial intelligence without data.  Data is the lifeblood of any artificial intelligence. 
      So the availability of data is very important.  And what I would like to challenge everyone here is to do their part in making sure that public data is indeed open, available in the format that artificial intelligence can use. 
      The other thing is data integrity, information integrity; otherwise, the flip side is fake news.  It's very important that if we are building artificial intelligence systems, we build them on data that is of quality, data that is true.  Because whatever you feed into the system will ultimately affect what the machine does.  If we're not feeding it data that has integrity, then we are all ‑‑ the big F word.  That's where we are if we don't have ‑‑

(Laughter)

>> NNENNA NWAKANMA:  I cussed in the last session so I'm watching myself on this one. 

(Laughter)

>> NNENNA NWAKANMA:  The other thing, after data availability and integrity, is, of course, privacy, privacy of my personal data.  I think the debate is now here.  We want to be sure that our data belongs to us and that we know where it is going.  We want to be sure that it is in safe hands.  And I think that we cannot be developing any AI policy at this time without taking into consideration the respect of personal data privacy.  And it is good that we have GDPR, which is a beginning. 

African countries are also beginning, and quite a number of other countries across the world are beginning to think, beginning to understand, that our data is an extension of our own selves.  I would like to leave you with this thought.  Would you rather be naked for two minutes, or would you rather that your Google search history be exposed?  I think I would like to go naked for a minute and cover up, because very soon you'll see all the pretty women and you'll forget what you've seen. 

(Laughter)

>> NNENNA NWAKANMA:  If you have ‑‑ come on, guys.  If you have my Google search history, you don't just know what is outside.  You know what is inside, and you even know my future, and that I don't want you to have. 

(Applause)

>> INDRAJIT BANERJEE:  Well, thank you very much, Nnenna.  I think you presented some extremely interesting and very useful insights into artificial intelligence, not so much for its own sake, but you highlighted a few points which I will briefly summarize.  One is the notion of access.  As you may know, at UNESCO we are very concerned with the meaningfulness of access. 
      Connectivity has been a buzzword for far too long.  At the end of the day, how does it affect people's livelihoods?  Today you introduced another element: access is not only about meaningfulness but also about affordability.  Otherwise, you pay $10 a month, or whatever it is, to access the Internet when you don't have that kind of resources. 
      Another aspect of access is multilingualism.  As you know, at UNESCO we do a lot of work on multilingualism in cyberspace.  And the statistics are pathetic, even drastic, in terms of the languages that are available online.  About 400 languages are available online out of some 7,000 languages.  About 60 to 65% of languages are going to disappear by the end of the century.  This is worrisome.  It is closely linked to the notion that you mentioned about meaningfulness.  If the access provided does not come with content which is meaningful for me and my livelihood, and in a language which I can understand, I don't see what the point of access is. 
      The second point you mentioned, which again ‑‑

>> GUY BERGER:  Before you say that, I have to say something.  I mentioned the ROAM, R‑O‑A‑M, A is accessibility.  It covers these issues.  It covers the issue of language, relevance.  It also covers media and information literacy.  If you want to find out more about how UNESCO sees access and what is media and information literacy which we see as crucial for access, check it out.  ROAM.  Thank you. 

(Laughter)

>> INDRAJIT BANERJEE:  That looks like the mantra for today.  In any case, the second point you raised was trust, highlighting especially data governance and online safety.  Again, these are issues UNESCO is keeping a close eye on.  Over the last few days you have heard a lot of concerns expressed in terms of data governance and online safety, especially towards vulnerable groups, women and children.  And the statistics that we received on the situation, the state of children and women in terms of what's happening online, are quite dismal. 
      Last but not least, you talked about data, open data, the lifeblood of artificial intelligence and of all intelligent systems, which also highlights the importance of data integrity and the privacy of data.  The GDPR coming into effect is a positive step in this direction. 
      So thank you very much, Nnenna, for those opening comments. 

>> GUY BERGER:  The open data is part of UNESCO's O in ROAM. 

(Laughter)

>> INDRAJIT BANERJEE:  So now I would like to invite Mr. Marko Grobelnik to make his statement.  Keep it compact, because we will come back to many of the questions that you mention in your statements. 

>>  MARKO GROBELNIK:  It's hard to keep things compact, but I will do it, right.  AI, right.  So I'm a researcher in AI.  I have many hats.  Most of my time I try to create AI, and have done for the last 20, 30 years, from my high school days on. 
      So these things go up and down.  In the last few years, meaning after 2010, there was this explosion of AI, which was mostly due to basically one single invention which is like 60 years old.  This is what's nowadays called deep learning. 

So there is this magic tool which is open source; everybody can use it.  You have free tutorials, in principle.  I tried to explain it to my son and to high school kids, and they understood it.  Because it's that easy.  It's not hard, right? 
      I don't know how many of you are engineers, but sitting here you definitely have an interest in AI, right?  If you think AI is hard, it is not.  It's like doing Lego, pretty much.  So try to sneak into at least a couple of short tutorials, and your level of competence in AI will certainly go up. 
      Now, the question is ‑‑ is this the final stage of AI?  Of course not.  You can expect way, way more in the next maybe five, ten years.  Everything we have now will be just one Lego brick of what's coming in the next five to ten years.  You can expect lots of innovation. 

It seems like AI is very powerful.  In the end it's not, really.  Take a fairy tale like Snow White: AI can't come close to understanding Snow White as well as a kid understands it.  There was this invention, right, which is a simple algorithm.  Suddenly computers started seeing objects, hearing, translating ‑‑ not understanding, but translating.  The machine still doesn't understand how the world is causally connected.  It doesn't understand simple text like this, and so on.  But there are huge investments going on these days which may even overcome these things. 
      Maybe just a couple more thoughts.  So pretty much everything is free, right?  If it looks like AI is just for rich, big companies, that's not true.  Everything that's relevant is free.  Today even the big companies like Google, Facebook, Amazon, Microsoft, and so on cannot afford closing down this technology.  That's why you can get all these things, and you can actually use them. 
      Another question is the question of data, right?  Data makes a big difference.  If you have data, then you can do things.  Data is pretty much also available, maybe not the most valuable data, but it is there if you make some effort.  So in principle AI is a really good topic for underfunded, non‑rich countries, institutions, or even individuals to make a big difference with.  The only obstacle is fear of technology.  Without fear, you can do this.  With fear, probably you cannot. 

And competence, knowledge, right?  Knowledge is more or less also available.  This is my main message to you: since you have an interest in AI, you can participate in AI as well.  It's not something which is closed down, right? 
      What's also interesting, since we are on a panel about human rights and SDGs: how can AI help with all these things?  Well, as I said, things are available.  Human rights violations, and the SDG points which we know ‑‑ these were always a problem, right?  So what's the difference today? 
      The difference is that we can uncover these problems and we can see them, and we can even influence them with this technology.  AI is not, as has been said many, many times by kind of important speakers, about how the singularity will come.  Nothing will come.  That is mostly, I would say, nonsense. 

AI is really a tool which can help as well.  It can be a danger, I agree.  Autonomous weapons, many things are problematic.  In the same way AI is a threat, it can also be a help, right?  We just need to see it from the other side. 
      Maybe so much for now and I'll continue later. 

>> GUY BERGER:  Thank you so much, Marko.  To sum up what you said, this is a fast moving game ‑‑

(Applause)

>> GUY BERGER:  Yes.  Let's give him applause.  It's very fast moving.  There are huge investments, but what was interesting was that what he said links up to ROAM, because O is about open markets and open opportunities.  And he seems to think that this is actually not the preserve of huge actors.  He thinks there's enough free software and open data.  That's about openness.  That's very interesting. 
      Okay.  Now, with my co‑moderator, I have the pleasure of inviting two inputs.  He's doing one at a time; I'm doing two.  We move now to a video.  This is from Mila Romanoff of Global Pulse, the U.N. Secretary‑General's initiative on big data.  This is about the U.N. trying to get smart about big data in relation to sustainable development.  So if we could switch to the video. 

(Video Playing)

>>  Thank you for this opportunity to present to you today.  Apologies that I'm not able to be there in person.  Thank you to UNESCO for inviting me.  I am a privacy and legal specialist at Global Pulse, an initiative of the U.N. Secretary‑General, where I lead a privacy and ethics program.  We're looking at how big data and artificial intelligence could be used in a responsible way to assist the implementation of the sustainable development goals. 

Global Pulse works with the public and private sector to get access to and analyze data coming from various sources, such as postal transactions data, financial institutions, mobile sector data, radio, and many others. 
      How such data could be analyzed in a responsible way to, for example, understand how key humanitarian aid could be delivered, and how we can analyze data from public social media to understand perceptions on immunizations or on education‑related policy.  As part of these missions and projects, we also work on tools and guidance on how we do this and how we can do it in a responsible way. 
      Incorporating privacy techniques, privacy protective techniques as well as ethical code of conduct.  So within a four‑minute time frame I was asked to give a short presentation on some of the key initiatives within the U.N. Global Pulse and the U.N. in general. 
      In 2016 Global Pulse chaired a U.N. privacy policy group which includes representatives of all of the agencies and organizations across the U.N. system.  I'm proud to say that just a month ago the U.N. privacy policy group developed and adopted a set of principles on the protection of personal data and privacy for the United Nations.  The principles aim to harmonize the approach to data protection across the system as well as to recommend ways forward, such as the development of more detailed and comprehensive guidelines for each agency in accordance with its mandate; the implementation and use of risks, harms, and benefits assessments for data processing prior to any project; and the consideration of harms, and specifically group harms, when it comes to vulnerable communities and groups of individuals, including women, children, refugees, and many others. 
      There are ten principles that are provided in this set.  They will soon be published.  I'm also happy to say that specifically on big data and artificial intelligence, the United Nations Development Group has issued a guidance note which is a more detailed document providing a set of recommendations on how the United Nations organization should be dealing or could be dealing with non‑U.N. organizations, particularly the private sector, when it comes to accessing data coming from private sector and recommends due diligence and ensuring that partners we are working with also have proper standards and data protection, and the data they provide us with or would be providing us with is coming from proper sources and channels. 
      The due diligence also recommends the employment of risk assessment and management frameworks to mitigate the risks that come with data use as well as those that come with data nonuse.  So the guidance note also looks into the missed opportunities when data is not used and what the cost of that would be to our society. 
      Lastly, I want to mention the development of the risk assessment tool by the United Nations Global Pulse, which is the tool recommended by the U.N. principles on the protection of personal data and privacy and by the United Nations Development Group note.  It sets out questions that everyone who is dealing with data or starting a project needs to ask himself or herself to identify the key risks that come with the data use as well as the risks that come if the data will not be used or if the project will not proceed.  It allows you to balance and understand the pros and cons of data use and nonuse from a more human rights‑based perspective. 

My last point is on a recent report that is very relevant to this discussion.  It was issued by the United Nations Global Pulse and the IAPP, the International Association of Privacy Professionals, and it recommends ethics as a complement ‑‑ privacy is one of the tools.  Ethics is something that will help us go beyond the black letter standards already provided by privacy protection.  It's something that will help us understand those issues coming with questions posed by artificial intelligence, those gray areas like life and death decisions, right?  Who can answer those? 
      So the report actually recommends a few tools that could help with making some of the design‑related questions, such as the ethical review board or engagement of professionals with different skills and stakeholders.  The recommendations also go to suggest in the implementation of the risk assessment frameworks such as privacy assessment or ethical assessment. 
      Of course, finally it recommends the implementation of such consideration as group harms given that we're now looking at the community level data more rather than just individual data. 

With that I thank you very much for your attention.  I will be happy to answer questions through the organizers of the panel.  Thank you, again. 

(Applause)

>> GUY BERGER:  Well, I'm pretty sure Mila woke up early to watch this livestreamed.  I hope you remember my starting off with UNESCO having the ROAM model.

>> NNENNA NWAKANMA:  I've tweeted it.  You don't have to do the campaigning anymore. 

>> GUY BERGER:  The right to privacy was graphically mentioned by Nnenna and was very much mentioned here as well.  And I think it's extremely interesting to highlight that right.  For UNESCO, rights are key, and so are openness and multistakeholder participation. 
      In terms of going further on this rights question, we have Thomas Hughes, executive director of ARTICLE 19.  Thomas, I hope in your initial remarks you're going to touch on the relationship between ethics and rights vis‑a‑vis artificial intelligence.  So help us ‑‑ how does ARTICLE 19 see that? 

>> THOMAS HUGHES:  Well, Guy, thank you very much.  It's a pleasure to be here today.  I will touch on that briefly.  With your indulgence I will stick with the question I was asked to prepare for. 

I would like to start by reinforcing Nnenna's comments.  I must admit my personal jury is still out on the nakedness versus search results.  I need to look at my recent search history before I make up my mind on that.  Broadly speaking, I think I'm in favor. 
      But the issues that you outlined are, I think, the core threats.  For ARTICLE 19 the three really are AI‑powered surveillance, data use online by intermediaries, and AI systems in content display as well as content moderation and removal.  Those are the key threats.  We were asked to focus on which disciplines are crucial and what research needs to be undertaken, looking forward to the creation of ethics, standards, and guidelines. 
      I'm going to stretch your attention span, if I may.  I'm going to go from Nnenna's three points to five.  I have two quick clusters of five points to highlight.  I will do so within the four minutes. 
      First of all, in order to understand what guidelines, standards, and recommendations are actually required, it's important to understand what the challenges are because they will inform where we look.  So first of all, I want to highlight a lack of respect for the rule of law. 

So current industry initiatives around AI are narrowly focused on the development of technical standards and ethical frameworks around principles such as transparency and accountability.  These frameworks have to be enforceable and must comply with the rule of law.  I'll come back to that. 
      Second, the lack of transparency.  Many companies developing AI systems do so in ways that are nontransparent and cannot be scrutinized externally.  Third, there's a lack of accountability.  The hidden nature of AI systems makes it difficult to study their impact on the right to freedom of expression unless, in obvious cases, a tangible harm occurs.  Fourth, public perceptions and the role of the media.  A lot of the media discourse around AI focuses on artificial general intelligence rather than artificial narrow intelligence. 

Guy, you said at the beginning we don't have the scope to go into large‑scale definitions at this point.  It's important to note the difference and also the importance of discourse to focus on narrow intelligence at this stage. 
      Fifth, data collection and use, which was mentioned a few times already.  So there are various freedom of expression and privacy concerns in the ways the data is being collected.  That's been commented on by all the panelists so far. 
      What does this mean in terms of which disciplines we should focus on and what we should be looking to research?  So, again, five points, quick fire. 
      First of all, I think we need to think about legitimate purpose.  There's a lot of focus and discussion on how AI can be used to solve different problems, and there were earlier comments around the simplicity or complexity of AI.  I think the question we need to be asking ourselves is why.  Why are we using AI in the first place?  Should we promote a more deliberate understanding and even, in some cases, delayed deployment to make sure it works better and is more inclusive?

Secondly, national regulation.  So at the national level existing AI applications are regulated broadly by frameworks of legislation, freedom of expression, data protection, consumer protection, very importantly, media and competition law as well as different sectoral regulations and standards, but we need to look at these and to understand whether they're actually adequate for addressing the myriad of ways in which AI impacts freedom of expression and on human rights. 

Thirdly, and extremely important, as mentioned: AI at the margins.  We need to research and look into the effects of AI at the margins in terms of data collection and in terms of digital inclusion.  Again, that's been mentioned.  Because if we don't, as we are all aware, AI will reinforce historical biases.  It will not resolve them. 
      Fourthly, there's an important question of AI in content regulation.  So obviously AI has huge power, as has been outlined and will be outlined, in terms of journalism and media, but it is very poor at understanding cultural context. 

Nuance has also been mentioned.  We need to ask ourselves the question: is there a need to apply AI to pluralistic, independent media, or is there not?  We need to go in with an open mind. 
      Last but not least ‑‑ and this one's for you, Guy.  I have to ask you not to say ROAM afterwards ‑‑ for the ROAM principles.  Specifically for UNESCO, we need to test the ROAM principles to see if they encompass and incorporate AI.  I think there are many very good multistakeholder bodies, at the IEEE and the Partnership on AI, looking into these issues.  But at the moment they lack state actors and a lot of multilateral actors.  I would ask UNESCO to get involved in those fora and add those perspectives.  Thank you. 

(Applause)

>> INDRAJIT BANERJEE:  Thank you, Thomas, for your insights.  I think you introduced a few new elements, with specific focus on the question of the rule of law.  I think that, again, shows it is much more in hindsight that we are trying to fit things into existing or new regulatory frameworks. 

And you also mentioned the media discourses on AI.  I would like to know some more about that, what's your take on this?  I'm sure Guy will have something to say on the same topic because I think to a great extent we've been conditioned from all the positive hype and the news about AI.  Again, completely keeping out of sight the more difficult and complex questions such as human rights, freedom of expression, content regulation in the media, issues of data collection, transparency, accountability and so on. 

I think it would be very interesting to see how this pans out, because I think we tend to neglect or ignore to a great extent the role the media are playing in this whole game.  I think that's a very interesting new element which you brought into the discussion.  And I hope we can come back to some of these in our discussion.  I would like, of course, Guy's point of view on some of this because he works in a very concentrated manner on some of these issues relating to the media. 
      Let me now invite Sylvia Grundmann to make her opening statement. 

>> SYLVIA GRUNDMANN:  Thank you very much.  And I would like to thank UNESCO for having put human rights and the sustainable development goals into the focus of this discussion.  This is of utmost importance, because what we want at the Council of Europe is a human rights‑centered artificial intelligence and, in fact, compliance with human rights, democratic principles, and the rule of law.  How are we doing it concretely?  You know we are pan‑European; we have 47 member states and we cater to 850 million people. 
      So in order to assume this responsibility, we started already two years ago.  And at the beginning of this year we came out with a study on algorithms and human rights.  Indeed, we have to debunk the fear.  The fear factor is not conducive to our debate.  Here is the first study to better understand the human rights implications.  Everything is available on our website.  If you need hard copies, contact me. 
      So concretely after that study, we drilled deeper.  And currently we are working on policy recommendations to come out with very concrete guidelines, not only for our member states but also for business.  We want to give some first indications with a focus on responsibility, on transparency, on accountability. 

By the end of this year you will be able to see a very first ‑‑ not a very first but a very ripe draft, I hope.  That will have to be adopted by our highest body, the Committee of Ministers. 
      These policy guidelines that we are developing are not exclusive to Europe.  On the contrary, they are blueprints.  They can be used globally.  You can pick and choose.  So whatever fits for your respective systems please use it.  And it's free of charge. 
      Now, we are, of course, aware of the manipulative powers of algorithms and of the technological developments, due to the huge speed that artificial intelligence systems have demonstrated and will further demonstrate.  These manipulative powers are extremely dangerous to elections.  We've already seen the first examples.  They will become more dangerous to elections, and I'm not sure that our politicians are sufficiently aware. 

Therefore, at the Council of Europe, we have decided to come up by the end of this year with a declaration to address these manipulative powers and to better protect our democratic principles.  That will be a cumbersome policy process, but we are ready to pick up the fight. 
      Now, we need to drill deeper, I said it.  Artificial intelligence is simple, we have heard.  To others it might be complex.  And I've said it already.  We must debunk the fear factor.  Therefore, we are currently working on a very extensive study in the field of artificial intelligence.  There, again, a first draft will be available in the course of the next two months.  It needs to be shaped further.  We are in dialogue with all stakeholders, especially with Civil Society.  We have excellent academia on board.  It's quite an endeavor. 
      So from that study we will then see what concretely we can do to move toward a regulatory framework for artificial intelligence.  Because I think it is necessary.  And we have heard from the late Stephen Hawking a fear expressed by a man who was very brave throughout his life.  And this man said: if we don't regulate artificial intelligence, it will regulate us.  Now, if a man like him was afraid, I think we need to really thoroughly reflect on what's going on here, and we cannot leave it all to the businesses, even if they have deep pockets.  And states always pretend to be poor and therefore to need the support of the money side.  This is too simple.  States must also assume their responsibilities. 

And Civil Society, I call upon all of you, they must hold states to account.  We need you here especially at the international level.  I'm very happy that UNESCO is always reaching out on all levels to Civil Society bringing them all on board and giving you the possibility to engage in dialogue and hold the states to account. 
      Now, last point.  Of course artificial intelligence is transversal in nature.  It goes into all walks of our societies.  Therefore, we have more colleagues at the Council of Europe working on it in their respective fields.  Let me just mention: data protection colleagues, bioethics colleagues, gender equality, of course, crime problems, combatting terrorism, and (?)  You'll see toward the end of the year already some policy guidelines when it comes to predictive justice.  It might be very important for all of you.  If you want to learn more, please go to our website. 

We have formed a task force at the Council of Europe for all of those topics.  We have a specific website on artificial intelligence.  So Google it.  That's always the simplest and you get right to our website.  Artificial intelligence for you.  Thank you. 

(Applause)

>> GUY BERGER:  Thank you so much, Sylvia.  In the interest of openness, I would say you can bring it or buy it there as well.  Right.  Thank you.  You made very, very important points. 
      I hope you all noticed the theory of change which is there's a role for governance, there's a role for business, and a role for Civil Society.  If you don't put those three together, you're not going to get the change which we need to be able to use AI as we would like to use it for rights and sustainable development. 

You raised very engaging questions about whether AI can be deployed to undermine the integrity of elections or can be used to support it.  This is up for grabs.  People know the European Parliament has elections next year.  Many countries have elections next year.  We're going to see this beginning to take place as we speak. 
      So, okay, let me move on to introducing our next person.  And I won't say ROAM, but I will cite the Universal Declaration of Human Rights ‑‑ its Article 19, after which Thomas' organization is named: the right to freedom of expression; everyone has the right to receive and impart information across all borders and using any media.  And here we have the organization Reporters Without Borders.  Elodie, please. 

>> ELODIE VIALLE:  Thank you.  Good morning, everyone.  It's a great honor to be here today.  I would like to thank UNESCO for this invitation.  I hope that you all know this map, the World Press Freedom Index.  You can take a picture and tweet it. 
      For more than 30 years Reporters Without Borders has been fighting for freedom of information.  We have been fighting for journalists who are sent to jail, who are arrested, who are tortured.  15 years ago we started to fight against what we call invisible prisons ‑‑ I mean online censorship and the massive surveillance used to spy on journalists and activists. 
      Today we're launching an international appeal for a pledge on information and democracy.  This is it.  To protect the public space of information.  The matter today, as has been said by the panelists, is that technology is growing faster than ever.  At Reporters Without Borders we are concerned with the rising misuse of these new technologies and their impact on freedom of information. 
      Last July we published a report on online harassment against journalists.  You can type "online harassment against journalists" into your favorite search engine; you will find it on our website.  We wrote this report to denounce the fact that online threats today are amplified by armies of trolls, which are increasingly used as a way to drown out reliable journalistic reporting. 
      So I would like to say that an AI winter is coming ‑‑ if we don't pay attention, if we don't establish guarantees to protect the new public space of information shaped by private actors.  That is why last September Reporters Without Borders launched an independent information and democracy commission gathering 25 personalities from 18 different countries ‑‑ Nobel Prize laureates such as Joseph Stiglitz, journalists, lawyers ‑‑ and they wrote this declaration on information and democracy.  This declaration recognizes the global information and communication space as a common good for humankind. 
      It also defends the right to information.  It says that platforms that contribute to the structuring of information must respect basic principles ‑‑ basic principles such as political, ideological, and religious neutrality.  They must establish mechanisms for promoting trustworthy information.  And last but not least, they must be open to inspection. 
      Last Sunday at the Paris Peace Forum, a dozen heads of state announced that they had decided to launch an initiative on information and democracy inspired by this declaration.  Among them, Senegalese President Macky Sall, Costa Rican President Carlos Alvarado Quesada, French President Emmanuel Macron, and Canadian Prime Minister Justin Trudeau. 

Besides this initiative, Reporters Without Borders is working on a complementary project to protect the integrity of the public debate, the journalism trust initiative.  Nnenna just said trust is fundamental, and we believe that trust is fundamental. 

So the journalism trust initiative is an inclusive debate within the community to set up standards on journalistic procedures and to provide concrete advantages like a kind of label for the public. 
      So what I wanted to do today is present this declaration on information and democracy ‑‑ the hashtag is #InformationDemocracy ‑‑ and the journalism trust initiative.  Those two initiatives operate at different levels.  They are what we are trying to develop to reinforce freedom of information in our globalized and digitalized societies. 

(Applause)

>> INDRAJIT BANERJEE:  Thank you, Elodie.  Thank you for your insights again.  Another interesting and new insight into the discussion on artificial intelligence with, of course, a much clearer focus on press freedom issues, openness, transparency, freedom of information. 

And you mentioned something which is very close to UNESCO's house; my colleague Guy Berger is very much involved in this.  It is the whole question of the safety of journalists.  We are a key player in the discussion, debate, and action around the U.N. Plan of Action on the Safety of Journalists and the Issue of Impunity.  That adds a new dimension.  What will be interesting to see is to what extent some of the declarations and reports you mention or publish ‑‑ and your work, as you know, is greatly appreciated across the world. 
      To what extent do they have, or will they have, an impact on the way we look at issues related to artificial intelligence?  That's the core of the discussion here.  How do press freedom, freedom of information, human rights, ethics ‑‑ all of that ‑‑ link with what's coming?  You call it the winter of artificial intelligence.  We are headed towards winter; everything will become winter soon.  But hopefully we can end on a much more positive note. 
      So thank you for those comments and insights.  I'm sure that the audience here will be looking forward to reading some of the reports, especially the recent ones you mentioned.  Some of your reports are well known and have been around for a long time, but some of the new declarations and reports published will be of great interest to the audience. 
      So I will give the floor now to my colleague, Mr. Guy Berger. 

>> GUY BERGER:  Thanks.  We want to come back to all the panelists in a minute.  First, we want to hear from you because we thought you might have some quick, short comments or questions.  And then they can respond to those.  And if you're going to be silent, we have questions for them.  But I'm sure you're not going to be silent. 

I want to single out somebody who is here, Amos.  Amos, are you still here?  Good.  Among the original panelists was David Kaye, the U.N. Special Rapporteur on freedom of opinion and expression.  He couldn't be here because he had to go back on account of the fires in California.  Amos is the legal advisor to Mr. Kaye.  We'll take questions and comments and then come back to the panel. 

>>  AMOS:  Sure.  Thank you very much for the excellent interventions.  I think we broadly align ourselves with the comments by Tom of ARTICLE 19 pertaining to the impact of AI and its developments on freedom of expression.  David apologizes, as Guy said, for not being here today.  But he recently put out a report mapping the human rights impact of AI, specifically on freedom of expression.  And I just want to raise two key points that he made in that report before the General Assembly. 
      One is, I think, given the role that artificial intelligence and automation play in content curation and personalization as well as commercial content moderation, one of the more overlooked aspects of freedom of expression is how that interferes with freedom of opinion.  Under Article 19 of the ICCPR and the UDHR, freedom of opinion is absolute and there is no permissible interference with freedom of opinion. 
      For that reason as well, there has been little jurisprudence on freedom of opinion.  David mentioned in his report that AI‑assisted curation may contain or reflect certain biases in its inputs and nevertheless be held out as objective curation of factual information, particularly in search results.  And that might involve some interference with freedom of opinion.  That's something that we believe should be researched and explored further in future policy and other discussions. 
      And I think the clearer impact that AI‑driven curation and moderation has on freedom of expression is on access to information.  And there we have much clearer principles and standards.  And it's not just the fact that AI is being used, but the fact that certain forms of AI‑assisted moderation and curation are so dominant, in part because of the monopoly that platforms have on online spaces. 

If you have any questions about the report, or if you want to check it out, it's on our website and on the General Assembly's website.  Thank you for letting me make this intervention. 

>> GUY BERGER:  Amos is the legal advisor to David Kaye, the Special Rapporteur on freedom of opinion and expression.  Comments, questions?  Please introduce yourself. 

>> AUDIENCE:  Hello, everyone.  My name is Manuela.  I'm a fellow.  I'm from Brazil.  My question is directed to Elodie, since you are a journalist.  I read this interesting article that talks about AI journalists who are producing news about simple things ‑‑ not simple, but, say, reporting that there's going to be an earthquake ‑‑ and you don't have a human producing this kind of news.  I'm wondering about the future, because the owner of the Narrative Science enterprise says that by 2025, 90% of the news will be produced like this. 

And today we talk about hate speech.  We talk about social bubbles.  And I wonder, if you are directing the speech at a person in a really personal way, telling them what they want to hear in a way that they understand directly, what is the perspective?  Because, you know, right now we have this problem with WhatsApp, Facebook, social bubbles.  If you have news produced for every person in a different way when they enter a journalist's website, what is the perspective?  Do you think it's possible? 

>> GUY BERGER:  That's really interesting.  Let's collect a few more comments, questions.  Anyone else in the front? 

>> AUDIENCE:  Thank you.  My name is Firdausi.  I am a student at the University of Barcelona.  I asked this question earlier this morning, but I didn't quite get the answer.  Maybe it is for ARTICLE 19, but others can also respond. 

What is the best approach for us in creating regulation ‑‑ U.N. regulation ‑‑ towards AI?  Do we consider AI independent, like humans who are minors and later become adults, so that its responsibility is not shared with other humans but held independently?  Because the way we live with technology is often by making analogies and metaphors with the physical environment.  We make an analogy when we access, for example, a website or email ‑‑ it is like we're entering other people's ‑‑
      (Feedback).

>> AUDIENCE:  I think that's the question. 

>> GUY BERGER:  Thank you.  Anyone else in the front here? 

>> AUDIENCE:  I'm Jacomb from the Central European Broadcasting Union.  We heard in the opening speech of President Macron some worries about the application of artificial intelligence to media and how this could blur the frontier between the true and the fake.  I would like to hear a general opinion on that from the panelists, because this will pose a lot of problems for the media, a lot of problems for journalists, and a lot of problems for the algorithms that are supposed to make journalism in the future. 
      The second question is for the Worldwide Web Foundation.  You talked about public data that has to be the basis for the common good in the future.  But the problem is that we have seen examples of public data made available ‑‑ the NHS data in the UK ‑‑ that have been hijacked and transformed into personal data belonging to each single person.  So how, in these regulations or rules for the future set‑up of the artificial intelligence world, can we be sure this data will remain public and not be hijacked and transformed into personal data? 

>> GUY BERGER:  Thank you. 

>> AUDIENCE:  Hello, my name is Carlos.  I have a simple question.  Mr. Thomas Hughes mentioned what I think is an important subject: delaying deployment.  I want to ask the panelists what they think of this idea of delaying deployment, because the world does not want to delay the deployment of many technologies.  And I think this is what we need to do. 

>> GUY BERGER:  Yes.  Very interesting points.  Yeah? 

>> AUDIENCE:  I think ‑‑ my name is Bhanu, I work for UNESCO.  One of the questions this panel was supposed to respond to is: what is the state of play for artificial intelligence as far as the sustainable development goals are concerned?  Perhaps the esteemed panelists can say a lot about the state of play.  What is the extent of the digital divide as far as artificial intelligence goes?  We call someone in Gabon today and find out whether someone is doing analytics, and we call that person again and they have been approached by somebody in Berkeley. 

Are our member states aware of this?  Are we doing something to increase the level of AI literacy globally? 

>> GUY BERGER:  Okay.  Okay.  Let's get back to our panel then.  Those who want to comment on any of the points or make additional points, we have 15 minutes left.  So each person no more than two and a half minutes.  Let's start at the end with Elodie. 

>> ELODIE VIALLE:  Okay.  Thank you.  I just want to say, I've mentioned the AI winter, but AI can also be an opportunity for the newsroom.  Algorithms can help journalists, for instance, to investigate.  I just want to mention the amazing work done by the OCCRP, the Organized Crime and Corruption Reporting Project.  They analyze enormous databases to investigate corruption, so we can use AI in a good way.  It can be very, very good for journalists.  I'll come back to your question on the future of journalism and how we can defend freedom of information considering the fact that this whole field is changing and being transformed today. 
      From my perspective, we can map four main threats to journalism today due to new technologies.  And I'll go fast.  First, the amplification of threats against journalists.  Today you can buy 10,000 threats for $45, and this is sold by private companies.  They have a social responsibility for the fact that their tools are used to harass people online, especially journalists, to intimidate them. 
      Secondly, microtargeted disinformation ‑‑ sorry for my accent.  Disinformation is a threat.  It's threatening this new public space of information today.  Disinformation is spread through chat apps as well.  Today in Brazil, 46% of Brazilian people share information through WhatsApp.  And of course, we all have in mind what happened during the last Brazilian election: a lot of false information about journalists was shared on WhatsApp to discredit them. 
      Thirdly, I would like to mention the technicality of disinformation and hate speech.  At Reporters Without Borders we feel concerned about the development of what we call deep fake videos, for instance.  We are following some cases of journalists, and especially female journalists, who have been harassed through this technique.  As Marko said, the thing with AI today is that everyone can use those technologies: you can find applications online very easily, and with those applications you can create false videos.  Some people mix journalists' faces with pornographic content to discredit them online. 

Last but not least, I would like to mention the impact of the algorithmic distribution of information.  This is what Eli Pariser called the filter bubble.  Now we can speak about echo chambers; social networks are like echo chambers.  We think we need more serendipity. 

>> GUY BERGER:  Elodie, serendipity.  Please. 

>> ELODIE VIALLE:  It's necessary to make journalism pluralistic again, I will say. 

>> GUY BERGER:  Thank you.  Before we come back ‑‑ maybe we take Thomas, because he's on this question of rights and expression and AI. 

>> THOMAS HUGHES:  So, three points.  Firstly, at the moment we see an enormous proliferation of standards and initiatives, with groups of governments and businesses and Civil Society coming together and producing different documents, which is very welcome, and the enthusiasm is great.  But it's extremely important that these initiatives comply with existing international human rights standards.  There is a tendency to start introducing new concepts, words, and pieces of legislation and not to connect these back to existing standards, those of the Council of Europe and so on. 

It's important that these continue to comply with those standards.  Although we're in perfect harmony around AI at the moment, that will not always be the case, and otherwise these will be standards that erode human rights standards as they currently exist.  The rights people have offline should also apply online.  That's the mantra.  Let's stick to it. 
      In terms of regulation, ARTICLE 19 is working with partners on an independent multistakeholder self‑regulatory body for social media and social media content.  I think that could be applicable to other layers within the Internet infrastructure, so that would be a pathway that could be looked at.  Certainly countries like Japan and Germany have also introduced nonbinding guidelines around AI, and advisory groups and so on from the state perspective.  That could be explored. 
      Lastly, on the issue of delayed deployment: one of my favorite quotes from the pioneers of social media is from Reid Hoffman, the LinkedIn founder, who said building LinkedIn was like jumping off a cliff and building a hang glider as he was falling.  That creates massive innovation, but it's extremely important that there are human rights impact assessments on AI, and that their implications are well understood in advance of deployment.  Whether that means a long delay or a short delay is not the question.  It's not about time but about doing the analysis, understanding it, and sharing the reports. 

One of my colleagues has just started another panel talking about human rights impact assessments.  I'm not encouraging you to leave, but I would encourage you to go to that after this. 

>> GUY BERGER:  As well as openness and accessibility and multistakeholder assessments. 

>> THOMAS HUGHES:  That's right. 

>> GUY BERGER:  There are indicators at UNESCO to help you do this. 

>> INDRAJIT BANERJEE:  Thank you, Guy.  I will quickly pass on the floor to our three next speakers.  Please keep your comments brief.  I'll begin with Nnenna. 

>> NNENNA NWAKANMA:  Thank you.  Just on the question about data.  The basis is that publicly funded data and statistics for public use should be open.  That is what we're pushing.  And the SDGs, that's still the Bible for development for the meantime.  Target 17.18, on data, is actually one of the goals of the SDGs: raising data capacity.  So data was at the beginning of the SDGs, because we need to know where we are.  As a goal itself, we need to enhance our capacity and have data available.  And at the end, because we need to monitor where we're coming from.  I still live by data.  And if you will allow me: our ROAM. 

>> INDRAJIT BANERJEE:  Thank you, Nnenna, again, for your salient and cryptic response on this one.  Now I pass the floor to His Excellency, the ambassador of Mexico. 

>> H.E. FEDERICO SALAS LOTFE:  Thank you very much.  I don't want to be pretentious, but I think the fact that there's a representative of a government here is very important, because what I've heard today is what all of us representatives of governments should be hearing and listening to.  This is the idea behind this panel today: to involve all stakeholders, Civil Society, the private sector. 

I think it's important that we have more interaction and mutual communication on all of these very, very important issues, and achieve the proper balance to deal with the questions and challenges that artificial intelligence presents to all of our societies. 
      As I mentioned in my original presentation, I think governments should establish strategies and initiatives to deal with all of these issues that we've talked about this morning, and at the same time there should be coordination between those strategies and cooperation amongst all states on this issue.  This is where UNESCO comes in.  I hope that this is something that will be a continuing development that we will pursue, with, of course, the support of my government.  It will be very important to get everybody on board to deal with these issues.  Thank you. 

>> INDRAJIT BANERJEE:  Thank you, Your Excellency.  I pass on quickly to Mr. Grobelnik.  We're almost out of time. 

>> MARKO GROBELNIK:  Thanks.  I will try to go quickly through the answers to the questions from the audience.  First, generated news.  You mentioned Narrative Science.  This is simple technology and won't go far; they are working with Bloomberg and the New York Times, who use it for very simple news.  This won't have an effect by itself, but automatically generated content, as Elodie said before, has its own place in things like trolling and social pressure across social media, not so much editorial content.  Now, going back to your question before on threats ‑‑ what Macron was saying, threats to media.  The problem is not visible threats.  The problems are invisible threats, the ones you don't see. 
      With today's AI technology you can influence and move society's mindset.  You can move mindsets.  These are invisible threats; maybe some of you are aware of them, but most of you are not.  So manipulation is a much bigger problem than, let's say, simple fake news.  Simple fake news we can find, right?  Manipulation we cannot find easily. 
      What's the problem with media and AI?  AI produces not quality content but speed.  Speed is a problem, because it's very easy to make somebody dirty in a second.  We can do this in one second with one improper tweet.  It's really hard for that person or organization to make their name clean again.  This disparity is causing the trouble, right? 
      Now, the state of play related to SDGs, digital divide, education.  Education is a problem, right?  It's not a problem of content; the content on how to learn AI, how to operate AI, is there.  The problem is the education system: teachers are not adjusted to this quick change, right? 
      Maybe just one more comment, since I'm a researcher, right?  I'm involved in the OECD, in the European Commission, in UNESCO.  Right now we are starting, soon, this UNESCO AI institute, which will take part in these topics as well.  What I see as a problem in all these bodies is that there is not enough competence about AI.  AI is happening while we talk, right?  Take, let's say, the European Parliament and others.  The problem is they don't actually know what AI is.  What's being produced?  What's possible? 

It's very easy to talk about high‑level goals.  But if you sneak behind the curtains of the big companies ‑‑ delayed deployment?  No way.  This is not realistic.  If you see what's actually happening behind them ‑‑ we can talk easily here about this topic, but it won't happen.  It's not about me or anybody here.  It's about other people, right, who are competing on that front. 
      Okay.  I could talk about this topic way more. 

>> GUY BERGER:  Sorry, Marko, to cut you there.  But we have Sylvia Grundmann, your last quick remarks, please? 

>> SYLVIA GRUNDMANN:  Thank you.  And let me come back to the real world for a moment and the real threats that cost real lives.  That is why the fight against impunity, where UNESCO is leading, is of utmost importance, and it needs all of our support. 

Therefore, at the Council of Europe we are at the moment devising an implementation strategy to better implement our policy recommendation on safety of journalists and other media actors because we want to all team up and inspire the governments to better protect journalism and journalists.  And stop impunity.  That's my number one. 
      My number two: artificial intelligence and media, and the blurring lines between fake and true.  Well, there are some antidotes.  For us, the antidote is support for quality journalism.  We are working on that.  We want to see how we can fire up governments to help quality journalism in all its different aspects, but without interfering with freedom of expression and the media ‑‑ a challenging exercise, but we're ready to embark on it and work on it.  At the end of this year you'll already see some fruits. 
      The last point: media literacy.  This was also mentioned.  We couldn't go into it here ‑‑ there is a specific forum for it ‑‑ but it's another area where we really have to team up, as I said, and where, again, we need resources.  So I call on all of you to help international organizations.  I know Civil Society also needs resources, but if we go more into pressuring the companies that have deep pockets to support the good cause in a neutral manner, we can have more impact. 
      I should not forget to say that for artificial intelligence the developments will be rapid; we've heard that from all of the panelists.  But I think we have to think more strategically and, despite the rapid developments, think long term.  So we are reflecting now on a strategic agenda to find the right balance between the benefits and the challenges, and always come back to the protection of human rights.  Thank you. 

>> GUY BERGER:  Thank you, Sylvia.  Outside you will see copies of the UNESCO publication, and it's called, surprise, surprise, Artificial Intelligence: the promises and the threats.  The way we've been talking, we've been talking about the threats and the promises, but it's important to say promises first.  The threats we've discussed: AI needs to do no harm.  But AI also needs to make good on a promise, which is to strengthen human rights and sustainable development. 

I also want to tell you that tomorrow at the Mozilla Foundation in Paris there's a meeting dedicated to advising UNESCO: what should we as an organization do that will really make an impact in this field?  You're welcome to come if you want to. 
      I want to thank Bhanu and John and Justin, who have organized this, and I want to thank the panelists, and I want to thank you.  Enjoy the rest of the IGF, and I want to thank my co‑moderator.  I think it's been a great session. 

So don't forget the word that begins with R and ends with M and look at the UNESCO resources on that topic.  It's very relevant to AI and all these questions.  Thank you. 

 
