IGF 2018 - Day 2 - Salle XII - WS161 Information Disorders: Towards Digital Citizenship

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  Hello, everyone.  We will start the panel.  I will first give the floor to Divina Frau‑Meigs who will present the panel.

   >> MODERATOR:  Thank you, and thank you for gathering here at the IGF.  I am particularly happy to welcome my young students, who are present here and have had the experience of participating in a global event like this.  I think, like all of you, we want young people to be present and to benefit from our exchanges.

I also want to thank the panel for being here.  Some of you have come from a long way, and we'll go into a bit more -- do you have an echo like me?  Is there an echo in the room?  No?  It's okay?

I also wanted to thank the panel because they have come from different parts of the world, and we'll give each of them time.  I just want to make sure we know how it's going to work.  Each panelist will be given about five minutes to present their perspective on this idea of information disorders and the kind of solutions they are proposing.  Then there will be an exchange in the room with them, and then there will be a second phase where people in the room can also contribute the solutions and perspectives that they have, so that we really have a two-way dialogue and can chart the way forward together, and that will be another good 40 minutes.

So the important thing for us is to interact as much as we can on this issue.  The issue is also to have online participation and Francesca over there will be overseeing this and will also signal to us some questions that can be coming from abroad.

And to go to the theme of this session: when we started proposing it, I thought that, across the Internet society in general, information disorder was really big and everybody was becoming aware of it, be it radicalization, be it misinformation, but I think we're now a little bit further along in this conversation.  We have taken stock, and some of our panelists will also help us take stock, so I think we would all like to push the conversation more toward solutions: how do we find solutions, and what kind of solutions can come from the public sector, from the private sector, from civil society, from youth?  This is the spirit in which we devised this session, and this is what we would like to take away.  We will be reporting on it, so you will have access to the report; Pascale over there is doing the reporting.  Okay.  Do you want to start?

>> Thank you, Divina.  So we will quickly present our panelists.  First, Emmanuel Adjovi, from an intergovernmental organization.  I'm sorry, I'll correct that on the website also.

We also have Denis Teyssou.  We have Paula Forteza from the French Parliament, Villano Qiriazi from the Council of Europe, and Rasmus Nielsen from the University of Oxford.  We want to give you the floor first, Rasmus.  Thank you.

>> Thanks very much for the invitation to join this panel.  I think the premise of conversations here at IGF, as elsewhere, is that we all recognize that disinformation represents a wide variety of different problems that cannot be usefully collapsed into a single problem.  We also, I think, all recognize that these different problems are animated by a wide variety of different actors and operate in different contexts, and that the response has to recognize the diversity of the problems: there is a difference between foreign interference in elections, domestic problems with misinformation, and bottom-up misinformation spread in good faith by individual citizens that may still be harmful.  Again, the context matters, and we have to tailor our responses to that.

I think we also increasingly recognize, as we discuss this around the world, that part of the difficulty we have in responding effectively is that many problems of misinformation are enabled by technologies that are also used for entirely legitimate purposes, ranging from the completely mundane, where we simply entertain ourselves or coordinate everyday life or purchase things online, to things that many of us in this room might think of as progressive.  The same technologies that have enabled propaganda and disinformation and hate speech have also enabled something like Me Too, for example, to flourish in many different countries around the world, in ways that demonstrably were not the case before digital media.

So how can we respond to the variety of different disinformation problems in ways that minimize harm without jeopardizing gains and future potential?

How can we respond in ways that take these problems seriously, but also try to understand their scale and scope and bring evidence to a discussion that is sometimes dominated by self-interested voices, whether from private companies or other sides, and sometimes used to undercut people's confidence in independent institutions that are trying to hold, for example, politicians to account, or to enable free communication among citizens in ways that are sometimes politically uncomfortable?  So I think it's very important not to confuse a crisis for the political establishment with a crisis for democracy; these are not necessarily the same things.

So in this context, my personal suggestion would be that we start from the idea of fundamental rights, and that we recognize that many problems of disinformation are fundamentally not about the distinction between truth and falsehood, and not simply about things that are harmful or not harmful, but are deeply ambiguous, because they reflect ambiguity in our universe: we live in diverse societies where people have different views of what the good life looks like, and often disagree strongly about how to live those lives, and how to live those lives together.

So my personal suggestion would be to think about how we empower our societies and renew robust institutions to allow citizens to make good use of that liberty.  First of all, that is about avoiding rushing into direct content regulation.  I think it is also about addressing other issues that are important but should be treated separately from disinformation; these would include privacy, competition, and national security, which should not be collapsed into disinformation from my point of view.  Secondly, I think we need to focus on the role of public authorities not primarily in terms of direct intervention and content regulation, which I think we have many reasons to be skeptical of if we want people to be free and to have robust exchanges of views, but in terms of orchestrating and incentivizing collaborative multistakeholder approaches, or what I call a soft power approach, which is very different from a hard power approach that directly intervenes and forces entities to do specific things; instead, it tries to support independent media, civil society, media and information literacy, researchers, and others as they try to equip citizens to navigate and evaluate the information environment.

And finally, thirdly, I think we need to monitor progress.  I'm personally a big believer in the idea of multistakeholder collaborative responses based on soft power rather than direct intervention, which I think we have many reasons to believe risks being more harmful than the diseases it purports to cure; but of course self-regulation and multistakeholder approaches should not be an excuse for inaction or foot-dragging, and they need progress assessments and greater access to data and research, for public authorities and also for independent researchers and third parties that can independently assess progress.  No private company should be allowed to mark its own homework.

We would never regulate our physical infrastructure without access to serious and rigorous assessments of what we're trying to do, lest bridges collapse, as we've seen in Europe, and it shocks me that we're sometimes considering doing exactly that for the infrastructure of free expression.  I think we should be as serious about the infrastructure of freedom of expression as about any other infrastructure, and I hope you will join me in moving toward such an assessment.  Thank you.

   >> MODERATOR:  Thank you, Rasmus, for looking ahead to the role of research, and for your caution about how to use it and about the skepticism we should bring to other kinds of reactions.

As a result, I think I'm going to ask Paula Forteza to take over, because he has set a challenge -- he's skeptical about the role of public authorities -- and you have had to deal with this.  So could you speak to us about what information disorders do to politicians, and how some of the solutions that you have looked at and been active in could come up as possibilities to consider worldwide?

   >> PAULA FORTEZA:  Yeah, so information disorders are really having a great impact on our democracies, and we tried to tackle this issue through the Fake News Law that France put in place.  We had a lot of debate around this law, which demonstrates how delicate this issue is and how many fundamental rights are at stake when we're talking about these issues: we're talking about freedom of speech, we're talking about access to information, we're talking about the right to fair elections, so this is a very, very delicate issue.

What we tried to do in France was a deliberate choice not to take the German road, which gives companies the responsibility to take down content when they think it is fake news, illegal content, or defamatory content.

This can be very tricky in terms of freedom of speech, because when you have a very large fine, companies take down more content than they should, because they don't want to be fined.  So what we tried to do in France was to put a judge in the picture and to really circumscribe the fake news issue to election periods, and there was a lot of debate around whether a judge is able to define what the truth is: is the truth contextual, is the truth cultural, are we creating a Ministry of Truth like in the book 1984, which scares us all?

So we tried to find this balance, and we did circumscribe this as much as possible, and we also made a lot of advances in terms of transparency, which was, I think, the best part of the law, because we required transparency around sponsored content on social media.  We also worked a lot on transparency of recommendation algorithms, and we tried to do this without going against trade secrets, because as we know, we can't ask a company to hand over their algorithms or their source code on a mandatory basis, but we can ask them to give statistics about the outputs of these algorithms, which can help civil society researchers understand the biases of these algorithms and whether some content raises more issues than other content.
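To make the idea concrete, here is a minimal sketch, assuming a hypothetical disclosure log and made-up category names (this is not the mechanism the law prescribes): aggregate statistics about what a recommendation algorithm surfaced can reveal skew to researchers without any access to the source code.

```python
from collections import Counter

# Hypothetical log a platform might disclose: the category of each
# piece of content its recommendation algorithm surfaced to users.
recommendation_log = [
    "news", "entertainment", "news", "conspiracy", "news",
    "entertainment", "conspiracy", "conspiracy", "sports", "news",
]

def output_statistics(log):
    """Share of recommendations per content category."""
    counts = Counter(log)
    total = len(log)
    return {category: count / total for category, count in counts.items()}

# Researchers could compare these shares against a baseline (e.g. each
# category's share of the overall catalogue) to flag over-amplification.
print(output_statistics(recommendation_log))
```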

That's what we were working on at the French level, but it is not enough.  Why is it not enough?  Because these issues are evolving very quickly, on a daily basis.  Legislation needs to be updated very quickly, and that's why I proposed a sunset clause for this law.  I wasn't heard, but I think this is one of the pieces of the kind of adaptive regulation that we're trying to put in place, where we can update the legal framework at the same speed at which technology changes.

And one example of why this isn't enough is what happened in Brazil during the last election, an election that touched me on a personal basis, because I represent French people living in Latin America, and I think what happened during this election is dangerous.

We had a very coordinated campaign of fake news led by Bolsonaro.  Up to 3 million dollars was paid to companies to send messages through WhatsApp, for instance, and this is very tricky compared with the kind of fake news we're used to, because it's private messaging, so the messages are encrypted; we can't recognize or identify that this fake news is being diffused.  This is not entirely new, because we have always had spam campaigns online through mailing lists, for instance -- that was very present in 2005 around the referendum on the Constitution for Europe -- but in Brazil this also took place because they don't have the kind of protections that we're trying to put in place in Europe with the GDPR.  We have protections of personal data, which mean we don't have these huge mailing lists that are on sale and can be acquired very easily, as was the case in Brazil, and we have net neutrality policies, whereas in Brazil zero-rating practices mean that some platforms or messaging applications consolidate the majority of the users.

In Brazil, 96% of smartphone owners are on WhatsApp, so it is very easy and very quick to diffuse messages.  Some of the messages that were conveyed said, for instance, that the candidate of the left was going to introduce a "gay kit" in schools to teach children how to be gay -- which is nonsense -- and they were faking pictures of left-wing candidates next to Fidel Castro to show how extremist and polarized they supposedly are, and all of this diffused very, very quickly thanks to this lack of regulation in Brazil.  But this is a new form of fake news, and it raises a lot of questions about how we can regulate it effectively.  So just a couple of months after the vote on this Fake News Law in France, we're already seeing new ways for fake news to spread, so I encourage all of us to really think about how we can have the most flexible and adaptive regulation when we're talking about these issues, and of course to work on other kinds of solutions, such as education, but I'm sure the other panelists will talk about that.  Thank you.

>> Thank you, Paula.  Indeed, transparency is quite important for understanding bias and is key in fighting fake news and information disorder.  I would like to give the floor to Denis Teyssou to talk to us more about the challenges and the solution that you are working on.

   >> DENIS TEYSSOU:  Thank you, and thanks for the invitation to join this panel.  For the last three years I have been working on a European Horizon 2020 project called InVID, a project about the verification of videos on social networks, and obviously, since the end of 2016, we have been tackling and debunking a lot of so-called fake news.  We prefer to use the word disinformation.  More and more, what we see is that images are increasingly important in information and in information disorders, especially because images trigger more emotional reactions from the audience, and they are more and more shared as video: for instance, decontextualized videos coming from the past are put back on social networks to pass as some kind of new breaking news, especially during elections, when we also see a lot of fake content coming out -- but not only about elections.

What we have also demonstrated during the project is that there are campaigns, especially against migrants, in different countries around Europe and even around the world, using the same kind of videos in a different context in each country, where the only similarity is the similarity of the image.  Even fake news that is debunked in one country reappears a few days later in another country, so the fake news comes back in a loop: month after month, the same kind of pictures, the same kind of racist campaigns come back on social networks, on private social networks as well as, of course, on Facebook and others.

So this is one of the problems, and that's why we provided tools to journalists, to human rights defenders, and to others, to be able to debunk fake news rapidly: if we want to limit the virality of this misinformation and the way it travels, we need to debunk it very quickly, so as to inform the network that this content is not appropriate, that it is misleading.  Then sometimes, or even most of the time, we can stop the spreading of this content, so that it has less effect in terms of swindling voters and misleading citizens.

I would insist on that notion of speed, because it is becoming more and more important.  As you may know, we are foreseeing new technologies which are still separate for now: we have facial reenactment technologies, like Face2Face, where anybody can imitate the face of any famous person, and we have voice cloning, where some software programs claim they can learn the voice of anybody from 20 minutes of speech.  If you combine all of those technologies, plus images created by artificial intelligence, you can create almost anything, making anybody say anything.  This is dangerous, as we will see, and especially dangerous because, from the journalistic point of view, we lose the source.  There is a deletion of the source on the Internet: when you ask young people where they get the news, they tell you almost all the time on Instagram, on Facebook, on Twitter, but you never catch the source of the news, and therefore the accuracy of what we're reading -- and it seems they don't bother about that.  So in that respect, I would say that media literacy is key to making people understand that sources are extremely important, especially today, when information is traveling all over the networks: you need to think about the source and to identify the source, because in a world where we are going to have deepfakes and this kind of thing, it will be very important to be aware and simply not to believe any kind of manipulation.

   >> MODERATOR:  Thank you for this perfect fake scenario that we have to take into account in our discussion.  We have heard the perspective of one profession among the stakeholders; there are also the pure players in the private sector who are stakeholders, and that's why we would like to hear the perspective of one of the European search engines that may present an alternative.  So please.

>> Thank you.  I hate to say what I'm going to say here, but I have bad news for all of us: fake news is not going to disappear.  As a matter of fact, it has always been here, since the beginning of time; we have always had extremists, people who disliked each other and spread news that was absolutely wrong.

So the difference we are experiencing these days is obviously the magnifying lens that the information platforms provide and, dare I say, the monopolies that they represent in the world today.

Not to mention that there are other distribution monopolies out there, obviously, and we have mentioned a few of them here.

So one option to fight fake news is to address the problem of monopolies and the dominance of information.  In the search engine space, which as you know is where we play, we are obviously facing -- and dare I say "giants" is probably not even the right word -- huge companies that rule most of the information we experience every day.  Nevertheless, we took up the challenge and tried to develop a search engine that brings forward a certain number of values that encourage transparency and trust, and those are privacy and respect.  It is absolutely key to understand that there may be different perspectives in presenting data and presenting information, but the search engine that you trust should not be personalizing or overpersonalizing the data you are getting from the platform; so the absence of personalization is one key element of the fight against fake news.

Qwant does not collect any personal information whatsoever.  We don't care about who you are, where you are from, or what your interests are; all we care about is providing information based on your request and on what is available on the web.

So once we have done that, we forget absolutely everything that you have done with us -- in fact, we never even knew it during the request -- and that puts us in a position where users can trust that the results they see are the same for everyone, since there is no personalization.

So I want to put the emphasis on the fact that overpersonalization is a drama in today's world, and there are alternatives, which are basically about favoring the emergence of competition in spaces where monopolies are observed.  I think the European Union has made some moves in this space recently, and this is going in the right direction.

I think this is pretty much what I wanted to say to you today.  I just want to repeat myself somewhat and say that the key challenge is the discoverability of information: when information is concentrated on one single platform, obviously you cannot discover alternative points of view, alternative views that may challenge your perspective.  So again, for us the issue and the challenge is to multiply the sources of information, which has the benefit of providing you with different perspectives on the information you experience.  Thank you.

   >> MODERATOR:  Thank you very much for a very different perspective on business, one that shows a business mentality centered on data, data privacy, and views, and coming in particular from the self-regulatory side of the players.  There is also the option of co-regulation, of intergovernmental agencies and institutions that can bring several partners together, so that's why I'd like to hear what Emmanuel has to say about this issue.

   >> EMMANUEL ADJOVI:  Thank you for giving me the opportunity to join this panel.  The dangers of disinformation are well known: there are dangers for democracy, for society, for the economy.  Most of the policies that have been put forward seem ill adapted to the nature of this business, which is global and unbounded by national territories, while regulation most often remains national.  Some suggest entrusting the response to trusted institutions such as the media and universities --

In my perspective, disinformation will continue as long as some people benefit from manipulation or lies.  I would like to focus my presentation on two categories of solutions or suggestions.  The first concerns education about social networks.

It is important to educate users.  Education works with the population to help them hone their judgment; that is really important.  It is also important to introduce into school systems an education for understanding the global information ecosystem: its actors, its challenges, and how it works.

We have to teach children to be critical of digital productions.  It is also important, in my perspective, to teach children to be selective in their content consumption on the Internet.

The second category is about citizen cooperation, involving citizens themselves.  It is possible to set up websites that allow citizens to report suspicious content on the Internet; as a second step, it is important to give the media or the police the possibility to act on those reports.

Finally, since this is also an emotional issue, it is important to teach the adults entrusted with informing citizens to communicate facts in a way that takes into account their emotional, sentimental, and irrational impact; the way we communicate is important.

Thank you.

   >> MODERATOR:  Thank you very much for this case for co-regulation and for critical thinking for citizens.  I think the Council of Europe also has a few other solutions, so can you tell us about them, please?

>> Good afternoon, and thank you again for this opportunity to talk briefly about a recent project on digital citizenship education.  Actually, there are two ways to tackle information and all these information disorders: one is prevention, and the other is fighting, that is, protecting people from the negative consequences of disinformation.

If you attended other discussions, or followed the previous IGFs or the Council of Europe's work in the field, you already know that we have been doing a lot in combating fake news, information disorders, et cetera, and in contributing to the good governance of the Internet.

But I would like to brief you on the other side of the picture, which is prevention, meaning that you cut fake news and information disorders off at the source.  What I mean is that you educate the young generations in how to deal with information.

And it's not just information disorders: dealing with information itself is a big challenge, because there is so much of it.  Yes, there has been fake news throughout the history of humankind, but such a vast amount of information is really a first, something of recent years.

So far, the Council of Europe's action with regard to the Internet has mostly been about protecting children, but in 2018 a new intergovernmental project on digital citizenship education was launched, to respond to the challenges that the Internet and digital technologies bring.

So, a few words about the concept of digital citizenship education that we are proposing to our citizens in Europe.  What we mean by digital citizenship is empowering learners, in particular children and young learners, with the necessary competences, so that they are able to engage positively, critically, and competently in the digital environment, and to practice forms of social participation that are respectful of human rights and dignity through the responsible use of technology.

This project builds on the Council of Europe's long-standing program on education for democratic citizenship, which promotes responsible and active citizenship throughout the Council of Europe Member States.  A digital citizen, for us, is one who creates, shares, socializes, even works online, and participates actively in local, national, and global communities -- for example, (?) is a global campaign showing that everyone can start something globally -- at the political, economic, and social levels.  Such a person seeks continuous personal and professional development, which means life-long learning in formal, informal, and non-formal settings, and, most importantly, the digital citizen defends, respects, and protects human rights.

The digital citizenship education concept, the way we see it, builds on the competences for democratic culture.  Being online doesn't mean you're fully disconnected from real, offline life: what applies to your offline life applies to your online life, so we need to build people's personalities, capacities, and competences to deal with the challenges of life, not only offline but also in the online environment.

There are 10 digital domains in which we would like to improve digital education, divided under three main clusters: being online, well-being online, and rights online.  These 10 digital domains will help us steer the development process of digital citizenship education; they are the underpinning concepts for digital citizenship education.  For example, media and information literacy is one of our digital citizenship domains; it is necessary in order to gather information, process information, and proceed with information.

What I mean is that, yes, it's a good opportunity to get information, but it's up to you how to process it -- information can be manipulated -- and how you proceed with it: do you share it directly, or do you check its credibility first and then endorse it, as on Twitter or Facebook?  You can share something, but you have to make sure that what you're sharing is, you know, credible.

But media and information literacy alone is not enough.  You also have to develop competences like ethics and empathy in the learners, because before they share information, they have to realize that this information may affect many people's lives, so they have to take the other's point of view.  You are also creating your e-presence and digital footprint, so you have to think about what kind of reputation you will have.  And then, lastly, privacy and security: privacy mainly means the protection of one's own and others' online information, while security relates to one's own range of online actions and behaviors, and it covers competences such as information management and online safety, that is, dealing with dangers and unpleasant situations.

And the topic of this year's IGF is the Internet of Trust: if you want an Internet of trust, then you have to have trusted citizens who produce trusted information and content.  We are concerned about artificial intelligence, but we are the ones feeding artificial intelligence by sharing information and creating information, and then we face the consequences.

And as we say in Turkey, the best treatment is not to get sick, so if we can prevent this from happening at very early ages, starting from preschool, then I hope that in the future we may have fewer problems regarding information disorders.  Thank you.

   >> MODERATOR:  Thank you very much.  This concludes our first round of discussions.  We have an hour left, so let's have a little exchange of questions from the floor to the panelists before we really open the floor to presentations and questions about possible options.  Yes, please introduce yourself so that we know who you are.

>> Thank you very much.

(no English translation).

   >> MODERATOR:  Just to everybody on the panel: his question is about cybersecurity and social media.  It's hard to have control over them, it's hard to know which entity has control of them, so what can we do about that?

>> I would like to respond.  (no English translation).

>> Yes, from Europe.  On the topic of prevention, because I also wanted to address that topic, I haven't really heard anything about the socioeconomic factors around fake news.  To take the example of the United States, there are a lot of people who are pro-Trump, for instance, and it's clearly tied to their socioeconomic situation -- loss of employment, poverty -- and many of these people are ready to jump on any kind of easy explanation for the situation they're in.  I'm wondering if you have a response to that: if we continue into a world where people are poorer and poorer and inequalities are rising, I wouldn't be surprised if fake news also rises, and you can do whatever -- you can filter, you can censor, you can educate -- but the underlying cause is that people are kind of lost.  Between the top economists who try to explain in very complex concepts why we got to this point -- even explaining the 2008 crisis, for instance, good luck -- and the people who say it's the migrants, it's this, it's that, which is a person in that situation likely to believe?  So I wonder if you have anything to say about that.

   >> MODERATOR:  I can tell this one is for you.

>> Thanks for raising that.  I think that is critically important, and we see in country after country around the world that a critical variable in how severe problems of disinformation are is the level of discontent in the public, and the degree to which political actors are seeking to speak to that discontent, sometimes in ways that are not premised on truth in public discourse, if you will.

So I think there is no question that the backdrop to this is a profound crisis of public confidence in institutions in many societies around the world, and I think we can also see very clearly that in countries where this crisis of confidence is far less pronounced, arguably we have seen far less severe problems of disinformation, even though the very same technologies are equally or even more widely used.

I would point to Sweden, for example, but also to Germany, so it's clear that technology is integral to the way in which disinformation is spread, no question about it.  It's also clear that the companies that profit from these technologies can do a much better job of taking their responsibilities seriously.  But fundamentally, much of the problem of disinformation is deeply political, and the only thing I will add to what you said, which I think is important, is that the country where the empirical research on this is most developed is the United States, and one thing sometimes overlooked in discussions like this is that, as important as discussions of media literacy for young people are -- and I second the work of the Council of Europe -- most research suggests that the greatest consumers and disseminators of disinformation online in the U.S. electoral context were in fact older men, often highly partisan.

So I think we should be quite careful about associating this problem with younger people, and it suggests we should really recognize that much of this is fueled by deep, deep discontent with established institutions.

   >> AUDIENCE MEMBER:  Hello, my name is Amy.  My question is to the speakers who raised transparency measures on intermediaries or platforms, as well as to those who responded.  There are a couple of transparency measures that I think are well understood and have been given as examples throughout the conference.  One is the idea of transparency around paid content: who it's targeting, who paid for it, and so forth.  Another kind is the type of reports about the way moderation is happening: for example, how many complaints they got, how much of that content was taken down, how many times it was appealed, and what the results of those appeals were.  But beyond those two vectors of potential transparency, are there other areas of transparency that you think would be useful -- and specifically to Rasmus, what inputs do you need in research to take the next steps on the problem?  So what other types of transparency would be useful?  Thank you.

>> I think it's a collective enterprise to develop those, and I'm very glad that Paula and others are trying to push this discussion politically.  We heard earlier in the panel, including from regulators, that no one on their own has the answer to what we're looking for here, and I'm glad there is a central discussion here at IGF about what we should be asking for.

What I would say, I think at this stage, is that from the point of view of the independent research community, it's clear that we are currently unable to really assess the scale and the scope of the problems that we face and the effectiveness of responses.

I would say for two reasons.  One has to do with access to data -- primary data from the platforms themselves; the methods we have that do not rely on primary source data often don't have the granularity, or can't keep up with the pace of things, to really address these questions in real time, and only the companies themselves have that access.

Now, I realize, of course, that there are very real concerns about data protection and privacy here, and I think if you went to Facebook these days, said you came from a famous medieval university in the UK, added the word "Analytica," and suggested you wanted access to data, they might chase you away with armed guards or something.  So there is a complication here that we need to recognize as a real complication.

I think the other thing is not about transparency but about funding, and I have to say that if politicians in high-income democracies had been as serious about supporting research into disinformation as they have been about talking about it, I think we would know a lot more about where we are and how effective the responses we've seen so far have been, and thus perhaps we would be able to protect our democracies better.

>> Yeah, on this issue of transparency, I'm an activist for transparency on all issues, so I think we can't be transparent enough, even as a private company; it's a source of trust, and we're talking about ways to build trust.  But there was an interesting thing proposed yesterday, which was a pilot where regulators can access the information systems of Facebook.  This is the first time something like that has been tried, and I think it's very interesting that they're trying to copy the model of the banking system.  I think when they see all the data that is available, they will also understand which data can be released or requested directly by a citizen.  It's the first time we open the black box, so this is meant to be a very interesting experiment.

There is also a lot we can do with existing data that we can scrape from websites or access through APIs; most social media have APIs.  For instance, there is an interesting report by the Knight Foundation in which they tried to understand which fake news is deployed automatically through bots, and you can really see how certain profiles of accounts are very easy to identify as bots, because they tweet, for instance, at regular intervals, or they use a certain kind of vocabulary, or they behave as accounts in a way that means they must be bots.  So you have, for instance, a very interesting website developed in Brazil called Pegabots, where you enter your account, or any account, and it automatically tells you the probability that the account is a bot or a fake account.

So this kind of analysis can already be done with the information that is available at the moment.
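As a minimal sketch of the kind of heuristic just described -- with made-up thresholds and data, not Pegabots' actual model -- an account that posts at suspiciously even intervals and leans on a fixed vocabulary scores as more bot-like:

```python
from statistics import mean, pstdev

def bot_likelihood(post_timestamps, bot_vocabulary, posts):
    """Naive bot score in [0, 1] combining two signals mentioned above:
    regularity of posting intervals and reuse of flagged vocabulary.
    Weights and signals are illustrative assumptions."""
    intervals = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    if not intervals or not posts:
        return 0.0
    # Highly regular intervals (low spread relative to the mean) suggest automation.
    regularity = 1.0 - min(pstdev(intervals) / (mean(intervals) or 1.0), 1.0)
    # Fraction of posts containing any flagged phrase.
    flagged = sum(any(w in p.lower() for w in bot_vocabulary) for p in posts)
    vocab_signal = flagged / len(posts)
    return 0.6 * regularity + 0.4 * vocab_signal

# Hypothetical account posting every hour with repeated campaign slogans.
timestamps = [0, 3600, 7200, 10800, 14400]
posts = ["Share this NOW!", "the truth they hide", "Share this NOW!",
         "wake up", "Share this NOW!"]
print(bot_likelihood(timestamps, {"share this now", "wake up"}, posts))  # ~0.92
```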

>> Yeah, I would just like to build on what has been said.  Transparency poses a problem of neutrality versus editorialization versus censorship, and if you look at the problem from those three angles, then you basically ask yourself to what degree we should combine information delivery with the algorithms that have made this delivery possible.  In other words, when Paula was talking about these bots and the discoverability of those bots: if Facebook were to implement such bot detection to censor the content those bots have been producing, then you would start asking yourself what neutrality means here, and how I can trust what Facebook delivers -- or any other social media platform or search engine, it doesn't matter.

So one question we may ask ourselves is: should we deliver information in the same way we deliver press information in newspapers, where, when you buy a newspaper, you know its bias, which is basically the brand of the newspaper, the editorialization bias?

When we deliver information on the web, should we also make it available together with the algorithms that have made this information accessible, and in consequence accompany the information delivery with enough metadata -- excuse the lingo here -- to allow users to understand to what extent this information can be trusted and what was left to the information selection?
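One minimal way to picture that proposal -- with hypothetical field names, not any platform's actual API -- is a result object that carries machine-readable provenance alongside the content, so a client can surface why an item was selected:

```python
from dataclasses import dataclass, field

@dataclass
class ResultProvenance:
    """Hypothetical metadata attached to one delivered result; the fields
    are illustrative assumptions, not an existing standard."""
    source: str                  # original publisher of the content
    ranking_signals: dict = field(default_factory=dict)  # why it ranked
    personalized: bool = False   # whether user data influenced selection
    sponsored: bool = False      # paid-placement disclosure

@dataclass
class DeliveredItem:
    content: str
    provenance: ResultProvenance

item = DeliveredItem(
    content="Example headline about candidate X",
    provenance=ResultProvenance(
        source="example-news.org",
        ranking_signals={"recency": 0.9, "popularity": 0.7},
    ),
)
# A client could display the provenance so users can judge trust themselves.
print(item.provenance.source, item.provenance.personalized)
```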

   >> MODERATOR:  So the last question from the room, sir?

>> (no English translation).

>> (no English translation).

   >> MODERATOR:  I think some of the answers could also be cooperation, exchanges of practices, et cetera, and this is what I would like to hear from the floor now.  Some of you have practices, have applications.  I see some people I know, so don't ask me to name you, but go ahead -- people from digital parenting, people from storytime, et cetera -- can you please present to us what you've come up with as solutions?

>> Good afternoon, my name is Claude, and today I'm here representing some work that we're doing in digital intelligence capacity building.  In October of 2016, as part of one of our workshops, an NGO partnership called the DQ Institute was launched.

   >> MODERATOR:  Can you speak away from the mic because we hear the feedback.

   >> AUDIENCE MEMBER:  Is that better?  No?  Okay.  Is that okay?  Everybody can hear me?  Excellent.

Okay.  Is that okay?  All right.

In October 2016, the DQ Institute was launched in Singapore through a partnership with the World Economic Forum.  The Institute has been able to essentially expand its program to 30 countries around the world; it took just over a year, and we're now active in 30 countries in 15 different languages.  What the DQ Institute has is, first of all, a digital intelligence framework -- a definition of different competency levels that essentially explains and provides capacity building around digital citizenship skills -- including a content-delivery platform and a measurement platform.  This is a very scalable solution, and we've been able to launch in three more countries, including two in Africa -- one in Nigeria and one in South Africa -- where we have two pilots active.

And out of this DQ Institute, through the Forum, we have been able to evangelize this framework and create a Coalition for Digital Intelligence, which involves the OECD and the IEEE -- so the education side, the private side, and the public-sector side.  Through the IEEE we're creating a global standard around the DQ Framework, and through the OECD the framework has been adopted into their 2030 future education model and officially endorsed.  So we are expanding, and we're looking for any countries or partnerships that are willing to look into the DQ Framework, adopt it, and essentially help us build out the solution moving forward.

It's focused on 8- to 12-year-olds, but we're also expanding to other demographics, because, as was rightly said, digital citizenship isn't just for the youth but for many different demographics.  So if you're interested, look up DQinstitute.org.

   >> MODERATOR:  Anybody else?  Yes.  And then you back there?

   >> AUDIENCE MEMBER:  Yeah.  I think there have been an enormous number of initiatives, but, having worked in a dozen countries in Africa, in Australia, and all over the world, I see one fundamental problem: basically, how many of us here are actually educators who are going to take these ideas and put them in the classroom?  Because this has always been the problem.  The people who talk about these issues aren't the teachers, and the teachers who would like to do something about it are not getting the teacher training that would help them actually do something about information disorders.  There seems to be a real gap.  I've been saying for years: how can you have an IGF without the people who are educating the children?  And how can you have people who are not trained psychologists and teachers going into schools and teaching about these things?  There is a real gap; there is a need for multidisciplinarity.  And I'm afraid, after 50 years as a teacher and the last 25 years trying to do something in this field, we're not going to make it until we start bringing the educational staff in here, giving them an equal voice, and not trying to tell them what to do about information disorders.  They're part of the issue, so they should be part of this discussion.

Just a question: how many teachers are there in this room?  Or how many people in this room train teachers to actually do something about these issues?  Most of us are university teachers who aren't teacher trainers.  Thank you.

   >> MODERATOR:  We tried to cheat, but you saw through it.  I see there is someone who wants to answer.

>> Yeah.  Thank you for bringing up this crucial issue.  We became very sensitive to this problem back at the time of the unfortunate terrorist attacks in Paris in 2015, when we looked at what was happening the day after the attacks and at what Google was offering, and unfortunately, all you saw were horrendously horrible pictures of dead bodies on the floor.  Conversely, if you typed anything related to nudity at that very time, you would have seen everything perfectly clean and filtered.  So the question is about culture, about the education of young people -- students in particular -- about making them see the cultural bias they are exposed to.  To that extent, we've created a search engine that is entirely dedicated to children, and, to answer your question, what we take pride in with this product is making a very sensitive judgment about censorship and editorialization.  Together with the French Government, we decided that we should allow ourselves to editorialize and censor some content based on a very well-identified list of topics, which is basically pornography, violence, drugs, and hatred in general.

Based on that, we're able to give teachers a tool with which they feel confident educating children to go and search on the Internet, to discover the entire diversity of content that is available, and yet to be confident that nothing inappropriate will show up.

And now we have teachers starting to develop pedagogical sequences where they put together a search on Google with a logged-in account, a search on Google without an account, and a search on Qwant Junior, and then have the children point out the differences and reflect on what they see.  So this is one of the initiatives we're taking, one of many of course, but being a search engine, we're very central to information access, and I think we should start from there.

   >> MODERATOR:  Please introduce yourself.

>> My name is Talimoris, from Ukraine.  Ukraine is in its sixth year of facing information warfare, so we have a little bit of experience with it, and I would definitely tell you that media literacy is very important but not sufficient.  There are many initiatives to educate and prevent, but there should probably also be a discussion at a high level -- at the level of policy -- that reunites opinion leaders to find comprehensive approaches to combating misinformation.  Today we have talked a lot about the targets of misinformation campaigns -- audiences, users -- and we have talked a lot about intermediaries like Google and the tech companies, but we haven't talked about those who initiate the campaigns.  For sure, with respect to freedom of information and media freedom, we should still talk about some media outlets which may be agents of foreign influence -- there are some media which launch informational attacks -- and we should talk about the politicians who order these campaigns, as we heard about Brazil.

And at the level of policy, there should be a strategy for combating misinformation, and there can be two approaches.  One is to follow the money: what sources were used to fund information campaigns against countries or citizens.  Second, we should start talking about the approach called naming and shaming, because among the actors there is definitely quality journalism, but there may also be websites where nobody knows who registered them or where they are registered, because you can easily hide the identity and ownership of any website, and it can be run from a foreign country.  These could be approaches for responding not just with media literacy but at the policy level, and this is, first of all, an issue for national governments, with respect to freedom of information and freedom of expression in general.  Thank you.

   >> MODERATOR:  Thank you.  One last presentation, because we have to close.

>> Thank you.  I'm François, from Europe.  I think in Europe we are looking for something like a magic pill, an easy solution: when we face a problem like fake news, we ask for, we beg for, we crave an easy solution.  Let me explain my point.

Most of us here are thinking about a technical answer to a human problem; a technical answer would be fine for robots or bots, but we are not robots -- we have emotions -- and I think why we have fake news now is somehow easily explainable.  I mean, if you remember, in 2003 the mainstream media were explaining to French people and people in Europe that it would be best to invade a country called Iraq; you know the rest of the story.

As we're made of flesh and bones, of course people will go crazy, there will be an insane environment; it makes sense.  I don't think we need a technical answer.

I'd like to underline what you just said about culture in Africa, about the importance of culture and togetherness.  I think that's what we have lost, and that's the main point in facing fake news: togetherness.  We lost togetherness, because within the same country people used to exchange: if I had a Lebanese or Syrian neighbor, I wouldn't mind, I would share with my neighbor -- not everything, but I would have a discussion and learn a different point of view.  That's what we have lost, I think, in Europe: togetherness.  Some of us don't want to live in the diversity of mixed cultures, you know.  We've lost it, and I think that's a very sad point.

And the last question makes me underline the fact that togetherness is an answer not only to fake news but also to education, and that the lack of togetherness is also a security matter, with all the major misunderstandings between players; togetherness is the great weakness in the European psyche.  To me, normally you should come out of school with a critical mind -- unless our educational system isn't relevant at all.  Are we going technical, just like robots?  Thank you.

   >> MODERATOR:  Okay.  I don't want to end on a negative note, so I would say that we have had a spirit of togetherness today at the table and around the floor, and I think that together, united, we can certainly fight and hopefully win this battle for trust and for truth, for a culture of peace.  So thank you all for your presentations and your participation, and see you online with the final report on this session.

>> Thank you, everyone, on behalf of France; we will continue to work on this issue and to try to fight against fake news and hate speech.  Thank you.