IGF 2018 - Day 3 - Salle VII - WS415 Countering Misinformation Online: Policies and Solutions

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  Thank you so much, everybody, for coming.  We are just waiting for one of our panelists, who I think is at the door, but we can start and she can join us. 

Thank you, once again.  My name is Asad Baig.  This session is being hosted by Media Matters for Democracy, Pakistan.  I am a journalist and the founder of Media Matters.  What we want to talk about today, in general, is how online misinformation is evolving, which policies and solutions are being put forward as good ideas and interventions to push back against it, and whether they are working. 

I won't go into detail right away, but we have four panelists today, including myself; I'm going to moderate the session.  Each panelist has about 10 to 15 minutes.  We had somebody from Facebook scheduled as well, but they couldn't participate, which means we have slightly more time for audience engagement, about 25 minutes for open discussion, more or less, depending on the panelists' presentations. 

Before I formally open the session, may I ask the panelists to introduce themselves quickly. 

>> ROSLYN MOORE:  I'm Roslyn Moore and I work at DW Akademie.  I'm responsible for developing our initiatives on media and information literacy, looking at both our internal and external approaches in this regard, and that's what I'm going to talk a bit about later. 

>> PADRAIG HUGHES:  My name is Padraig Hughes.  We litigate cases around freedom of expression, press freedom in particular.  This topic is relevant for us because we're involved in litigation relating to legislation that introduces fake news and false news offences all around the world. 

>>ASAD BAIG:  I've introduced myself.  One discussion I want to have today, and this is my own personal agenda if you want to call it that, is to show how this subject, misinformation, is not really something we can apply a binary solution to, a black and white model of "just take care of that."  That is the solution being offered in South Asian countries including Pakistan: get Facebook to remove the content, we're done, that's the end of it. 

You have misinformation online, so you create a cybercrimes law under which you can criminalize whatever is being said, you put people in jail, and that's that. 

This is the kind of binary model which we usually see in most South Asian countries and now even in the Western world.  My personal aim is to debunk that through today's discussion and show why that black and white view is not the solution. 

We have the fourth panelist here.  Thank you for joining us.  If you can very quickly introduce yourself and then we can move on to the presentations. 

>>  ASHARA:  Hi, everyone.  I'm representing an NGO working on policy, and we represent media outlets, including local language media; I work within the local language framework and we also work in English. 

>> MODERATOR:  We'll have the first presentation from Ashara, but before that I will talk briefly about misinformation.  The World Economic Forum has categorized it as one of the biggest threats to human society.  Why is that?  In 2016, the Defence Minister of Pakistan, whose ministry, by the way, also in some way oversees the armed forces of Pakistan, including weapons development and so on, tweeted in response to a supposedly fake statement attributed to a former Israeli minister.  His tweet was essentially a nuclear threat, and it was in response to a fake statement being circulated by a certain outlet called AWD News.  It was picked up by mainstream media without the common sense of getting it verified anywhere. 

After a couple of days, when the story had run its course, it emerged that this was essentially a fake tweet, a fake statement, and the Israeli ministry confirmed that the statement was fabricated, that it never happened. 

This is the kind of extent we're talking about: we're talking about nuclear threats.  Later, there was a news report in October of 2018 that that news outlet was part of a larger campaign being run on Facebook, and most of that campaign could allegedly be linked to sources in Iran. 

So these are the kinds of threats we are talking about.  When we talk about misinformation, this is what we are looking at. 

Similarly, in India there are now many cases of lynching which can be linked to rumors, or disinformation as we call it. 

Recently in Pakistan there was a huge protest; I'm sure many of you must have heard about it.  It was in response to a certain decision which came out of the Supreme Court, and during it there were many, many documented cases of misinformation directed at the people who were trying to contain it, at law enforcement agencies, with calls for lynching in public, calls for hanging in public and so on.  And all of this was based on information which was completely untrue, which was fake. 

What we are talking about is recognizing disinformation.  It's not just about a tweet.  It's not just about a poster.  It's about using it as a weapon, especially when it has a religious and political undertone. 

We've done some research around the elections, carried out before the election, and I will come back to that in more detail later in my presentation, so I'm going to skip it for now. 

Now the newest frontier we're looking at is artificially generated images, created through algorithms and applications.  There's this new thing we recently got to know about, the GAN, which apparently works like this: you feed it pictures, it learns your face, and it creates a face picture which is none of those pictures; it's a new picture, a total fake.  And similarly it's creating videos.  Fake videos of celebrities online have now become a big thing, and these videos never happened, of course, but if you look at them you will see how real they appear. 

And then there are ways to push back on this.  People come up with their own algorithms to push back and identify these fakes.  If you work on AI you would know how easy this arms race is: in the fake videos the people don't blink, so detectors look for blinking, but then it's a matter of adding one line to make them blink.  It's that simple when we talk about artificially created fake videos and images. 
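For readers unfamiliar with the technique referred to above, almost certainly a GAN, a generative adversarial network, the following is a minimal sketch of the adversarial training loop on toy one-dimensional data.  It is an illustration of the idea only, not any production face or video generator; it assumes PyTorch is available, and every name and number in it is hypothetical.

```python
# Minimal GAN sketch on toy 1-D data (illustration only, not a face/video generator).
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=128):
    """'Real' data: samples from a Gaussian with mean 4.0 and std 1.5."""
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: random noise in, fake sample out.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: sample in, estimated probability of being real out.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(128, 8)).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(128, 8))
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# The generator's samples drift toward the "real" distribution it never sees directly.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

Scaled up to convolutional networks and image data, an adversarial loop of this kind is the basic idea behind generated photorealistic faces; the sketch is only meant to show the mechanism the speaker gestures at, and why detection cues such as missing blinks are easy for the generating side to patch.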

For me as a journalist, this also has a journalism angle.  Most of these videos and images are being circulated through fake websites created in the name of famous journalists, who have absolutely nothing to do with them.  Not only are these websites making money out of advertising, they're using these fake videos as clickbait.  You see a journalist who hosts a prime time talk show apparently saying that a porn video of person XYZ has been released.  People are bound to click on that, because it's coming from a face they're very accustomed to. 

The only problem is that this whole content-generation model is fake; it has nothing to do with the person whose picture is being used. 

On fact checking, sorry, I'm taking a lot of time, but just to wrap up: people like myself, journalists, tend to put their faith in the mainstream media outlets, but it has turned out that misinformation or disinformation like this is becoming profitable, because many times when something like this pops up it gets reported in the media, primarily because it generates traffic and revenue for them. 

Two days later it's easy for them to come out and say "it was fake and I'm sorry about that", and most of them don't even say that.  In the last few years we've seen many fakes, of people impersonating other people, getting reported by mainstream channels and newspapers.  Some of them are still online. 

So that's the problem if people like me put their faith in those organizations for fact checking. 

Just one last thing.  As a response to this we're seeing new models and new initiatives coming up.  How useful are those models?  We've seen a law in Germany, and you know how useful that is.  We've seen similar models getting started in Pakistan and other South Asian countries as well, where they're talking about cross-regulation: you regulate the social media platforms alongside the mainstream media.  I don't even know how that's possible; how can you compare the two?  Yet apparently governments are thinking about it.  In Pakistan we have a government-funded fact-checking initiative.  Now governments are stepping into fact checking, so who in the government gets to define what is "fake news", as they call it, or what is misinformation?  Does anything that is politically critical of them become misinformation and disinformation? 

Pretty much like CNN is fake news for Trump.  These are the kinds of responses we have seen.  How useful are they?  This is something we are going to talk about. 

>>  ASHARA: Thank you very much for giving me this opportunity.  I know it was last minute and I'm grateful to you for including me. 

I'm coming from a crisis situation right now.  To understand how and why fake news has been such a persistent problem in Sri Lanka, I think it's necessary to understand a little bit about the political context in our country.  Right now in Sri Lanka there is a motion signed by 122 MPs, after days of argument, and today Parliament convened after about 20 days. 

So right now a lot of misinformation is being disseminated on Facebook, Twitter and a lot of other social media apps, at this very moment as I'm talking to you. 

There are huge campaigns run by the previous regime, and disinformation is being disseminated to the public by public figures, so people believe it, and that's the risk, because the misinformation being circulated seems true.  That's the situation in Sri Lanka right now, and that's the situation I'm speaking from. 

First of all, before I start, I would like to set out the context of Sri Lanka in two ways.  One, it is a divided country right now.  When I call it a divided country, I mean that Sri Lanka is recovering from nearly three decades of civil war, divided along lines of language, land, education and unemployment. 

Social media put all of this information out there.  The second thing is what I call a democratic deficit.  To give an example, I come back to a survey by the Centre for Alternatives in 2016 on democratic processes in Sri Lanka: only 13.2% of Sri Lankans have trust in Parliament, and 36.5% of Sri Lankans have no trust in Parliament at all.  They don't trust political parties. 

In March 2015, 16.2% indicated that they trusted elections, but by 2016 that had gone down.  If I asked today, I don't know how people would respond; with everything that is going on, it has changed drastically within two years. 

Coming back to the 2015 presidential elections: the President asked the press to accompany him to the polling station, but the candidate decided to go himself instead.  There were also ads in the newspapers making promises for the 2015 presidential election, and in the election itself more than 150 were used to run the campaign.  There is also a report by the Centre for Alternatives which notes a case in which the Sri Lanka Army was found guilty, and four of the defendants were released due to lack of evidence. 

They politicized the news as a campaign platform for the President.  This is a reflection of military impunity, and it is interesting that this report also mentioned a figure of 20% of Sri Lankans. 

There was also a survey conducted in 2015 in the Western Province, the part of the country most connected to social media.  It took its sample online, and 51% of people said they discuss what they see with friends and family, which speaks to a relatively high level of literacy and engagement with the media. 

Fake news and misinformation spread in a systematic way.  There have also been interesting incidents in Sri Lanka involving newspapers directly.  One recent example concerned blood donation: a daily paper, which I'm not going to name, reported that members of a higher caste were refusing to donate blood for fear it would be given to patients of a lower caste.  That kind of story was circulating, and it was deeply misinformative. 

There are many other examples, and more keep coming up in Sri Lanka with the situation ongoing. 

I would like to conclude with this: as a civic team active in Sri Lanka, the question for us is how do we counter this fake news?  A lot of media stations, a lot of major news outlets in Sri Lanka, are owned by politicians.  So how do we go about countering this spread of fake news and misinformation? 

We are involved in combating this misinformation through quality reporting on our websites.  We always check before we report, but I can guarantee that the habit of fact checking in Sri Lanka is very weak.  Media literacy is low, so nobody is going to fact check what they see on Facebook, and that plays a role in misinformation. 

I don't know how many of you come from a similar background, but the customs and the culture we live in make this very difficult to handle sometimes.  We need an effective mechanism to counter misinformation before it becomes dangerous, and we need to think about what that mechanism should look like in a country like ours, which is multiethnic, with many active religions and many languages.  English is not the major language; there are several local languages. 

So the question is how to address it, and how to address it effectively.  Thank you very much.  We can also talk later. 

>>ASAD BAIG:  Thank you.  The reason I wanted to talk about local case studies is that the solutions we tend to be big fans of, the fact-checking solutions, tend to come from the global north, and they always forget the nuances of the global south when these solutions are being developed.  That's why we wanted to bring in a friend from Sri Lanka as well.  I'm quickly going to share some findings from Pakistan, and then I'm very excited to move on to the other panelists. 

We recently had elections, sorry, recently had elections this year.  Pakistan is a country of 200 million, with around 35 to 40% of people connected to the Internet, about 33 million Facebook accounts, and roughly half that number also on Twitter. 

Before the elections, we saw a lot of political activity on Twitter and we wanted to see how much of that activity was organic, so to speak.  There were allegations of engineered hashtags, calling for action targeting journalists, that were getting really, really popular.  Often they would gather many thousands of tweets in just a couple of days, and there was a lot of activity happening online. 

So what we did was start a research project called Trends Monitor.  The idea was that, using the APIs, we captured some of the data, and through an algorithm we tried to find out whether these hashtags were being generated by fake accounts, which in the research we call human bots.  Unlike spamming bots or marketing bots, these fake accounts are manned by humans.  Often one person would be manning 10, 12, 15, 20 accounts, or even more than that, from one laptop.  It's that easy to do. 

Often the accounts are run by teams in other cities.  One account was being used to send out tweets at a rate of 47 tweets in one minute, which is not humanly possible, of course. 

In two months we captured about 225 political hashtags, amounting to about 68 million tweets.  I won't go into the details of the findings; it's an ongoing initiative and they are on the website.  The crux of what I want to share is that there are many fake accounts, and we found that most of these political hashtags had a huge amount of activity coming from these human bots.  Again, I'm not going to go into detail. 
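To make the notion of a human bot concrete in code, here is a minimal sketch of a rate-based check in the spirit of the 47-tweets-in-one-minute example above.  It is not the Trends Monitor methodology, which the session does not detail; the accounts, timestamps and the 10-tweets-per-minute threshold below are all hypothetical.

```python
# Flag accounts whose peak per-minute tweet rate is implausible for a human.
# Illustrative only: made-up handles, timestamps and threshold.
from collections import defaultdict
from datetime import datetime

# Hypothetical records of (account_handle, tweet_timestamp).
tweets = [
    ("@example_bot", datetime(2018, 7, 20, 10, 0, s)) for s in range(0, 60, 2)    # 30 tweets in one minute
] + [
    ("@example_user", datetime(2018, 7, 20, 10, m, 0)) for m in range(0, 30, 10)  # 3 tweets in 30 minutes
]

MAX_HUMAN_TWEETS_PER_MINUTE = 10  # assumed threshold, not an empirical figure

def peak_per_minute(timestamps):
    """Highest number of tweets posted within any single clock minute."""
    buckets = defaultdict(int)
    for t in timestamps:
        buckets[t.replace(second=0, microsecond=0)] += 1
    return max(buckets.values())

by_account = defaultdict(list)
for handle, ts in tweets:
    by_account[handle].append(ts)

for handle, stamps in by_account.items():
    peak = peak_per_minute(stamps)
    label = "suspicious" if peak > MAX_HUMAN_TWEETS_PER_MINUTE else "ok"
    print(f"{handle}: peak {peak} tweets/minute -> {label}")
```

A real pipeline would of course combine many more signals (account age, coordination across accounts, identical text pushed into a hashtag), but the rate check illustrates the kind of behavioural fingerprint described here.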

In almost all of the political hashtags, these accounts were directly involved in spreading very intelligently made misinformation and disinformation content.  This content was in Urdu, the local language in Pakistan, and most of the time it came in the form of images, so there's no way of Googling it, no way of text-searching it online. 

Then we found out that there's a whole other dimension to the spread of this information that we were missing, and that's WhatsApp, because there's practically no way of monitoring the content going into private groups on WhatsApp.  So that's a channel of spread we missed. 

We were only capturing the popular public medium, which is Twitter, and even there we found that instances of misinformation and disinformation were rampant; they were getting out of control. 

There's one campaign in particular I want to talk about.  Pakistan is a very politically polarised country, as some of you will know.  There was a campaign going around asking voters to cross out the name of the candidate they don't like in addition to stamping their choice.  We have paper ballots, and you have to stamp the candidate you're voting for. 

So this campaign was asking you to stamp the person you were voting for and cross the name of the person you don't like.  That cancels the vote and the vote is not counted in the electoral system. 

This is something we found going around on Facebook and mainly on Twitter, and I'm quite sure it was also going around in WhatsApp groups, although we have no way of verifying that.  There were a lot of politically based WhatsApp groups. 

Often this was in Sindhi, which is a local language, and in other local languages.  After the voting was done, the overall data showed that the number of rejected votes was higher than it was last time, in 2013.  We don't have any evidence linking the two; one could argue that the campaign could have had an influence, but we have no evidence of that, which is something we need to work on. 

That's a little bit of an idea of what we saw in Pakistan through our research.  Now, on to the so-called misinformation-debunking solutions: we don't know what works, but we do know what doesn't work. 

What doesn't work in Pakistan is, again, the global north idea of rating outlets.  It's easy to rate outlets on Facebook as credible or non-credible.  But the fact of the matter is that when misinformation starts spreading, these so-called quality journalism content creators are often at the forefront of making use of it.  They often promote it as much as any of the fake accounts, and they do more damage, because when they report on something they have absolutely not verified, they have a much better chance of reaching a larger audience, as people in Pakistan, and I'm sure in other South Asian countries, have come to know on Twitter and Facebook. 

So that idea of rating outlets is something that we need to reset. 

Also, it does nothing for content being circulated in WhatsApp groups.  You have absolutely no way of rating that. 

Another so-called solution we're seeing is that we simply outsource censorship of this content to the platforms.  We write to Facebook, we write to Twitter, asking them to take down a few thousand accounts and to keep an active eye out for these kinds of cases.  But to me, and Roslyn pointed this out this morning and summed it up very nicely, what we're doing is outsourcing censorship to a private American company.  That's what we're doing; or rather, it's not us, it's the governments of our countries, that's what they're doing. 

Frankly speaking, I'm not comfortable with that idea.  Also, it doesn't work.  Let's face it: very recently they took down the account of one of the leaders of the protest in Pakistan I was talking about, someone leading a very violent protest.  That one account was taken down.  We monitored the hashtag campaign they were running, and there were at least 1,000 fake accounts running simultaneously, promoting the same content that one account was promoting.  The account gets taken down and nothing happens to the campaign.  There's no way of stopping it like that.  So we know that doesn't work. 

Then there's Facebook-sponsored fact checking, or any fact checking sponsored by a private company.  In Pakistan, for instance, Facebook has given the task of fact checking to ASB, which is a credible news agency.  They keep an eye out for misinformation content; Facebook shares content with them, and people can also tag them on Twitter and on Facebook pages.  But what they do is take the content and produce the fact-checking material in English.  So they're clearly killing the spirit of the fact-checking idea. 

If you're addressing a post written in Urdu or another local language and you're responding in English, a language which 90% of the people in Pakistan cannot read, then you're perhaps not doing enough. 

Like I said, these are the things we clearly know don't work.  As a journalist I have to put my faith in two things.  One is credible fact-checking content that is in Urdu, and that is somehow also profitable for the media houses and the existing newsrooms to create, so that there is some economy around fact checking. 

The second thing is media and information literacy.  These are the two things we don't yet know whether they work, because we haven't really tried them.  Information campaigns are now starting out in Pakistan, and outlets like Dawn are stepping into fact checking, but even they are doing it in English.  Nobody is doing it in Urdu. 

I will stop there.  It was just to give a basic picture of how the misinformation content is spreading in Pakistan which is I'm sure pretty much the same as in Sri Lanka. 

I will move to Roslyn who is from DW Akademie which is one of those institutions running a very effective media and information literacy campaign.  How has that panned out and what are the lessons learned from that? 

>> ROSLYN MOORE:  Just for those of you who don't know, DW Akademie is the media development arm of Deutsche Welle, the German international public broadcaster.  DW Akademie started working on media and information literacy relatively recently; we've been doing projects for about five years.  Given that we only started to hear about this topic at a global level recently, that puts us a little bit ahead of the game, but the actual field of media education and media literacy has been around since the 1970s; it's not new. 

Media and information literacy is the umbrella term that UNESCO developed, and it encompasses digital literacy, news literacy, all the literacies we're hearing about, under one umbrella. 

What's important about it is this: we talk about reading, writing and counting as the basic literacies you learn in school and that you need in order to be an engaged citizen.  Now it's felt that, in the increasingly digitized world we live in, it's also important to have these skills if you want to be an engaged, active citizen. 

For us at the Akademie, we embraced the UNESCO definition, and we understand media and information literacy as developing five core competencies in an individual or a group.  The first is access, in terms of access to information and freedom of expression: that they have access to information and know how to access it responsibly.  The second is that they are able to analyze that information: that they can understand media systems, recognize disinformation, understand and recognize propaganda, and also recognize satire. 

This is important, and it's not something that is taught in education systems everywhere.  The third is that they can create their own media and voice their opinion.  The fourth is that they can reflect, knowing and understanding their own rights and also their obligations, and can recognize hate speech, bullying and extortion. 

Ultimately, the fifth is that they act, that they can fight for their rights.  These are the five competencies we're working on in the media and information literacy environment. 

Themes or issues like disinformation can then fit into this.  What happens is you look at the country or area you're working in and ask what the issues are.  Obviously we're talking about Asia here, but disinformation is a problem everywhere.  At DW Akademie we currently have 20 media literacy projects in Africa, Asia and Latin America.  We don't have one core target group, but it's easy for us, and for most people, to reach young people, because you can reach them in an institutional setting and they quite like the idea of creating media, so there's definitely a hook you can get young people on.  And in most of the countries we work in, young people are the majority of the population. 

In Cambodia and Palestine we managed to lobby the ministries of education and get media and information literacy into the curriculums, so it's now in the curriculums of those countries.  That was before the recent changes in Cambodia, which have seen a crackdown on a lot of freedom of expression and access to information.  But it is in the curriculum of the country. 

At a more international level, we set up a media and information literacy expert network, and the point of the network is to bring together experts from across the world working with different approaches.  Some of them focus on hate speech; some focus on propaganda and misinformation, in Georgia for example.  We also work with a Dutch NGO that set up, I don't know if you've heard of it, the Get Bad News app that went viral about six months ago.  They have an alternative way of getting people to learn about the process of disinformation, so that they can recognize it and see the damage it can do. 

Then we also work with young journalists, because it's assumed that journalists are media and information literate, that it is part of the professionalization of the profession, but that is not always the case in many of the countries where we work. 

At a European level, media and information literacy has been put forward by the European Commission as an area to develop.  There was a report on disinformation released in February this year, and the idea is to strengthen the abilities of citizens before looking at regulation. 

Where we have seen regulation, in Kenya, for example, and also in Cambodia, and I'm not sure what other regulation you could point to, in my experience it has normally led to a crackdown on freedom of expression and a crackdown on journalists.  So those are the kinds of challenges.  Media and information literacy is definitely not a silver bullet, and it's not going to solve the problems Asad was introducing at the beginning, with these videos generated by computer programs. 

No matter how skilled you are as an individual, that remains, I guess, a challenge.  But you can give people basic skills, and good enough critical analysis skills, that they know what to look out for. 

>> MODERATOR:  Thank you.  I was hoping you could perhaps talk a little about the responses, especially the legislation and policies around fake news, how useful they are, and what the chances are of their being used as tools to target journalists. 

>> PADRAIG HUGHES:  Our angle on this issue, at the moment, is challenging legislation that introduces provisions seeking to regulate fake news, false news and a range of other issues relating to the cyber world.  In the last number of years you see a proliferation of legislation and restrictions, in the form of what are referred to as prevention of electronic crimes or cybercrime acts, which from our perspective contain some very problematic provisions that will have a serious impact on freedom of expression. 

Our approach is that there are certain well-established, fundamental freedom of expression principles in the human rights world which need to be complied with, and where legislation fails to do so, those legislative provisions should be struck down. 

The starting point for us, in a way, when this type of legislation is introduced, is that we see an almost malign intent on the part of governments to regulate, to suppress, to manipulate, and to prevent people from critiquing or exposing corruption or the activities of the state.  Our involvement is to engage with local lawyers and to assist them in bringing challenges to this type of legislation. 

A lot of this legislation across jurisdictions contains very similar, strikingly similar, provisions: one State introduces it, and other States then introduce legislation framed in almost exactly the same terms.  That is a problem, because the legislation we've been challenging recently is usually vague and overbroad, in the sense that it doesn't comply with well-established international freedom of expression standards. 

There's what is called the three-part test when it comes to freedom of expression: any restriction on freedom of expression must be provided by law, must pursue a legitimate aim, and must be necessary in a democratic society.  All of those limbs of the three-part test must be met and complied with, and when you apply that test to this legislation, it falls down on almost every level. 

We have recently been working on a challenge to the law in Tanzania; today it's in court before the East African Court of Justice, where sections 50 and 54 of that Act are being challenged.  Those sections talk about false news, and about false news having the effect of upsetting or offending people.  But one of the fundamental principles of international freedom of expression law is that it protects speech that shocks or offends.  This is problematic, and it is being implemented in order to suppress journalists who are seeking to talk about corruption and incompetence among people connected to the State. 

In Kenya there's an ongoing challenge, which is going to the Supreme Court, against the cybercrimes act; there's a case before the Court of Appeal in Nigeria as well; and in Pakistan the Prevention of Electronic Crimes Act is currently being challenged before the High Court in Islamabad. 

We've been working alongside those cases and we can see the commonalities.  We've been working with the lawyers to construct arguments which challenge the vagueness and arbitrariness of those laws, but also challenge the fact that those legislative provisions give a lot of leeway to the Government in how to implement them. 

So there is a discretion that the Government has in the way it uses those laws which, we say, is contrary to international law. 

The other thing we notice with these kinds of legislative provisions is that there are a lot of similarities to legislation introduced during the war on terror.  From 2001 onwards, national security laws were being introduced that, again, were subject to arbitrary interpretation by governments in a way that allowed them to circumvent what had been well-established human rights protections. 

So, for example, in relation to the war on terror, you had legislation that allowed for extended periods of detention without access to a lawyer.  You don't have the same type of extreme provision within this kind of cybercrime legislation, but you do have loose language that suggests it's only a matter of time before governments begin to copy each other and implement these provisions in a way that will have a serious impact on human rights defenders, on journalists and so on. 

In Kenya, for example, we're representing a journalist who has already been arrested twice and detained once in relation to tweets he has put out, and he has been charged under their cybercrime laws.  The precise allegation is as yet unknown; that is one of the limbs of the challenge we're bringing.  But he has been picked up for disseminating what is claimed to be false news under the cybercrimes act. 

Among the freedom of expression legal fraternity there is an instinctive reaction every time legislation is introduced that regulates speech.  But this is a difficult one, I think, for freedom of expression lawyers, because the understanding of freedom of expression, and you will see this in the jurisprudence from courts around the world, is that if you have this kind of problematic dissemination going on, you have to wonder whether that in itself undermines democracy. 

So the instinctive reaction is to challenge these types of laws, and there is some really interesting commentary from the UN special rapporteurs on freedom of expression about the negative impact these kinds of laws will have on freedom of expression, but the problem is they don't necessarily offer very concrete solutions, just some suggestions, like Asad mentioned earlier on, around fact checking and that kind of approach. 

There's also a very American reaction to this kind of law from the legal fraternity, which is to counter bad speech with more speech.  This idea of maximizing speech may or may not work, but studies suggest that fake speech, and hate speech in certain instances, can be more effective in the way it infiltrates society and the way it is shared among communities. 

So it's difficult to know precisely how to counteract it but one of the concerns we have is that regulation will not necessarily be the appropriate avenue to go down primarily because it allows for potential abuse by governments and that abuse may ultimately result in the suppression of critical thinking, the suppression of investigative journalism, for example, which is one of the key areas we litigate on and work in. 

So, as with the other speakers, I think it's difficult to suggest a solution, but you can certainly be vigilant about potentially dangerous outcomes in the effort to find a remedy or solution to this ongoing fake news, false news problem. 

The introduction of these kinds of laws is extremely problematic. 

>> MODERATOR:  Thank you.  We'll open the floor for open discussion, feedback, comments, questions.  Anything you want to offer. 

>> AUDIENCE:  Hi.  On what was said about the Reporters Without Borders initiative, is it something that can ‑‑

>> MODERATOR:  I think it depends.  We've learned the hard way that there is no one-size-fits-all solution to this problem.  And, like I said, there might be a tendency in the Asian countries and elsewhere to believe this might be a possibility, a potential solution, but there really isn't much reason to believe that, because, like Padraig said, we've seen these solutions fail. 

So unless there is a way to, how should I say, balance it in one way or another, it's very difficult to say.  But one thing we do know is that most governments in the global south have a very elaborate track record of using policies meant for protection in their own favor, for their own political point-scoring. 

There is a whole irony here, and I'm going to take a minute to share it with you.  There's an irony around what we call the cybercrimes law in Pakistan.  The government which drafted this bill, this legislation, was facing a protest by another party, which staged a sit-in in front of Parliament House for about 100 days and made a lot of noise online, on Twitter and so on. 

The government at that time drafted this law with that protest in mind, and the irony is that after a couple of years the same law was basically used to arrest the people who had drafted it. 

Another irony is that the people who were opposing the law are now using the same gimmicks, with the same objectives in mind, using the same law for exactly what they were protesting against and criticizing the other government for. 

These are the things that at least I have to keep in mind whenever I think about solutions. 

>> AUDIENCE:  Hi.  In our region, which had the so-called socialism, we are afraid of laws, because every law has been written to support those in power.  In Venezuela they have a hate speech law that actually puts people in jail for up to 20 years if, according to the judges, they engage in hate speech because they criticize a government figure.  So you cannot hate them because they are corrupt; you cannot hate them because they are violating human rights.  If you hate them for that and criticize them, you are going to go to jail because you engaged in hate speech. 

We're really afraid of that kind of law.  I used to oppose any kind of control of the Internet.  We have heard from South Asia how misinformation can be used to incite some kind of violence.  Coming from South America, we don't have that kind of problem.  The problem is that the ones who spread misinformation are the governments, and then the governments blame the press, and then they say they have to counter misinformation and fake news and pass various laws to control the very thing that they provoked, that they did. 

So I don't know what the middle ground is here.  We don't believe in the laws, at least, because we were still able to denounce corruption through the Internet; it was like the last frontier for investigative journalists in Ecuador.  The Internet was the last frontier.  We even had problems on the Internet, because the government copyrighted the logo on public documents, and then you could not publish those public documents because the logo was a trademark.  Even the American sites would take them down so as not to infringe copyright law. 

And if you published the same document without the logo, it was called fake news.  So I don't have an answer for that, but I'm really scared about artificial intelligence, because we are the ones who don't have the resources; they are the ones who have the resources, even to make a fake video.  I don't have answers either.  I'm afraid of that kind of thing. 

Now we are seeing something, not quite like Asia, but we are seeing religious extremists claiming that family values are being undermined because there were some messages that were inclusive of LGBT people, and claiming that trying to teach sexual education to kids was an attempt to turn every kid.  They are quite tricky.  It's not political, it has another source, and it spread mostly not through Twitter or Facebook but through WhatsApp. 

>> AUDIENCE:  What I wanted to say, I will put as a question: should we be preparing for regulation?  Because the way things are going, I think one way or another, at different levels, we are eventually going to have regulation.  The previous speaker was talking about her country; in my country, Nigeria, we have a hate speech bill that has been introduced into the house by the executive, and when the executive introduces a bill it's a very dangerous thing.  It's a hate speech bill that carries the death penalty.  She was talking about 20 years of jail; there's a death sentence in this bill for hate speech. 

For hate speech or fake news, the argument usually points to the things happening around clashes, tribal and regional conflicts, ethnic wars, which are being blamed on it, and I can see that the trend is not unique to Nigeria or Africa.  As the panelists have said, it's happening all over the world.  Somewhere down the line I see regulation coming, and there's a strong argument for it, so I'm asking: should we as stakeholders be preparing for this kind of regulation, and what's the best way to engage when these regulations come? 

>> AUDIENCE:  Hi, Lucas from Brazil, civil society.  Regulation and its relationship with WhatsApp is still a big case study for countries to look at.  As civil society and a member of academia, it was hard to analyze the reach of fake news through WhatsApp.  I have colleagues, political analysts, who actually infiltrated some of the groups that were spreading fake news, but there are so many ethical issues with reporting on that, particularly because there's no consent from all the members of those groups. 

So you have the information, you have frequent users that have been disseminating fake news, but you can't really publish a report with their names and numbers.  Still, you see some of the methods they use to spread their news.  So I would like to share that frustration: it's really hard even to report on what's going on through WhatsApp.  But there are ways in which you can look into the problem indirectly, like the way Facebook tried to deal with the situation, shutting down accounts that were spreading too much content without actually analyzing or checking the content of the accounts themselves. 

Also, with regard to whether or not to regulate, I know it depends on each country's legal tradition and judicial system, but at least for Brazil I think an ideal mix would be a balance between judicial oversight, civil society and some of our partners among private entities, because there's no way of just entrusting the private system, the service providers, to do this for themselves, or letting it all rest in judges' hands, or leaving it to politicians to make laws regarding hate speech.  There needs to be some sort of combination, whether through participation, lobbying strategies aimed at lawmakers, or even direct contact between private entities and civil society.  Thank you. 

>> PADRAIG HUGHES:  Hate speech is recognized as one of the grounds on which restrictions can be imposed, and it can be appropriate to introduce legislation that prevents it.  But then the question becomes, first of all, whether you should criminalize it, and secondly, how has that legislation come into being?  Have stakeholders from various areas of expertise been involved in putting it together?  I think that's the essential question.  How specific is the legislation in terms of its content, such that it is not open to arbitrary application or abuse and so on? 

When you're talking about imposing the death penalty, that raises all kinds of human rights issues beyond the restriction on hate speech.  But if the starting point of the government, its intention in introducing the legislation, is to suppress free expression and so on, then it's going to be a bad piece of legislation. 

What you can do, and somebody here is working on challenging the cybercrimes law in Nigeria, is test the constitutionality of the provisions before the courts and see whether those courts take international legal standards into account, and whether they find the provisions consistent with the constitution. 

>> ROSLYN MOORE:  I was going to add to that on the question of whether we foresee more regulation.  Unfortunately, from the discussions I've been party to, yes.  I have already heard that this is something to expect, so the question becomes: as civil society, how do you come together in Nigeria to raise awareness and be part of advocacy and policy-making? 

>> AUDIENCE:  I would add that, looking across the measures that have been taken and the media laws and provisions countries have adopted over the years, what the studies of this show is that really strong coalitions of civil society and the different players in the system are the way to go.  It's much better to try to build a movement of concerned actors, not to try to do this as just the media or just one civil society organization, but to build broader groups to generate conversation and debate about provisions that I think really are coming, and that many people feel are needed in some form. 

But the real thing is to have people like yourselves in the room, along with other players, and I think Latin America is a good example of where that has happened.  The use of the different multilateral institutions, the inter-American human rights system, has been an effective instrument for generating those debates.  Africa has used this to some extent but could do more on that front.  The scholarship on this, the different things that have been written about media reform laws, really comes to the conclusion that it requires a broad coalition, including groups that may not naturally come together: child protection groups, people with diverse interests, LGBT groups, who may eventually come to see that we need freedom of expression, freedom of assembly, and an open, very high quality press in order for all of us to survive in our work. 

So we need to get active in bringing these groups together and creating these coalitions and getting movements because we can't do it alone and in silos. 

>> ASHARA:  I agree with you, and with the point made from Nigeria.  From the perspective of what happened in the recent past in Sri Lanka, I have the same fear; I think it is a reality that is coming.  There was violence that resembled, in some ways, what happened in Myanmar: violence projected online which was directed into offline spaces and damaged a lot of people.  In that situation there was also a seven-day blockage of social media apps by the government. 

The government said it was the best decision to take while the violence was going on.  So there's a risk that the government keeps reaching for the easiest option, which is control.  In our country there is even a fear that the government will consult China on controlling social media, so we fear that regulation is coming.  And for Sri Lanka, the problem we have is the nationalistic approaches and attitudes: people tend to believe disinformation as true information because they are biased. 

So there's a problem with saying "this is fake and this is misinformation."  There's a problem like that in Sri Lanka, because around half of the people in the country want it to be Sinhala only.  They take the misinformation as true information because it works to their own advantage.  So for civil society and for the other groups it's very hard to fight against.  The argument gets dismissed with "you are funded from outside, you're coming from NGOs," that kind of attitude, because the nationalistic approach has gone deep into people.  So attitudinal change is a must, because otherwise, however many counter-programs you initiate to eradicate disinformation or misinformation, people will want to believe it as true information; they tend to see the disinformation as the true information. 

We have a critical issue in Sri Lanka around sterilization: there is a rumor going around about sterilization being spread to stop the population of the country.  With these kinds of ideas, people don't want to fact check, to go and see what is accurate and what is not; they just believe it.  It has created violence, and with State media and the press, marginalized communities are always being further marginalized by the State and the press, and that is used to someone's advantage. 

I see a risk of regulation coming to Sri Lanka because of those attitudes, because, as I told you before, people tend to treat misinformation as true information. 

>> MODERATOR:  My colleague mentioned oversight in one way or another.  I'm not talking specifically about judicial oversight, but about civil society oversight.  Mark mentioned the need for strong coalitions of like-minded groups, but also of groups who are not present in this room, who are not at the table when discussions like this are happening.  This is something I've realized as well: most of the conversations we have are in isolation.  Most of the ideas we discuss are with friends, with people who are already converted. 

We're not trying to reach out to the people who need to be in our discussions.  The reason we are, like you said, so scared of regulation, of new regulations coming on a daily basis, is that we are living in societies which have been oppressed for nearly half a century now, perhaps more.  We are living in communities where people think that regulation is good for them, that State-imposed laws, which often serve to curtail free expression and other fundamental rights, are in fact there for their own protection.  This is what people believe. 

When these regulations come, for instance the law that Padraig was talking about, you find only 10 or 15 or 20 people standing with us in protest against them.  This narrative, this dialogue, is not something that the majority of our people, at least, subscribe to.  The idea of the State giving them protection is so enshrined in their basic philosophy, in their way of life, that even if there were a law making it mandatory to install a government camera in their houses, they would think this is fine, this is for our protection.  This is what we're dealing with. 

One example, and this might be an interesting story for all of you here, one example from my country is that Pakistan was recently put on the FATF list, and guidelines were given about doing something about money laundering and similar issues.  That, by the way, specifically related to money laundering done through not-for-profit groups. 

Generally when I say not-for-profit groups you might think of NGOs working for civil liberties and rights and so on.  But in Pakistan, many religious groups which potentially have connections with terrorist groups are also registered as not-for-profits.  I'm not sure what's happening to them, but I do know that NGOs like ours are getting the brunt of it. 

These are the kinds of things I'm talking about that need civil society oversight, or oversight in general, of how regulations are being implemented.  I think I'm talking too much, so I will stop here. 

>> AUDIENCE:  Hello.  I just wanted to contribute to this very interesting conversation.  I think we have to consider several processes we need to work on if we are trying to respond to the spread of disinformation and its negative impact. 

One, we need to work on the technical aspect.  We need to understand how social media AI and algorithms work, how the content is produced and distributed, and what the technical aspects are that actually enable this disinformation to spread. 

Another aspect we need to work on is working with people, and this is where MIL and similar courses are quite interesting to me; I think this is one way forward.  I know it's not a panacea and it requires several years of engagement, but it's going to be important, especially if we can get various stakeholders and various groups in our societies to embrace this concept: not to say that the Internet is all bad, but to reclaim it as our means to be creative, our means to be free, and our means to be responsible as well. 

Also, I think MIL should pay more attention to journalists, because they need to be capacitated and their skills need to be upgraded, including in terms of the professional and ethical standards they should follow, because this is one of the ways to fight disinformation. 

Legislation, of course, is also one of the ways, and I'm not going to repeat everything that has been said so far; I think that's pretty clear.  What we also need to consider, and this is super complex, is the content itself: where does content stop being professional and ethical, and how does it slip into hate speech or disinformation?  I think that's going to be a really important and quite complex issue that we need to discuss, example by example, as with the cases you've been showing and demonstrating to us.  It shows that we really need to understand the context the information is coming from, how it is created, and what the final aim of the information is, what its purpose is.  So that's it from me. 

>> AUDIENCE:  Hi.  Right now we are talking about how to counter disinformation and fake news online, but my question is about the other side: how governments start framing things, claiming there is a lot of fake news online, labeling social media content as fake news while government people are the ones lying to make people think fake news is everywhere. 

I come from Myanmar.  Our government is also trying to regulate on its own: it has already introduced social media monitoring teams so it can monitor fake news and hate speech online, and it is going to punish people on its own.  In that sense we are really scared, and we're really worried about how the government will frame disinformation.  Maybe they will overreact, and they may also frame ordinary social media content as disinformation and hate speech rather than dealing with the reality. 

>> MODERATOR:  We should wrap up, and thank you so much.  Honestly, I was thinking that because it's the last session and most of us are very tired at this point, we might not have a lot of people, but thank you so much for participating and especially for the feedback and your recommendations. 

Like I said, out of my personal interest I wanted to see which solutions have not worked, which binary models we, and most governments, pin our hopes on, and how they are failing.  I think we have good information on that front. 

With that, thank you so much, and hopefully we'll see each other again very soon, even before the next IGF.  One thing I would like to request of the journalists here, of the people associated with the media: something we have learned the hard way in Pakistan is that unless there is some personal commitment to fact checking on social media platforms, it doesn't work. 

This is a request I usually make to all my colleagues, the people I'm associated with back home: if you have the resources, try to fact check whatever misinformation or disinformation content you come across.  This can literally save lives; we've seen it in Pakistan, and in India, I'm sure.  It is extremely important for us.  So if you're a journalist and you come across anything that you have more information on, that you can contribute to, please do that.  Thank you.