IGF 2018 - Day 2 - Salle XII - OF3 Combating Fake News and Dangerous Content in the Digital Age

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  Good morning, ladies and gentlemen.  Welcome to the room.  We would like to request you to kindly be seated now as we may have to proceed with our open forum for today.  Thank you. 

Thank you.  We would like to welcome you to the open forum on the people's internet and the effects of fake news, hoaxes and disinformation.  As I mentioned, I'll be moderating the session.  I am from the Indonesian internet governance/youth IGF.  Thank you for your attendance and interest in this forum. 

As you know, the current trend of distribution of fake news and disinformation is massive nowadays. 

The distribution of fake news, disinformation, and misinformation is affecting our livelihood.  The pressing matter now is not only to prevent and mitigate it but also to put a solution to it, to ensure the good and proper development of our internet for the betterment of our society. 

In Indonesia, the government has several approaches in combating fake news.  However, we believe that this will not stop there and shall be improved further.  As such, we are here to discuss and share the best approaches from our expert panelists in our attempt to counter fake news on the internet. 

However, prior to starting our panels today, we would like to share a message from David Kaye, who is not able to be here and will be replaced by his colleague. 

Without further ado, we will have an opening statement in our forum today.  The floor is yours, thank you. 

>> ANANG LATIF:  Good morning.  Bonjour.  Distinguished ladies and gentlemen, speakers, participants.

This is about Indonesia.  Since 2015, we have tried to resolve connectivity issues.  At that time, only 78 percent of the cities and districts, numbering more than 500, were connected through fiber networks.  It has to be fiber optic networks because we know fiber is the only backbone medium that can provide broadband services.  With an affirmative policy, the government must intervene so that every city and district can be reached by broadband services, even though it is not feasible on a business basis.

By the first quarter of 2019, broadband connectivity will reach all cities and districts through a 12,000‑kilometer fiber optics project that connects the last 90 cities and districts.  

Indonesia has 17,000 islands.  It is one of the largest archipelagic countries in the world, with a population of 264 million.  Along with resolving this connectivity issue since 2015, the Indonesian government has also started considering how network utilization can be optimized and have a significant impact on economic growth.

The number of internet users, which has reached 143 million, should have a positive impact on the development of the Indonesian economy and make it a new economic power, not a victim of technology developments.

Based on a survey in 2018, the most widely used social media are Facebook, Instagram, and Twitter, while the top messenger applications are WhatsApp and LINE.  It turns out that the biggest hoax distribution channel in Indonesia is social media.  The most prevalent content is related to political issues, especially the candidates in local elections as well as the presidential one, followed by issues of ethnicity.

This type of information has been proven to threaten political stability and development, and to erode the democratic process in the upcoming presidential election. 

Also, it is believed that the most effective ways to prevent the spread are public education and legal action.  Among the vast number of internet users, there are many ways to spread malicious content and information disorder, such as disinformation, misinformation, and malinformation. 

Last year, the ministry explained that at least 100,000 websites were found spreading false information, not to mention via social media and instant messaging applications.  A number of hoax actors turn out to be neatly organized.  This shows that the spread of hoaxes has become an industry.  In 2017, the Indonesian police arrested a syndicate organization called the cyber army network.  They conducted a hoax and hate speech factory against political targets.  They charged around 6,000 to 7,000 U.S. dollars to publish and spread hoaxes on social media, empowered by thousands of accounts, either fake accounts or hacked accounts. 

The Indonesian Ministry of Communication and Information Technology is committed to carrying out a comprehensive and systematic upstream-to-downstream approach in maintaining a healthy internet ecosystem in Indonesia.  The upstream part is to strengthen education and the development of human resources.

The national digital movement SIBERKREASI is a harmonious product among the government and mass media to promote a better internet for all.  They promise that SIBERKREASI will become a part of formal and informal education in education institutions.  Then at the downstream part, the ministry will collaborate with law enforcement officials from the Indonesian police to act against those who intentionally produce and/or disseminate malicious content that violates the law. 

Thank you very much.

[APPLAUSE]

>> MODERATOR:  Thank you, Mr. Latif, for your presentation as well as opening statement.  Now, as we have our panel already in place, I would like to raise a question as a follow‑up to Mr. Anang.  As you may have heard, at some point, legal action needs to be taken to ensure that misinformation and fake news do not spread further. 

But the fact is that many believe that freedom of opinion and expression shall not be limited, even though it can also be abused to deliver offensive speech and negative content.

Where do we draw the line between freedom of expression and ensuring that disinformation and fake news do not spread further?

>> AMOS TOH:  Right, I think that's a good question.  Before I go into that, I know some of you might be disappointed that David Kaye cannot be here today.  He sends his regards.  Unfortunately, as you said, he has to be back in L.A. for a family emergency.  So I serve as legal advisor to the Special Rapporteur, and I think your question raises issues about what the fake news problem is, right? 

So the way ‑‑ and I think there was an excellent background paper, and Mr. Latif's presentation kind of alludes to some of the issues that underlie the fake news problem.  I think the fake news problem sits on top of a complex stack of social, economic and political issues, and that was well reflected in the background paper. 

I think there were three things in particular that came out with that paper that are very relevant to this discussion on what the role is for human rights standards.

The first one is the ideological fault lines and ethnic tensions that stem from Indonesia's history with communist insurgencies.  It's a history that needs to be grappled with and which has fueled suspicion against ethnically Chinese populations well before this rise of so‑called fake news.

The second issue is, you know, and Mr. Latif talked about this a bit, the syndicate and how hoaxes have become a huge source of financial gain.  But that also kind of sits on top of a network of platforms, right, in a social media culture that very much still fundamentally prioritizes virality over veracity, right? 

And so if you think about how that interacts with the way content is fundamentally being promoted on these platforms, right, and how the advertising model drives that kind of content, then we might think about solutions that go beyond, or that do not have anything to do with, prohibitions on content.

The final thing is the political literacy issue which the paper flags about how the educational system has not necessarily equipped people with the standards of digital literacy to cope with this high information age. 

So I think there are some issues here, and I think just by articulating these issues, it demonstrates that a straight-up prohibition of content, whether it's falsehoods or otherwise, in itself actually presents problems.  These prohibitions usually are vaguely formulated, and I think we can clearly see that they may not be the solution we're looking for because they don't resolve all of these underlying issues that make fake news such a potent issue today.

>> MODERATOR:  Thank you for your input.  What I understand from your statement is that prohibition is not always the answer to this issue.  There have to be other means and approaches to accompany this prohibition as well, I think.

>>  AMOS TOH:  Yeah, so I think that, you know, content-based prohibitions are generally suspect under international human rights standards.  A group of freedom of expression experts from various intergovernmental institutions have put out a joint declaration on fake news, disinformation and propaganda, and one of the general principles they articulate, based on observations of how states have been regulating this issue, is that a lot of the time the legislation is vague, right?  It prohibits falsehoods but doesn't really define, and I don't think is capable of defining, what a falsehood means, which leaves it very open to the kinds of abuses of censorship that we see can be prevalent when you have a very vaguely formulated content-based prohibition.

>> MODERATOR:  Thank you for your clarification.  I guess we will go to the next panelist, Ms. Irene Poetranto.  So Ms. Poetranto, how do you see hoaxes and disinformation in the preservation of internet trust? 

>> IRENE POETRANTO:  Hi, everyone.  My name is Irene Poetranto.  I work with the Citizen Lab at the University of Toronto.  Thank you for organizing this open forum, and thank you to all of you for being here today.

So I'll just raise a few points with regard to the question that the moderator just raised.  The first issue I think is definitions.  The fact that definitions matter.  What do we mean when we say hoaxes or fake news or disinformation? 

Oftentimes, precise definitions are left out of the laws and policies meant to handle the problem.  I believe my colleague Jac sm Kee will demonstrate why this is a problem, specifically with the case of Indonesia.  As my colleague mentioned earlier, how hoaxes or fake news are defined could potentially result in infringing freedom of expression online.

With fake news increasingly becoming an issue, there's also increasing pressure on platforms to regulate the spread of hoaxes and fake news, and I believe my colleague Jake will touch on that in his presentation.  So I'll leave that out for now.

It's interesting that this morning, research from the BBC indicated that a rising tide of nationalism in the case of India is driving ordinary citizens to spread fake news.  I think this is not unique to India, and it's something that we're seeing in Indonesia as well. 

The fact that Indonesia is a country with religious and cultural diversity makes the issue of hoaxes and fake news potentially explosive.  So the point I'm trying to make is that false news stories are not just a problem in the West but in other countries around the world as well.

So the question is, why does it matter?  Right.  In order to maintain an open and democratic system, I think it's important for government, the private sector, civil society, and institutions to work together to solve this problem because it is very complex.  It's great we have a multi‑stakeholder panel sitting in front of us today, and I'm looking forward to the discussions on how to counter this issue.  I'll end there.  Thank you.

>> MODERATOR:  Thank you, Ms. Poetranto, for your input.  And I think we can also continue to Ms. Jac sm Kee.  As a fellow nation and neighboring country, we face the same problem.  We also recently heard that Malaysia has passed a law to counter fake news, and there are some who say that it may prevent proper enforcement of human rights and protection of freedom of speech. 

And I would like to know more about how civil society can engage in influencing the government to comply with internet freedom.

>> JAC SM KEE:  Thanks very much.  I think first of all I need to clarify that the anti-fake news law was never intended to counter or address disinformation.  It was rushed through parliament ‑‑ not really rushed, but presented a month before the elections ‑‑ and the primary objective was to prevent people from talking about or sharing information or having discussions about a very high profile corruption case involving the then prime minister.  So I think the intentions do matter.

That said, I think disinformation is a real problem and issue.  I'll share with you a particular example that I'm familiar with in Malaysia.  Just prior to the election, a website popped up and it looked like a news website, a news aggregator website.  It seemed very credible, very well resourced.  What happened was it had a lot of headlines that were click-baity.  And the headlines and the news were basically trying to put together two different things: one is gay people, the LGBTQ community, plus progressive Muslims.  So they put these two together to create news that got shared very widely on Twitter and social media because it was click-baity, at a period when I was squishing 20 to 30 bots a day on my accounts.

So what the impact of this is that this is ‑‑ sorry, I'm just trying to look at my notes because I'm talking very quickly.

What it did, when this circulation happened ‑‑ and as was alluded to earlier, the platforms themselves privilege virality over veracity ‑‑ was that it got shared very widely and encouraged attacks on non‑conforming users: on what is or is not Islam, what is or is not Malaysian, attacking particular kinds of individuals who were already marginalized, for example, young women, trans people, ordinary people who wanted to have a conversation about citizenship, nationhood or religion.  It created a broad-based policing of community online. 

This resulted in raids on venues that were presented as being queer events.  And it's increasing.  You're seeing very real impact from this kind of situation. 

So then the community decided to do research on this, to try to find out who funded the website, and found that it had links to government and state agencies, who were the ones who circulated the news in the first place, in a targeted way.  This relied on an existing network of users who are interested in social capital, gaming the network infrastructure of social media ‑‑ those 20 to 30 bots, again ‑‑ and the government has also already said very openly that they spent hundreds of thousands to build a cyber army to monitor the internet.

Maybe also for the reason of trying to keep the internet safe from disinformation.  I think what this reveals is that it's a very complex landscape.  It's not very simple.  It's not direct.  You have many different actors involved: governments with resources who want to control the narrative and news cycle and who are using an army of people to do this; individuals who want to do this in order to consolidate their own power; private actors who benefit from this disinformation; companies being created; a media landscape that is weak; high skepticism and mistrust of institutions; and networks of trust, basically forwarding by somebody I know and trust.

So it's super complex, and you cannot have a simple solution to deal with this ‑‑ okay, we're going to have a law and the law is going to deal with this ‑‑ especially when it's a very badly written law with poor definitions that wants to create weird institutions to do this work.

So a complex landscape requires a complex ecosystem approach to deal with it, and I'll stop for now but would like to continue at some point later.

>> MODERATOR:  Thank you, Ms. Jac sm Kee, for your sharing.  It is very interesting that, as we know, misinformation, fake news and disinformation form a very, very complex landscape.  There is no single cure that can solve this issue instantly.  So I think we will go to Mr. Jake Lucchi from Google, who will share how Google as a platform provider can do more to counteract harmful negative content on the internet and help with compliance with prevailing regulations in certain jurisdictions.

>> JAKE LUCCHI:  Good morning, everyone.  Thanks so much to the government of Indonesia for inviting me to be here.  It's a pleasure to be with such a distinguished group of co-panelists.

I agree with basically everything that's been said so far.  That makes my job pretty easy.  This is a really, really complex problem.  I think one of the challenges for identifying solutions is that when people use the term fake news or misinformation, they're often referring to many different things.  We've heard everything from election interference to stoking up hate speech and religious tensions to trying to stoke up hatred toward particularly marginalized groups.

There's also lots of discussion of spam and misleading types of content; some people use it to refer to reporting errors or biased reporting or opinion pieces masquerading as regular, objective journalism. 

So I think that just shows you how complex the problem is when we're not even talking about the same thing.  We may be talking about many different problems.  And the solutions that we need to employ will probably vary a bit depending on the different manifestations of the problem we're talking about.

So that's informed a lot of the way we try to approach this at Google.  We know it's a hard issue but one that cuts to the very core of our mission.  At Google, our mission has always been to organize the world's information and make it universally accessible and useful.  So if you think about misinformation, that's the antithesis of our mission ‑‑ it's what we're trying not to do.

We did recognize there were places where we had weak points and weren't doing as well as we could have.  So we started thinking about ways we can do three different things.  One is product solutions: how can we do things within our products to make them do a better job of surfacing authoritative content while also demoting things that aren't authoritative or could be misinformation.

Then the second two pieces I think are actually as or more important, which is how can we help to support journalists who are doing high quality journalism to both get their work into the digital space, give them tools to make sure their work is easily discoverable by consumers and users, and then also to make sure that we have networks of partners who can do fact checks and who can actually work with journalists to make sure that common stories that could be misinformation are debunked and they're easily visible to consumers.

And then the third piece, which was mentioned earlier as well, is media literacy.  At the end of the day, we know that misinformation has always existed, even before the internet.  We had to deal with gossipy neighbors telling things that were not true about people who lived down the street.  The fact that people are going to say things that aren't true will always be a problem. 

So how can we make sure that the education our young people are receiving ‑‑ and even we as adults are receiving ‑‑ gives us the critical thinking and skills to be able to navigate the online world, check our sources, and have the orientation to not just believe but to go through and try to find whether something can be verified.  So this is another area where we've been doing quite a bit of support.

I'll share just a couple of things in the first area that are important to mention.  For us, at the end of the day, it's going to be impossible to ever determine whether all of the content online is true or false.  So we've been spending a lot of time trying to come up with scalable solutions to get at some of the virality dimensions that were mentioned earlier.

For example, we started making a lot of improvements to our algorithms, starting with search, and this has later been added into YouTube, where for queries related to public interest topics, we prioritize authoritativeness over relevance. 

What that means is, say you search for something like movie show times.  There are two things you might want in surfacing an answer to your question.  One is how relevant the result is to the query you've asked.  The other is how authoritative the source is, right?  And so for certain queries, you might think, I actually just want something that's really relevant ‑‑ exactly the question I asked ‑‑ and I don't care so much how authoritative the source is, because it's a relatively easy question.

Then there might be other topics, like things in the public interest, where even with a source directly on point to the question you asked, if you're trying to get an answer on a sensitive topic, you want an authoritative answer.  So there we focus more on authoritativeness and less on relevance.

We're also making fact checks more visible; for example, in the search bar, you'll now see fact check tags for many stories.  We're trying to make sure people have access to the quality information out there and that it's easily at their fingertips.  Those are some of the things we're thinking about from the product perspective to promote the good.  And to get at some of the bad stuff, we no longer allow advertising revenue for content where the identity of the producer is misrepresented, as in this example here.  We don't allow that type of content to receive ad revenue anymore.  So we're trying to get at the money that can feed into some of these types of problematic misinformation we see.

We also have policies that prohibit hate speech.  So any type of misinformation that would be stoking up hatred on the basis of someone's religion or ethnicity or sexual orientation, we would not only demonetize; for products like YouTube where we host content, we don't allow that kind of content at all.  So we would remove it from the platform globally.

So it's very complicated.  We have a lot going on, and I've only given a couple of examples.  This is to show there are so many manifestations of the problem, and we're trying to think through them one by one: what's the most appropriate response, and how can we partner with journalists, governments, and third parties to figure out ways to come together to solve the problem.  Thank you so much.

>> MODERATOR:  Thank you for your presentation.  As you may know already, we were supposed to have Ms. Julie Ward in our session.  However, she will come later as she has another matter to attend to at the moment.  While we're waiting for her, I think it will be better for us to proceed to our discussion session, as this is an open forum.  Therefore, I would like to invite all of our participants to address the panel if you have questions on the deliveries or relevant issues.  Thank you.

>>  AUDIENCE:  Hello.  I'm here with a team from England. 

My question is about definitions again.  I think it's really important to try to contextualize this issue, and you, Jake, especially mentioned trying to find the line between truth and falsehood.  I think one of the main problems that I find with fake news as an issue that comes up in discussion is that oftentimes it's not simply true or false. 

And we've had a lot of mention of Twitter bots already as being a problem in the Malaysia case study, or all of these other kinds of tangential issues with misrepresentation or manipulated images and bots.  So I guess my question is, how do you tackle fake news when really it's not one issue; it's all of these several, multilayered issues coming together? 

>> JAKE LUCCHI:  Was that one for me or Jac?  I couldn't catch it.

>>  AUDIENCE:  Go for it if you want to.

>> MODERATOR:  So from the panelists, who wants to address this? 

>> JAC SM KEE:  I think you hit it exactly where it is.  It is a complex, multidimensional issue, and I think the other point about it is that it does differ from context to context, and it manifests and expresses itself differently.  So I think the first thing that is very, very critical and necessary is research: research based on context, especially by communities who are particularly affected, to understand how it is happening, what the strategies are, who the actors are, and what the different kinds of economies are that are circulating and underpinning it.  Because without research you really are not able to respond to this.

So to everybody sitting in the room who is interested in research or funding research in some way: prioritize this.  It's really difficult.  It would also help for technology companies to develop tools and products, as you call them, that enable research to be easier.  We tried to do this.  I'm with the Association for Progressive Communications ‑‑ I forgot to introduce myself ‑‑ and we're a global network organization that works on ICTs, social change and human rights.  We tried to do the research in three different contexts, and it was extremely difficult. 

It was difficult to get the tools, to get the data, to resource it.  And when you do the research, it's difficult to say, "I want to publish this," because it ends up being for your own understanding.  I want to emphasize these points as well.  That's really critical.

And then the other is to really strengthen media as an institution.  Whatever it takes to strengthen this in terms of the environment ‑‑ whether it's strengthening freedom of expression and information laws, or data and information laws ‑‑ that is actually critical, because we need to in some ways renew our faith in institutions to be able to do some of these things, right?  Because right now there's a huge trust deficit and skepticism in terms of the institutions that we have developed for truth.

And the other ‑‑ and I think ‑‑ and I just want to say two more things.

One, it's necessary to support content creation by different actors and communities, whether it's fact checking or creating alternative narratives and so forth; it's extremely difficult to do that.  I think often people fall into these kinds of economic incentives in order to generate and build content.  But to support other kinds of content creation is particularly critical at this juncture.

And finally, literacy is something that is not just for young people or in schools.  It is a literacy that is necessary for all ages and needs to happen at all times, to create a communications culture that is not just about trying to figure out whether this is the truth or not.  I think that's the problem with definition: how do you define whether something is the real truth?  It is about contesting knowledges.  It is about contesting frameworks.  It's about power, ultimately.  So what you need to do in this particular moment is not to say this is more truthful than that, but to be able to generate a kind of communications culture that says, I'm going to critically analyze different things by different people, whatever source it comes from, whether it's a website and so forth.

So it just requires a complex, multidimensional approach.  No simple solutions, I'm afraid. 

>> JAKE LUCCHI:  I agree with the premise of your question.  As I was mentioning earlier, we look a lot at who the purveyors of this information are.  For example, if there are monetary motivations, demonetizing through advertising policies will have an impact.  If it's really about stoking up hate speech and not monetary motivation, that will not have an impact, so you need to have content policies to make sure you're prioritizing safety for particular groups.

So we try to think about the different manifestations of the problem.  At the end of the day, I agree with Jac that having this culture of media literacy and being able to engage with information is super, super important.  If I had to name the one most important place to be focusing energy, I agree with Jac, it's probably there.  At the end of the day, no matter how great the solutions we come up with, this problem is always going to exist.

So I think we have to have a culture of being able to critically engage online.

>> MODERATOR:  Thank you very much. 

>> IRENE POETRANTO:  I'd also like to say I agree with Jac.  It's great to be on a panel with Jac.

On the point of there needing to be more research, I couldn't agree more.  As Jac mentioned, the biggest hurdle with research is always funding.  If there are funders in the room, research on this topic is definitely required.

And I would also like to say that I think partnerships between research institutions and platforms are required, because some platforms are easier to do this type of research on than others.  I've heard Twitter being described as a cesspool, and I think we can say that because it's a lot easier to scrape data on Twitter than on the other platforms.

So I think there needs to be cooperation between academics and researchers with these platforms to do the research and the funding that is required.

>> MODERATOR:  Thank you, Ms. Poetranto.  And I would like to go to the second question, from the lady in the back.

>>  AUDIENCE:  Thank you very much.  Hi, everyone.  I work as a researcher for policy research. 

It is an open question for all of the panelists. 

There is this statement from the previous speaker, Mr. Latif, that legal action is one of the most effective ways to prevent disinformation, and I want to ask what the view of all the panelists is about imposing sanctions on online intermediaries for user content that amounts to disinformation, having in mind that the Indonesian government is gearing up for the coming election and I think that disinformation issues are ramping up in Indonesia.  Thank you very much.

>> MODERATOR:  Thank you for your question.  I think Mr. Latif would like to address the question first of all.

>> ANANG LATIF:  I think the question should be for the other speakers here, but let me clarify.  From my experience in the government handling this, legal action cannot walk alone.

We have to make fake news our common enemy.

It's important, but much more important, I agree, is how to educate people and how to improve people's understanding of what the internet actually is. 

And we have to involve all stakeholders here to combat fake news.  What we have done is establish SIBERKREASI to involve every stakeholder, from researchers, from the economic sector, from non‑governmental organizations.  This is because the government believes we cannot reach all parts of the country alone. 

We have to know that the internet and the information on it not only can kill people but can also destroy a country.  This is very dangerous.  We have to be aware of this. 

The government has to be serious and provide a serious organization to combat this and face this situation.  Thank you.

>> MODERATOR:  Thank you, Mr. Latif, for your clarification as well as feedback.  And I think we have another speaker who would like to respond.

>>  AMOS TOH:  I think the International Covenant on Civil and Political Rights, which Indonesia has ratified, states that restrictions on freedom of expression must be provided by law and necessary and proportionate to fulfill specified legitimate aims.  And the part of the provision that is really relevant here is the provided-by-law requirement.

And in international human rights jurisprudence, it's clear that it can't simply be law on paper.  So you can't simply say that because these sanctions are legally mandated, they therefore fulfill the requirement.  Human rights law requires that laws have certain qualities in order to be considered a legitimate legal standard under the ICCPR. 

So in this case, the way these laws have been formulated and rushed through parliament without appropriate public consultation raises a lot of red flags.  And on as complex an issue as disinformation, a simple prohibition really doesn't necessarily do the work that states are obliged to do under the ICCPR.

>> MODERATOR:  Thank you.  And we would like to welcome Ms. Julie Ward for her attendance in this meeting as well.  Yes, Ms. Julie Ward, as I actually have previously announced to the forum that we are currently in the discussion session and waiting for you.  We would like to pause this discussion session first and proceed to your speech or would you prefer to keep at the discussion first?  Personally, I suggest to go with your speech first before we continue the discussion. 

>> JULIE WARD:  Well first of all thank you for inviting me to speak at this event, and I am very sorry I couldn't be with you for the whole of the discussion. 

It's an issue that I really care about for a number of reasons, not least because my country and other European countries, but also countries in the developing world, have been the victims of targeted interference, fake news, and distortion: frankly, interference with democracy. 

The result in my country was the Brexit referendum.  We know that the election in the U.S. was interfered with.  We now know that the election in Brazil was hugely interfered with via WhatsApp. 

I actually do election observation for my parliament, the European Parliament, and the same methods of interference that were used in the States were also used in the last French presidential elections, and we know the same methods were used in Kenya as well.  Not many people know about this, but you should really look at what's been happening.  The same companies and the same people are using the same methods, and increasingly making them very sophisticated. 

So I believe that what we're talking about is really fundamentally challenging our democracies.  So how can we make sure that changes in the media happen in the interest of democracy and pluralism, and not the contrary? 

And I work in the education and culture committee of the European Parliament, and for me it's about ensuring that education for citizenship, including e-citizenship, is at the top of the education agenda. 

And the second important pillar for me is supporting, through proper means, quality journalism and media.  And we can't avoid the issue of the responsibility of social media and the platforms in combatting fake news, and we are beginning to take measures regarding that in the European Parliament. 

So I'd like to talk a little bit about media literacy, because media literacy enables citizens to both use and create media content effectively and safely. 

Media literate people, and we should also say digitally literate people, are able to exercise more informed choices.  They can understand the nature of content and services.  They can take advantage of the full range of opportunities offered by communication technologies.  And that's important because the internet should be a space for the common good.  It's not just a space for danger, for hate, for misinformation.  We have to reclaim the internet for the common good.  In that respect, media literacy and education for citizenship are important not only for learning about the tools but also for encouraging a shift in political cultures and practices, so that individuals can be not only digital consumers but also active citizens in connected societies.

And media literacy, for me, it's not about being passive but it's about governance and participation.  And children and young people should be able to participate.  They need to be able to access information.  They need to understand how media works.  They need to develop critical thinking.  They've got to learn how to deal with a whole range of different opinions, how to react to online violence, hate speech, cyber bullying, and so on.

We talk a lot about a skills gap in the European sphere, but a key skill that young people need in order to make a difference is digital curiosity and creative thinking.  So I think we have to support quality journalism and media pluralism, as I said before, and we should investigate what it takes to encourage the development of independent, quality information sources online, and what opportunities e-democracy might offer in terms of renewing different forms of political participation and communication.

Because I do election observation, I've seen many, many different models.  And we only do election observation in fragile democracies, frankly, but we ought to be doing it in some of our western democracies, where we think we have the best models of democracy.  Because frankly we've seen how we've fallen very, very short recently. 

We've got to support quality journalism, but we've also got to defend freedom of expression, and that must include defending attacked or imprisoned journalists. 

I've been participating in a project with the Committee to Protect Journalists, highlighting many of the journalists who are currently imprisoned.  I have to say huge numbers of them are in Turkey.  We've been partnering with particular journalists so that we can support them, raise their cases, and keep them in the public domain.  Because what happens is people are on the front page, then on the third page, and then not in the newspapers at all, but they're still in prison and still under attack. 

So the person I have been supporting is Zara, a Kurdish woman, a feminist, and the editor of a really amazing feminist media collective.  She's been in prison for painting a picture depicting Turkish tanks as beasts, as animals, in the city where the Turkish government bombarded civilian targets.  And I've been there, so I know what I'm talking about.

Maybe finally, just a little bit about safety online.  I'm a children's rights campaigner, so one of the discussions we have to have is about making the internet a safe space for children and young people, not taking away their rights but making sure that they're safe.  That also applies to vulnerable people. 

And one of the issues for us is that we see many platforms taking down pornographic content or dangerous content or hateful content, but it doesn't stay down.  So a big issue for us is making sure that this content stays down, that it's completely removed.  Okay.  Thank you.

>> MODERATOR:  Thank you, Ms. Julie Ward, and thank you also for your statement.  We will proceed to the discussion session, as we agreed previously.  I see some hands raised, and I would like to give the floor to the gentleman.

>>  AUDIENCE:  Thank you very much for the floor.  I wanted to touch upon an additional thing that maybe hasn't been discussed so much yet, namely that there are indeed some governments and countries that can and do produce misinformation and sponsor its production to forward their own agenda.  I will not name any particular countries, but I'm sure everyone can think of some of them.

And what do you do in this case?  I mean, they will not care about ad money, for example.  You can't really impose any legal punishment on them, especially if they operate in another country. 

What is the solution in this kind of situation?  I would like to hear your views on this, because it is even more complex than when somebody is simply doing it for money or for fun or for whatever the reasons are.  Thank you.

>> MODERATOR:  Thank you.  I think the question will be better addressed to Mr. Jake.  Yeah.

>> JAKE LUCCHI:  Yeah, this is a very good question.  So there are some things you can do regardless of the origin of misinformation.  For example, we do a lot of work on trying to combat inauthentic accounts, closing fake accounts, which applies regardless of whether they come from a source outside government or within government. 

We also have policies around, for example, misleading metadata on a lot of our platforms as well, which can allow videos that mislead users to be struck.  Again, the origin of the post doesn't matter.  It's delicate when governments can be the source of disinformation, but there's a lot in the way we enforce our policies and approach these issues where it doesn't matter so much whether it's government or non‑government; we're going to take the same action regardless of the source.  So that's what I would say on that.

But definitely recognize that's a challenging issue.

>> MODERATOR:  Thank you.  To our next panelists, we have to limit our responses because we also have a time limitation.  Therefore I would like to open the floor again, because I still see many hands raised. 

Yes.

>>  AUDIENCE:  Thank you.  I am from Russia.  I am a representative of civil society, as I am a member of the Civic Chamber of the Russian Federation, and I have a question for Jake and maybe for David.  You know, Russian media have a lot of problems with platforms such as Facebook and Google.  A lot of posts by official Russian media, and even their accounts, were removed from Facebook and from Google without explanation. 

And, you know, all the letters sent from Russian media to Google went unanswered, so Russian media didn't receive any explanation of their bans; they didn't receive any letters at all.  So my question is: Who are those judges?  Where is the court?  Who decides what is fake news and what is hate speech?  For example, several days ago I was in Washington, DC during the rally against Donald Trump near the White House, and I heard the speeches from people against Trump.  It was almost hate speech, but Google and YouTube promoted such media without any problems.

So we see that hate speech against Donald Trump is okay, while news coming from Russia immediately becomes hate speech or fake news.  So who decides? 

>> JAKE LUCCHI:  Yeah, these are very challenging issues that you're raising.  So a couple of things.  One is that we have many policies that we enforce globally, as I mentioned earlier, but one consequence of being a company that operates in many jurisdictions is that we also have to comply with local law as it's written. 

That means if something is illegal in Russia, then for our local service that exists in Russia, we may be required by the government, pursuant to a valid legal order, to remove content from the Russian version of our products.  So I can't comment on the specifics of the cases you're raising.  That is the reality for the company: you have to operate according to the laws of the country you're operating in.  And that removal would not be global; it's just for the Russian, local versions of the products.

That said, the same content may not be illegal in other countries, so this enables us to be compliant country by country.  We do publish this every year: when something has been removed pursuant to a valid legal order, users have a way of understanding what's been removed, how many requests were made, and for what types of violations.  All of that is published on our website every year, so the public can be made aware of what's being removed on the basis of a legal request.

To your question about whether we would decline to remove something because it's related to Donald Trump: the answer is no, our policies apply the same regardless.  But we have certain guidelines we use when enforcing them.  Things that have educational, documentary, scientific, or artistic value we sometimes view as less likely to be taken down, because we think the public has an interest in knowing about them. 

That means, for example, if there's a Donald Trump rally where a particular speech is being conveyed that might be of interest to the public in understanding what's happening in the political dimension, those videos may stay up because the public wants to know what's happening in a political context. 

For those types of videos, we might age-gate them, so people have to be 18 years or older to view them, or add a warning message.  But sometimes we will leave things up in the public interest because they have journalistic value.  So we use that to contextualize our products.  These are very complex cases.  It is something we think very carefully about, trying to strike a good balance between what the public needs to know and making sure that people feel safe and secure on our platforms.

>> MODERATOR:  Thank you. 

>> JAKE LUCCHI:  I don't know the specifics of the case.  I'm happy to take your contact.  I don't know about the specifics of that case though.  Sorry.

>> MODERATOR:  Thank you.  And I see still some hands.  1, 2, 3, and 4.  So I think I'll start with there first.

>>  AUDIENCE:  Hello, everyone.

>> MODERATOR:  Sorry, before you start your question, please briefly introduce and also briefly state your question because we have time limitation.  Thank you.

>>  AUDIENCE:  Okay.  My name is Emanuella.  I'm from Brazil and I'm a fellow.  My question relates to the Brazilian elections, which were a little crazy.  People use WhatsApp a lot.  In our law we have a guarantee of net neutrality, yet the practice of zero rating is accepted in our country.  So people receive chains of texts on WhatsApp with fake news, but they can't check them, because most people don't have full access to the internet; they only have WhatsApp.  So I'd like some input: how do you think we could change this situation, considering zero rating and the net neutrality issues?  Thank you.

>> MODERATOR:  Thank you. 

>> JAC SM KEE:  That's a really great question.  Access to the internet, and how we gain access to the internet in terms of everything surrounding it, whether it's digital literacy or content skills and so forth, is a really important part of this conversation that I think also often gets missed. 

This matters, but we're not directly making the legislation.  And, you know, we have a dialogue through the youth IGF already, so we probably know more of the detail, but I would really welcome anybody who has a concern to write to us, because what we do is have a dialogue: we listen and raise concerns coming from citizens about any of the issues that will impact people.  So that's probably the best way.  I think my colleague, who is here with me these two days heading our delegation, is working much more directly on this than I am.  So I'm really willing to take the questions, forward them, and get answers for you.  Okay.  I'm not evading it; it's just not my area of competence, okay?

>> MODERATOR:  Thank you, Ms. Julie.  I'm sorry, we're out of time.  You are more than welcome to come up later to speak with the panelists.  We have our concluding remarks now, delivered by a producer who is very passionate about digital issues through her work and activities.  Please, Ms. Marcella, the floor is yours.

>> MARCELLA ZALIANTY: Well, I really care about this issue, and the fact is that the threat of fake news, or what we call hoax in Indonesia, is massively affecting the younger generation, especially now that internet access is getting easier.  And this is not balanced by users' literacy and ability to assess the information they receive.

Young people are predicted to dominate the population by 2030.  According to a 2017 survey by the internet service providers association, the young generation, aged 13 to 34, is the largest group of internet users at 66.2 percent.  Massive hoaxes will reach our young generation, the most active citizens among digital natives.  They need to be equipped with critical thinking skills and a fact-checking culture, and they need to be involved in managing these issues as major users of the internet. 

Since Indonesia has a very big population with different languages, different religions, and different cultures, fake news and hate speech threaten not only our democracy but also our unity. 

Society needs to be directed to apply real-world ethics when on the World Wide Web.  Often people do not think far enough ahead, because they feel online interaction is not really face to face, even though the consequences are now the same, as Indonesia has an IT law that can prosecute speech and behavior on the internet that violates the law. 

Now, the most important thing we are focusing on is digital literacy, by creating this movement called SIBERKREASI.  I'm one of the initiators, and I think we all really agree that literacy is a very fundamental thing we should be doing right now, because we are not only doing what is immediately required but also thinking about the long term in managing this issue.

SIBERKREASI is the national digital literacy movement: a multi‑stakeholder, multidisciplinary collaboration forum consisting of many institutions, communities, civil society organizations, academics, artists, content creators, media, and the private sector, as well as the Ministry of Communication and Informatics. 

And of course we have one common goal, namely to socialize the importance of digital literacy.  On a regular basis, we have 92 supporting partners engaged in education related to digital literacy.  Every week, there are at least 5 to 12 activities carried out by our supporters, both collaboratively and independently. 

Here, digital literacy is not only the ability to operate a smartphone or computer device but also the cognitive and emotional skills needed, because the issues a digital literacy program must address are very broad, ranging from the protection of children and women from exploitation, to handling negative content, cyber abuse, and so on. 

At SIBERKREASI we want to convince the public that a multi‑stakeholder approach is needed in order to realize the internet as a safe and comfortable place.  Not only the government, for example the Ministry of Communication and Informatics, or the social media platforms, but also initiatives from the community are needed to jointly counter the negative effects of the internet.

One example of our collaboration is with the Ministry of Communication and Informatics, primarily because we share a common vision and mission, such as developing human resources, empowering communities, advocating for internet governance policies, and promoting digital content and informal education.  That's actually our target.

>> MODERATOR:  Thank you for your concluding remarks.  On top of that, I would note that defining falsehood, determining what is true and what is not, is something we still must work on.  And of course, prohibition of content is not always the best approach; educating those who use the internet is also work we have to do in collaboration with the community, which means we still have to pursue a multi‑stakeholder approach to ensure that disinformation and fake news distribution can be prevented in the future.

As the moderator, I would like to thank all of the participants for your participation.  I would like to apologize for not being able to accommodate all of your questions, but of course the panelists would love to see you around.  If you have a question, you can raise it after this session is completed. 

The session is completed and thank you very much for your attention.