IGF 2019 – Day 1 – Convention Hall I-C – OF #44 Disinformation Online: Reducing Harm, Protecting Rights

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

>> MODERATOR: If I can kindly ask you to take your seats, we will begin in a minute.

Okay. If you would allow me to begin: good afternoon, and welcome to the panel on disinformation online -- reducing harm, protecting rights. I'm very happy to see so many of you, even though the lunch break has just started.

We have three brilliant speakers here, sitting in alphabetical order: Damian Tambini, an expert in media and communications regulation at the London School of Economics; Miranda Sissons from Facebook, where she is the new Director of Human Rights; and a senior expert from the NATO Strategic Communications Center of Excellence, Sebastian Bay.

And we have two main topics that we would like to address during this panel.

Oh, I should actually introduce myself as well. My name is Jakub Kalensky. I work at the Atlantic Council; previously, I worked for the European Union. I have been covering Russian disinformation campaigns in both capacities.

We have two main topics for this panel. The first is: what are the emerging challenges and approaches to tackling disinformation? And the second is: what have we done so far, and what else do we need to do?

The way we will do this: I will have two rounds of questions for the panelists, giving them five minutes each for their answers, and I hope we will have 15 to 20 minutes for questions from the audience. So please prepare your questions for this final part.

And since this panel is so short, just 60 minutes, let us kick off. Damian, please tell us, what do you see as the emerging challenges in countering disinformation? The emerging threats -- what keeps you awake at night, as they say?

>> DAMIAN TAMBINI: I think one of the key challenges as we approach disinformation is to be absolutely clear that it is a very different problem in democracies than in nondemocracies. So everything I have to say today will be about how democracies deal with this particularly difficult problem.

And my main point of difference is that we need to get away from the whack-a-mole approach, where disinformation comes up along with other kinds of undesirable content, and the regulatory desire is to block it, stop it, and move on to the next problem.

I believe the intellectual change we need to make is to think about it systematically -- systemically. Then the question becomes, rather: how have democracies evolved media systems that were optimized for truth and trust over the previous centuries?

And my argument is really that in democracies we have a number of positive interventions: we have institutions which have evolved, and we have a particular approach to freedom of expression. So this very interesting period we're living through -- particularly in Europe, but also in other democracies -- where we're trying to evolve new regulatory frameworks for tech platforms and for the various choke points and gatekeepers on the Internet, is replaying debates with a deeper history. Let me say some specific things about what's happening right now in terms of policy.

Behind all this, I think, philosophically, the reason we're having such a deep, profound, and prolonged debate about disinformation is because it is such a problem for democracies. The three classic arguments for free speech are the arguments from truth, from democracy, and from self-expression.

For various reasons we don't have time to go into, the problem of disinformation poses deep problems for all three. As a result, legally and constitutionally, in the way that laws on free expression are constructed, there are justifiable restrictions on free speech: on the basis of national security and on other justifiable grounds recognized in the U.N. system, under the Universal Declaration of Human Rights and the human rights covenants.

There are, clearly and constitutionally, in human rights terms, justifiable restrictions.

So what is happening to that? What are the dangers for free speech? There are very different approaches. In some countries, new forms of offense are being evolved: the French law on election manipulation introduces new categories of illegal content, and there are many other examples of attempts to bring in fake news laws. The key problem there is obviously: who decides? How is it independent from the state? How do we avoid a Ministry of Truth?

The second way to deal with this is to introduce new procedural rules on how illegal or potentially harmful content is dealt with. We can see, in the NetzDG in Germany, systems of fines and time limits to tighten up restrictions, and there is a wide debate looming on the exemptions from liability in the e-Commerce Directive, and in U.S. legislation as well.

So we have new forms of offense, we have new procedures, and we also have various forms of self-regulatory approach -- for example, the EU code of practice on so-called fake news.

So Europe is emerging as something of a laboratory of new policies. But these engage a lot of very familiar and deep constitutional problems. So what I would, in my last minute, call for is real action to bring democracies together to bottom out some of the real philosophical differences. There are deep differences between European Union and European Convention approaches on whether it is justified to positively intervene to promote truth and trust, in the way I would argue European media systems have systematically done in the past. But it also requires linking up with competition policy, in ways that we can potentially discuss.

So I think there are a number of things that democracies in particular need to come together to do to deal with these problems, and these are quite separate from how nondemocracies deal with them. We should also not overestimate the similarity between different democracies. In that context, a particular example which I think is worthy of discussion, if people are interested, is the U.K. approach. The U.K. is currently discussing legislation; we're in an election cycle, so there is no Government representative here, and I'm speaking in a personal capacity, but I can describe what is happening in the U.K.

There is a very radical approach to the second problem I discussed: changing the approach to liability and introducing what is called a new duty of care for online platforms. This is not strict liability. This is not a shield from liability. It is an attempt to change the systemic incentives so that the platforms -- all platforms -- have a different set of incentives: they are regulated to ensure they reduce the incidence of harm on their platforms, backed by a system of fines.

We're at the point in the policy cycle where this is being discussed. The parties all have positions and commitments to this, more or less, and we will see legislation in the next year for quite a different approach to online harms in the U.K.

>> MODERATOR: Thank you very much.  Miranda?

>> MIRANDA SISSONS: Hello. I wish I could echo that speech, in its specificity and its insight, 100%. I have recently arrived as Facebook's Director of Human Rights and am grappling with questions relating to misinformation and disinformation -- from the perspective of someone who has been a long-time human rights activist, and at a company where we don't confront this just in democracies but wherever the platform is used around the world.

In the last two years, Facebook has upped its game in the misinformation world, and it needed to. On the second question, I can address what we are doing along the remove, reduce, and inform framework.

And what is very clear, and I think somewhat different from the generalized discussion of misinformation, is that we're hosting a number of different ecosystem, platform, Internet, and incentive models under the rubric of misinformation, which many of us know well. And Facebook has certainly been trying to take a number of different kinds of actions to combat all of those.

That ultimately requires deep partnerships in a variety of different ecosystems, and a look at incentives overall. So in the remove, reduce, inform approach, we seek to block and remove fake accounts, to remove information that obviously violates our community standards or contributes to offline violence, and to remove bad actors.

In that work, this is an extremely adversarial space. Some of the misinformation is driven by financial incentives, and there are ecosystems that contribute significantly to disinformation as well as misinformation. Our systems are also being tested: the nature of the security, misinformation, and technical challenges evolves on a monthly basis. If you look, for example, at the most recent transparency report, released two weeks ago, I think you will see a significant rise in the number of fake accounts we are removing through AI before they are detected by users -- that has accelerated sharply in the last couple of months.

That is, I think, a testament to how much people are trying to break our systems in this regard. We also reduce the virality and spread of false news and disinformation through a number of transparency and contextual treatments. There is a lot more to do in that space. I think we as a company are learning a lot more about the cues users need to make good decisions and understand the context of information, and about adding friction to sharing.

You have seen some interventions in that space in WhatsApp, and some interventions with contextual treatments. There are many more to come.

And then, inform. Informing is often the first resort of us highly intellectual, highly rights-sensitive people: if we inform people, they will make better choices. That is an important treatment, and it is extremely important to the human rights space, but it is not as effective at the user level as systemwide interventions.

So if you ask me what is new in this space and what keeps me up at night: it is my hope that this conversation, which I think is a very fruitful one, can be driven toward the questions of prevalence and the dynamics of incentives in ecosystems, in a way that makes it actionable.

Bad actors are evolving, and they evolve much more quickly than partnerships have, and we all need to do more to partner in this space, particularly the companies.

And there is a race to the bottom at the moment in legislation that is nominally about fake news or hate speech -- legislation that may or may not be good in intention, but that lacks rigor if the idea is that it will be used to combat this problem. Instead of sound regulatory models, such as we may see at the EU level, in other places we see a lot of bad faith initiatives.

For me, from a human rights point of view, the question is how to create a race to the top. Last night I was thinking about the right to truth -- what about the Latin American examples and the German examples? Is there something there we can call on to protect against misinformation and also lift the boat for all rights? I will stop there.

>> MODERATOR: Thank you very much. And I have to agree with you: I do see that the bad actors are evolving their tactics much more quickly than we counter them, especially the actor I have focused on for the past few years.

Sebastian, what is the main challenge in countering disinformation?

>> SEBASTIAN BAY: Thank you, Damian, for saying we need to get away from the whack‑a‑mole approach.  And Miranda for talking about the antagonists and what they're doing.

I will focus on two points now: the globalization of disinformation, and meta manipulation. I think these are the main upcoming, or already evolved, problems we're seeing. We're studying the disinformation industry -- I work for the NATO Strategic Communications Center of Excellence, where we study the malicious use of social media, mostly from a harms perspective, an antagonist's view.

We see how it is evolving: certain countries specialize in developing software for social media manipulation, other countries specialize in content creation, and third countries specialize in crowdsourcing or even crowdfunding social media manipulation. We have seen how these things come together.

Just two days ago, I was experimenting with a Nigerian social media manipulation service provider: buying social media engagement from them, and seeing that they were using a set of Russian software to deliver the manipulation, and most probably content creation from Southeast Asia. The industry is coming together and becoming more effective, and we have every reason to believe this disinformation industry is growing.

Looking at actors in Europe, these are official companies. Their turnover increases from year to year, with many doubling their turnover and profits. The services are available to anyone -- not just to state actors, but to you and me. That is a problem. It is also a problem that they are so cheap and so readily available to everyone. But there are positive steps. I appreciated that WhatsApp came out and said they were going to sue any company that was set on undermining their platform. I think that was an important step.

We have seen Facebook take action against some companies that are manipulating their platform, but there is much more to be done. Many of these companies are acting in the open, and they're not really facing consequences for it.

I think the most interesting way to see how effective this is, is to just become a customer. Buy 10 likes on something. You get advertisements from the company, and they tell you when they are unable to deliver services. One company said: we can't sell followers on Instagram this week, but we're working on a solution; all other services are up and functioning, no problem, just place your orders.

If you go to the service providers -- and this leads me to my second point, meta manipulation -- what are they selling? What are the social media manipulation service providers promoting? They are promoting meta manipulation: triggering algorithms to show content. We have done experiments to see how it works. You buy views, this triggers the algorithm, the video trends, and it gets authentic views.

Now, this is good, from the antagonist's perspective, because it is extremely difficult for researchers to see. You cannot see who viewed a YouTube video, for example.

It is difficult to see that this is going on. We have also seen, through an experiment that we have done, that this is where the largest weakness in the social media platforms lies.

In about a week and a half, I will launch a report for which we made test purchases on all four main platforms, using 16 different manipulation service providers. If there is one finding to highlight now, it is that all the platforms are consistently bad when it comes to blocking meta manipulation -- views and so on. You get 100% of what you buy, all the time. This is of course a problem. This is a field that I think antagonists are moving into and using, because it is somewhere you don't get caught very easily.

So that is where I will end. Those are the two trends I wish to highlight: the globalization of the disinformation industry, with the problems that brings both for attribution and for combating it; and the problem of meta manipulation, because with the data access we have today, we cannot properly assess to what extent it is going on and what harm it is doing.

>> MODERATOR: Thank you, Sebastian, and thank you for keeping to the time. I would underline the globalization problem: not only do we see more and more actors learning from each other and adopting each other's tactics, but we also see domestic actors learning from the bad actors. When you plant disinformation as a foreign actor and have a local actor repeating it after you for purely domestic, cynical political reasons, it whitewashes the action of the initial bad actor -- a problem I increasingly see in Europe.

So we have the first challenge: how to counter disinformation without violating freedom of speech and similar rights. We have the challenge of how to evolve partnerships for countering disinformation. And we have the two problems Sebastian just highlighted: the globalization of disinformation and the increasing threat of meta manipulation.

Now, I would pick up on Miranda's point about how we evolve partnerships to counter disinformation, and this brings me to the second question.

What have we done so far, and what more needs to be done? We have representatives of three different sectors. Sebastian -- although NATO StratCom is a research center, you are probably the closest we have to Government, so you could give the Government perspective. We have the platform, private business. And we have the research community.

I would like to ask each of you: what would you need to see from the other two sectors? What would the research community like to see from private business -- from the social media platforms -- and from the Government? Damian, would you give us your perspective?

>> DAMIAN TAMBINI: Well, from the platforms, it is data. Unconditionally, on our terms. We don't need to go into that in too much detail; it is an old debate. But can I pick up -- because we have been discussing this in terms of a security problem and speaking about adversaries -- on a point that is often missed, a fundamental point about deliberate disinformation and so-called info wars.

This is a war on democracies by nondemocracies, and it is asymmetrical. If one of the participants in this conflict has genuine popular sovereignty, then it matters if the opinions of its citizens can be manipulated. If the other participant does not have popular sovereignty, you can direct exactly the same disinformation against that adversary with no impact at all on that nation state's ability to act and to respond. So it's a war by nondemocracies on democracies and on democracy itself.

So when we consider how to respond to this, and the role of platforms and the role of Governments, it is absolutely fundamental that any solution builds trust and does not undermine it. So just to pick, for example, a dominant platform like Facebook.

We need to understand that there are quite clear links between questions of competition policy -- how big is a platform like Facebook permitted to be? -- and all of the questions of censorship and free speech. If Facebook removes and reduces content -- filters, blocks, downgrades -- that may, particularly in a European approach, be considered a form of censorship.

Because effectively, this is a dominant platform controlling speech.  It has more censorship‑like consequences.

It is also really very important that the public and Civil Society trust any of the processes involved, and that there is genuine ownership by Civil Society of the mechanisms for filtering and taking down content -- all of the mechanisms for responding to what might be considered adversarial content by some, and potentially not all, of the citizens.

The issue of independence and of ownership by Civil Society of these censorship-like processes is fundamental to the ability of democracies to deal with what is a really fundamental challenge without further damaging trust in democracy. We're at a very critical moment for trust in democracies; this is a key issue. When platform-based solutions are being developed, they really must do more to involve Civil Society and ensure that they're trusted. Otherwise, their filtering is just another conspiracy.

>> MODERATOR: Thank you, I guess you might want to react to this.

>> MIRANDA SISSONS: I don't mind. I'm happy to. That is a very, very good comment, and I expect people who have been at Facebook longer than I have would say: yes, that is a very important reason -- one reason why we have put such effort into creating the Facebook Oversight Board, which has had a very robust consultation process in the last year, to try to broaden the input on these issues across content related to the community standards.

And obviously Facebook's remove, reduce, inform framework relies very heavily on reduce and inform in order to minimize the censorship concerns. That doesn't sidestep or make up for them, but it is not an approach of simply removing content; it seeks to deploy a number of tools to work on systemic issues, such as coordinated inauthentic behavior -- a pretty good policy that looks at behavior rather than content and allows us to remove a great deal of inauthentic behavior.

The takedowns are publicly announced, and the data is shared with the Atlantic Council; in an ideal world, we are also seeking to expand a number of partnerships to lodge that data so it is more available to researchers.

I'm also interested in the Social Science One experiment, which is an attempt to make a broad variety of data available to the research community to study, under careful conditions and in a carefully structured manner.

I'm not sure -- I mean, I think this question, though, is a fundamental one to do with trust, which is why one also has to look at the frameworks that engender the most trust, and those work differently in different societies. There is the human rights framework -- and as the global Director of Human Rights, I don't care just about freedom of expression, but also about all of the permissible limits of freedom of expression. These include respect for the rights and reputations of others and the protection of public order, but restrictions must meet certain tests: lawfulness, legitimacy, necessity in a democratic society, and proportionality. It is a little bit to the detriment of the human rights framework that these very good tests are technically locked away in lots of jurisprudential boxes, which makes it very hard for broader agencies to engage with them.

On my first day at Facebook, I looked at the policy on voter suppression. And I said: oh, a restriction of speech -- but look at this, it is permitted under the human rights framework to reduce manipulative misinformation directed at voters. Part of my job, in fact, is to bring some of the strength of that framework into the rather arid debates on freedom of speech that have existed to date, and to bring its protective capacity to these environments.

When I look at that, one of my greatest challenges is, to some extent, the breadth of the gap between the disinformation, misinformation, and cybersecurity world; the digital rights world in general; and the broader human rights community that has worked on, suffered from, and been involved with debates around this stuff for 60 years.

And for me, as the human rights director working with the cybersecurity teams, passionately interested in disinformation and its impact in the most fragile environments -- where we see this Russian playbook doing so much damage -- that is something we haven't surfaced here, and it is important to surface it in order to give, again, greater encouragement for the race to the top in all environments.

>> MODERATOR: Thank you very much. I will actually have a question of my own, but first I would like to ask Sebastian.

So, what has been done so far, and what more needs to be done? What would you, as a representative of the Government and research community, like to see from the platforms and from other researchers?

>> SEBASTIAN BAY: So, being a Government think tank, we have the opportunity to put pressure on Governments as well. I think a lot has been done, not least in the sense that we have shifted the discussion from regulating content to regulating everything that surrounds disinformation, focusing above all on coordinated inauthentic behavior. Of course, the reason we are interested in this is that antagonistic states are using these tools to undermine democracy. But when we study this problem, we assess that as much as 90% of this manipulation is aimed not at states or democracy but at commercial interests: it is ad industry fraud, fraud aimed at hotels.com or TripAdvisor, any such thing.

Yes, I'm a strong believer that we need to regulate social media companies to force them to put more resources into combating this problem, but also to create more of a level playing field. One of the things the report we will launch will show is that the social media companies are not all equally bad or equally good at this. Rather, there are big differences between different social media platforms -- even within platforms, or between platforms owned by the same company.

So we can see that there is a big difference here, and we need to make sure that there is a level playing field.

It can't be a race to the bottom, where whoever spends the least makes the greatest profit. We need to make sure that there is a standard for how to combat these things online.

Another aspect, which I already mentioned, is to regulate the market for social media manipulation. There are companies whose only business is to manipulate social media platforms. I have a hard time seeing legitimate reasons for allowing such services to exist, and I think social media companies could have done more in the past to fight back against these companies. Not least, we see the manipulation service providers using the social media platforms themselves to market their services.

You find YouTube channels and tutorials on how to buy fake likes, Instagram channels marketing these services, and providers buying Google and Bing ads to promote their services. This is a little bit like a bank allowing robbers to recruit on the bank's notice boards.

It is about what you can do to make it more expensive to buy these manipulation services. We need to take those steps. I appreciate what WhatsApp has done in saying they will sue any company that offers such services; I think that is an important step.

I think we need to standardize terminology to a much larger extent, and Governments should take the lead in this if the industry isn't able to. For example, each social media company reports on the number of blocked accounts, but there is no standardization of what a blocked account is. That makes it very difficult to look across the platforms and say who is better or worse at this.

Also, I think the way this reporting is done needs to look at things that are more meaningful. The number of blocked accounts is a fairly useless figure. It is useful in the sense that it shows the extent to which antagonists are trying to manipulate and undermine the platform, but we don't want to know how many times the robbers tried to rob the bank. We want to know whether they succeeded. Is the money still in the vault? How many got past the gates? How many are on the platform wreaking havoc? We don't have that sort of reporting, because we don't have the terminology and standards set for this.

I will come back to what I mentioned: we need to standardize, and incentivize, the amount of resources that social media platforms put into this. We have seen that technical know-how, the strength of the platforms and how they were built, and the resources put in by different companies make a big difference -- as much as a 50% difference in a platform's ability to combat this industry. That is, hold all the companies to the same standard. That is probably a standard that needs to be set by Government.

I think it will also help the social media companies, because it will level the playing field and make it fair: if every company has to put in the same amount of resources, it will be easier to compete in this field. I think that would be mutually beneficial.

>> MODERATOR: Thank you very much.  Always grateful for a very specific recommendation.  Happy to hear that.

Before I open the floor to questions from the audience, I will abuse my authority as the moderator and pose a question to the panelists myself.

While I do understand the concerns about whether certain measures on social media violate freedom of speech, I think it is always useful to emphasize that we already have limitations on freedom of speech. In most countries, it is illegal to deny the Holocaust.

In my country, I can't spread false alarm stories. I can't call this building and say there is a bomb, evacuate everyone, and cause panic and chaos.

When you look at what the disinformation outlets are trying to do -- claiming that the EU or a Government is trying to attack its own population, or that it is organizing Muslim migration to replace that population -- these are false alarm stories too, spread to cause panic.

In my opinion, we should consider whether those laws can be applied in this case as well.

What do you think about labeling the notorious disinformation outlets, similar to the tobacco approach? We don't ban tobacco; we say it will harm your health.

Frankly, I see no reason why YouTube should be recommending Russia Today videos about Robert Mueller's investigation saying he hasn't proven any Russian meddling in the election. It is a lie; we know Russia lied. Why should YouTube recommend a notoriously lying outlet? Shouldn't it label it as a toxic source of information, similar to a tobacco product?

>> DAMIAN TAMBINI: There are a couple of problems with labeling. In general, I think it is a reasonable thing to do. But if we assume it is going to be a solution, we might be making the wrong assumptions about the extent to which humans seek truth.

The research tends to suggest that when people decide whether to be affected by something -- whether to share it, like it, or read it -- whether it is true or not, or labeled as such, isn't a great incentive and doesn't make a huge amount of difference.

In fact, it can have perverse consequences. It might feed conspiracy theories -- for a significant segment of the market, a label might make something look like, you know, another thing they're trying to hide from us -- and that may in turn contribute to the virality of the content. So in general, labeling is a good thing, but there is another set of questions around who pushes the labels and whether labeling is something done by AI and read by AI. And there are a number of cross-cutting issues which apply to everything here: who sets the standard that would trigger the label, who makes the labeling decision -- all of which have implications for censorship and trust and all of the things we're wrestling with. We need to be careful.

It is not impossible.  It is not that I say don't do it.  I say beware of the challenges.

>> MODERATOR: Just a very short reaction. In Slovakia, there is a company called (speaking non-English language). They have a board with media professionals, academics, et cetera -- a group of about 40 people that decides: yes, this outlet belongs on the list of disinformation-spreading outlets -- and they press companies not to advertise on those outlets. So there is already a precedent for this.

>> MIRANDA SISSONS: I was going to say that we will find out a lot about labeling, because Facebook is labeling state media -- or is intending to this month, or has just launched it -- using criteria of editorial independence in the Facebook News tab.

So, I mean -- let's see. I think we're in a position of knowing that there are many challenges and hoping it will be effective. As I mentioned with our contextual treatments, we would like them to be more effective than they are, because that is the first and easiest way of informing people that what they are seeing and wanting to share may not be worth sharing. Let's see how that evolves.

>> SEBASTIAN BAY: One area where labeling has been discussed for some time is the bot issue, or automated accounts. It would be great to label an automated account. The problem is that we are currently unable to identify what an automated account is. When we do identify them and take them down -- we have figures showing that on Twitter, an account may stay up for 5,000 or 6,000 tweets before it is taken down.

Labeling would be good. It doesn't have to be tobacco-package labeling saying "harmful"; it could say this media outlet is controlled by this state, or controlled by this group of interests. When it comes to technical labeling -- labeling bots -- that is an industry challenge, because we are unable to differentiate between real and automated accounts, so thinking we can label all the bots is not technically feasible. That comes back to the fact that it is too easy to create fake accounts, and to credentials and authentication online and the need to rethink those for the future.

>> MODERATOR: Thank you very much. We still have 17 minutes left. I believe Stephanie is going to walk around with the mic. Do we have questions from the audience? I see one here.

>> AUDIENCE: A couple of things. On labeling, I shared an Article on Twitter.

>> MODERATOR: Can you introduce yourself?

>> AUDIENCE: Courtney Rajim, with the Committee to Protect Journalists. I did an analysis when YouTube first launched a pilot of labeling media outlets, and it is inconsistent. It would be great if we knew media ownership, but having worked in the field of media development with colleagues here, oftentimes you don't know who owns the media. Labeling has all sorts of problems, and I encourage us to invite the Civil Society actors who are involved to be part of these discussions, because there is no one from Civil Society up there.

One of the concerning things about this conversation is that one of the most important antidotes in a democracy is quality journalism, and we haven't talked about how to encourage and support journalism given the dominance of the platforms and the incumbency there. And second, when we talk about looking at behavior and inauthentic behavior, we see all sorts of negative repercussions -- such as in Kashmir and Egypt, where accounts and content are being taken down that do not align with the point of view of the Government or the political party in power.

So I think this is an important discussion, but we talk too much about technical solutions -- though it is good to hear discussion around competition and antitrust and those sorts of things. None of you mentioned, for example, the ability to target or microtarget audiences, which obviously has a role to play in the manipulation of people's points of view. I think there are a lot more topics to discuss here. I would definitely encourage you to think about including, perhaps more fully, members of Civil Society and especially of the journalism community, which has an important contribution to make to discussions around disinformation. Thank you.

>> MODERATOR: Thank you.  If we could take two more questions.  So we have one there.

>> AUDIENCE: Hello. My name is Myoro Jack. I work with the Organized Crime and Corruption Reporting Project in a technical capacity. I notice that whenever Facebook is criticized for anything or faces a problem like this one, like disinformation, we hear a lot about how the world is a big place, there are a lot of different regulatory environments, and in all of these places Facebook has to work -- it is very difficult. That is a fair statement. However, I would like to point out that it was Facebook's decision to become a monopolist and to be this one company for everything.

I don't think it is fair to keep playing this trump card. So my question would be: would Facebook consider not using this argument in such discussions? Thank you.

>> MODERATOR: Thank you.  We have one more question here.

>> AUDIENCE: Hey. I'm Habal, from Colombia. I have to give a little background on what is happening right now. As you may or may not know, Colombia is in some turmoil, with ongoing protests. On Saturday a curfew was announced in Bogota, starting at 8:00 p.m.

At 8:00 p.m., WhatsApp chains were being sent massively around the city, saying there were about to be massive robberies in residential complexes. This was happening all over the city -- real-time misinformation.

So a city of about 8 million people was, in the space of two hours, in total chaos and fear, because people thought they were being robbed. People were coming out of their houses with stones and sticks when nothing was in fact going to happen. How do we tackle real-time misinformation, especially when it can put a lot of people in danger? Thank you.

>> MODERATOR: Thanks a lot for your questions. So, we have a question on how we support quality journalism, since it is an essential element in countering disinformation. We have a question on how nondemocratic societies abuse rules for taking down content for nondemocratic purposes. We have a question for you, Miranda, on whether you would consider retiring one of your arguments. And: how do we stop real-time campaigns? It reminds me of events earlier in India, where the mass spreading of WhatsApp messages led to violence and death. Who would like to start?

>> SEBASTIAN BAY: I really appreciate the challenge of regulating in democratic states versus nondemocratic states. I was in Nigeria recently, and we discussed this issue. Civil Society there is legitimately scared of any type of regulation, because it might be turned against them by a state that is not democratic. Does that mean that in democratic states we cannot regulate? I'm a Swedish citizen; we like regulating. I argue no -- there has to be a difference. We have to recognize the challenge of regulating in nondemocratic states, not least because of the lack of capacity for oversight. We lack that to a large extent even in Sweden, Latvia, or Europe, which makes it difficult to regulate, because oversight is needed for regulation to be effective.

Another thing that came up is how to combat real-time disinformation. Sitting here at the IGF, thinking from the global perspective and about the responsibility to protect, it is easy to imagine the situation getting out of control, where social media platforms are no longer for sharing information but become command and control for those wanting to commit genocide. This is a huge challenge.

One example is the framework of the Christchurch Call. We have reached a consensus when it comes to defining the problem; on the solution, there is a long way to go. But we have lifted this problem up, and we need a different set of solutions for real-time emergencies and the spreading of information that might or might not be true. I will leave it at that and maybe come back later.

>> MIRANDA SISSONS: One of the interventions that needs to be strengthened -- it is a beginning intervention, and an environment to support quality journalism -- is one of the reduce and inform treatments we're using: supporting and hopefully expanding quality fact checking throughout many different parts of the world, using the quality standards of the fact checking network. That is in its infancy, because there are only 55 partners. One of the things to be involved in there is seeing how what we do can assist partner development and, again, strengthen rather than weaken an ecosystem -- one which the platform advertising business has obviously significantly weakened. That is what I mean when I talk about ecosystems: certainly some of us are trying to look hard at this and say, okay, these are ecosystem issues that we contributed to but need to help solve. It is not a panacea to say platforms are large and complicated, and that is not an excuse to invoke to justify inaction.

Rather, in fact, instead of waiting for great European regulation, the reality is that we are dealing with Singaporean, Vietnamese, and Nigerian regulation -- to name three. That is my reality every day. In a regulatory sense, anyone doing thoughtful work in this sphere is greatly encouraged, and the more universal the standard, the better.

When I say race to the top, that is something I'm very interested in. It is happening, and it is happening in environments that are nondemocratic, and it does impact journalists and ecosystems in those environments.

Other things we're doing to limit spreading in real time include the trusted partner project and building up on-the-ground networks, so that people can flag specific pieces of misinformation that can lead to real-world harm. These are active in a number of different environments. They don't substitute for the key problems with Facebook Live and the Christchurch Call.

For example, we use those every day in Myanmar, Ethiopia, Turkey, and Iraq -- to take stuff down all over the world. Those are, in some ways, first-step examples. We have put out new policies on misinformation and harm, and we will be putting out a policy on unverifiable rumors, because a great deal of the information we see -- not speaking about disinformation -- exists on a spectrum: information that by its nature causes fear or harm, and that people will share for a variety of motives. We will stand up our first teams to look at that at a policy level as well as a design level, which is really about introducing friction -- as in the WhatsApp sharing that you referred to, but in a variety of other sharing settings inside several of the apps. So there is more to come in this space, I think.

>> DAMIAN TAMBINI: Thank you very much. I think we heard a lot that underlines the importance of having different policy standards for democracies and nondemocracies. If there is a takeaway from this session, or a recommendation to give, it may be that we need new spaces and forums for a deeper debate about freedom of expression among democracies -- one which accepts that regulation and freedom of expression are not inherently opposed, and that all mature approaches to freedom of expression allow for justified regulation.

On journalism, I absolutely agree that journalists need to be involved, and that journalism -- as an ethic, a practice of truth seeking and of serving democracy -- absolutely needs to be at the table. But we need to consider it as a set of functions, not as a lobby for an industry that simply assumes more journalism is better. Journalism is changing. We need to see it as a set of functions.

Real‑time misinformation absolutely requires responses but in order to be trusted they need to be considered responses.

In relation to the press, any form of closure or stopping of speech would require the independent involvement of a judge -- an injunction. And I would argue that Facebook and other social media platforms need to evolve fora and procedures that allow any emergency shutdowns and emergency restrictions to involve an independent judge and to be subject to high levels of transparency and post hoc oversight, so that they can be reviewed.

Because one person's misinformation is another person's legitimate protest.  So the attempt to shut down a WhatsApp group, a particular WhatsApp message or all of WhatsApp at a time of protest should be considered with a great deal of seriousness in any democracy.

Final point, to underline that: obviously this is not just about one company. It needs to involve not just one dominant, or several dominant -- Facebook -- oops -- social media platforms.

It needs to involve all potential stakeholders and be trusted by all of those involved in that industry in any given democracy.

Facebook, I'm sorry.  You can't do it alone.

>> MODERATOR: We have three minutes left; maybe there is time for two more questions, one from each side. The gentleman here.

>> AUDIENCE: Thank you. Rasmussen, from the Art Institute at the university. Miranda, thank you for being here; it is important that you are here and step up in this way. Thanks for doing that. You said in your opening remarks that Facebook is reducing the virality and spread of disinformation and false news. My question is -- and I'm not questioning your good faith, or that the company from the very top wants to address these issues -- given that you initially, in 2016, said this was a minimal problem if a problem at all, and that you have a record -- I think we can say an imperfect record -- on your metrics and sharing of data, why, fundamentally, should people trust you to mark your own homework? Isn't this like BP saying they're reducing carbon emissions, or McDonald's saying they are improving public health, with no independent oversight?

>> AUDIENCE: Thank you very much. Miriam, [?]. Thank you for the interesting contributions. We are responding to some of the comments about the absence of the journalism and media community from these debates by launching a new Dynamic Coalition on the sustainability of journalism and news media tomorrow, here at the IGF, within the system, and we hope to contribute to these conversations much more in the future. I unfortunately have three questions. One is for Sebastian, in relation to your mention that 90% of manipulation comes from ad fraud and not from malicious actors engaged in intentional disinformation. There is an institute estimate that around 10 billion USD annually goes to manipulation in the problematic ad space. You said that little has been done to combat meta manipulation and identify bots, and all of these mechanisms feed into this. Do you think we can really address the problem of misinformation in the digital space without properly addressing this, and without properly addressing the incentives in the advertising space that make money for malicious actors? If you are a small person starting a business, you would rather go into the content factories that make you money on certain platforms than into journalism, which doesn't make you any money. My question is: how do you address that?

I will skip the question for Miranda. But to Damian: what policy initiatives do you see in terms of affirmative action for supporting not just journalism and news media but positive information spaces, maybe in Europe, and what can be done in this regard? What are the best practices you have seen so far? I have a couple of questions for Miranda as well, but maybe we can do that later, in the session tomorrow. Thank you.

>> MIRANDA SISSONS: So I don't know that I have an easy answer, a pat answer, to whether people should trust Facebook. Of course, people shouldn't necessarily trust any particular company. We are reporting, for example, on our misinformation efforts under the European code. We are reporting quite a lot, and I totally take the point that we should standardize measures and look at what is meaningful. In the last 18 months, Facebook has begun to report quite a lot of different, and continually improving, data on takedowns under the community standards -- with a separate set of data for Facebook and now for Instagram -- and on takedowns pursuant to Government requests. The methodology for those has been independently scrutinized by a particular Yale multistakeholder academic group, which probably has an impressive name that I can't remember, to check that the data and the verifications were okay.

I can't convince you that you should -- probably nobody should trust any particular company -- but we need to be judged on what we have done and what impact it has.

And, I think rightfully, all the public relations material from Facebook now says: we were too slow to act. And we were. Now we need to act, and people should scrutinize that.

Test everyone you meet. Test the data. But in doing this, we need to frame: what does an adequate response look like? What is an effective response? What are the deepest ecosystem interventions we can make?

[Audio skipping].

>> SEBASTIAN BAY: I saw that one notorious Russian outlet had used commercial services -- used them to strengthen its tools and employ more developers -- because they make money from ad industry fraud, which is then used to leverage tyranny against democracy. If we can't get rid of the financial incentives, we will not get rid of the other part either.

>> DAMIAN TAMBINI: I started by asking for a systemic perspective, and we certainly have one on the panel. We need to think about the business model and the entire political economy behind it in order to deal with this systemically. I was asked about policy initiatives. In that systemic sense, we have somehow evolved a business model -- for social media in particular -- which has not been optimized for democracy. It has been optimized for various forms of noise and virality and the reinforcement of behavioral biases -- things that are maybe not so good for democracy -- and any policy solutions need to address that at a systemic level.

I think we need to optimize, in a sense, for truth -- whilst accepting that there will always be contestation about that, about how truth is arrived at, and questions about who decides the standards, et cetera. But some of these things are now beginning to be spoken about, and there are interesting policy proposals out there. I mentioned the U.K. Online Harms White Paper, which may lead to legislation next year. Many freedom of expression organizations have come out against it, because it is seen as a centralized approach to deciding what constitutes truth and information. But I think it is possible to evolve structures and processes that are independent of Government and can achieve, through transparency and Civil Society involvement, sufficient trust in the overall framework. We're at a stage in the U.K. where it could go either way.

I think it is important to think about funding -- funding the good stuff, including public service media -- and about media literacy, which we haven't spoken about a great deal. There is a more radical approach to literacy that involves linking up filters with literacy and education, which we could go into. All of this needs to be linked to incentives: following the money, the ad exchanges, changing the business models, and the overall antitrust framework. If we look historically, particularly at the postwar period, in the wake of the authoritarian failure of democracy, we can see that there were periods, for newspapers and for broadcasting, of deep societal debate on how to evolve those institutions. I think we are entering one of those periods.

It is going to take longer than a lot of people think. But it is going to need to address the entire business model, and it is going to need to use all of the different policy levers. Thank you.

>> MODERATOR: Thanks a lot, Damian.  Unfortunately, we're a few minutes over time.  I will have to close the panel here.  Thank you very much for your questions.  Thank you very much for your replies.  Please join me in a round of applause for the panelists.

[Applause]