
IGF 2020 - Day 4 - OF25 Freedom Online Coalition Open Forum

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

>> MODERATOR: We have a couple of minutes to go; officially, our panel kicks off in five minutes.  We'll give it another couple of minutes and then we'll kick off.

>> TECH SUPPORT: Good afternoon, good morning, good evening.  I'm providing the tech support.  This meeting is being recorded and is hosted under the IGF code of conduct and the U.N. rules and regulations.  Anyone who needs to speak will have to unmute themselves, but if they are having any issues they can indicate that in the chat and I'll gladly unmute them.  That is all for now.  Please enjoy your session.

>> MODERATOR: Thank you, Raymond.  I hope everyone is clear on the rules.  We're now ready to go and start our panel.  I'd like to welcome all participants to this open forum session of the Freedom Online Coalition at the IGF.  We have an hour for our discussion.  My name is Lea Kaspar, and I'm moderating this conversation.

 [ Inaudible ]

Again, this time in a virtual format, so I hope we'll be able to recreate some of the interactivity the IGF is known for.  I'd like to encourage our participants to share questions with us.  There's a Q&A function that you should have available.  You can use that function to post your questions, or you can use the chat for general comments.  I'll be monitoring both and will try to integrate them into our discussion to the extent possible.

So with that, let's get to the core of why we're meant to be here today.  For those of you who are uninitiated and unfamiliar with the Freedom Online Coalition, just a couple of words on what the FOC is.  The Freedom Online Coalition is an intergovernmental coalition, of 32 members at the moment, established in 2011 at its inaugural conference in The Hague.  It's committed to advancing internet freedom, including free expression, free association and assembly, and the right to privacy worldwide. Now, one of the key priorities of the FOC is to shape global norms in a way that supports its mission through joint action.  One of these actions is drafting and developing joint statements and then leveraging the language of these statements in key diplomatic forums, as well as bilaterally among members and the broader community.

In the past the FOC has issued statements on a number of topics relevant to the work of the coalition, including network disruptions, internet censorship, digital inclusion and the protection of human rights online in the context of cybersecurity.  The most recent statement released by the FOC, in June, was about COVID‑19.  The coalition called for measures addressing the pandemic to be compliant with international human rights law.

One of the topics the coalition highlighted was disinformation, where it expressed its concern with the spread of disinformation online and with activity that seeks to leverage the COVID‑19 pandemic with malign intent.  I think we're going to use that as a starting point for this discussion.  What we've decided to talk about specifically is the way in which the coalition is working to address and tackle the issue of disinformation and misinformation.  There are other things the coalition is working on, but for the time that we have allocated today, we're going to focus on this particular topic.

What I'd like to do first is go to my panelists for a set of opening remarks and reflections on why the FOC has decided to deal with this topic and why it sees this as a timely conversation to be had in the context of the coalition, and specifically why the coalition has decided to develop a joint statement on the topic of disinformation.

Now, I have with me three panelists: two government speakers and a representative from the corporate sector.  We have Rauno Merisaari from Finland; Campbell Davis, the head of digital policy in the cyber policy department at the Foreign, Commonwealth and Development Office in the UK; and Akua Gyekye, regional lead for strategic response at Facebook.  Sorry, I did my best with the pronunciation there.  I hope I did not go too far off.

With that, I'd like to turn to our government speakers first.  The reason for that is that these two governments are the members of the coalition who have decided to take the lead in developing the statement on disinformation, and I'd like to get their impressions and sense of why they've decided to do that and why they think this is an important topic to discuss in the context of the coalition.

Rauno, would you like to go first?  I want to come to you first as the incoming chair of the coalition for next year.  What was behind your government's decision to lead the process of developing this statement?  Rauno, the floor is yours.

>> RAUNO MERISAARI: Thank you, Lea, I hope you can hear me.  Yeah.  So when we consulted with our colleagues, we really discussed the need for this statement.  As we know, as Lea said, the Freedom Online Coalition's aim is to promote human rights online.  Well, disinformation itself is not a human rights violation, nor a crime.  It shouldn't be a crime, because the criminalization of disinformation would lead to the silencing of critical voices and restrict the diversity of information, et cetera.

Disinformation, however, undermines and prevents the enjoyment of many human rights, and trust in democratic institutions.  And that is why the Freedom Online Coalition is concerned.  Not only in the economy but in the whole of society, the right to seek and access information is essential for development, preventing corruption, et cetera. We also know that factual information and the right to information are essential to protect public health and save lives, as we have seen in the current COVID‑19 pandemic.  As Lea said already, we produced a statement about this issue.

And my government, we are worried about discrimination and stigmas.  Disinformation can be used as a tool to incite hatred and violence.  Often the most vulnerable groups and persons are the targets of this kind of net aggression.  So I think these are some of the reasons we started this work.  Perhaps we'll now hear something from the British point of view.  Thank you.

>> MODERATOR: Thanks, Rauno.  Campbell, Rauno already mentioned the link between disinformation and misinformation campaigns as they relate to democracy, to public health and safety, and to the potential implications for marginalized groups, and how that might be another aspect of why this issue is so important for the coalition to tackle. What was the thinking behind your involvement on this topic?

>> CAMPBELL DAVIS: First of all, thank you for organizing this and allowing us to participate.  Rauno, thank you very much for those comments as well.

I'm enjoying a lot of the IGF sessions.  You see time and again that disinformation features across the range of the work, which speaks to what an important and salient topic it is.  Of course, it's not a new phenomenon.  Malicious rumors have always traveled faster than the truth.  However, the change in the media environment means that disinformation can now spread faster than ever and to more people than ever.  And the rise of disinformation and the multiple threats it poses mean that we collectively must respond. Of course, the COVID‑19 pandemic and the resulting increase in online activity have highlighted that challenge even more clearly over the course of this year.  And the election that is still ongoing in the United States is only the most recent occasion on which disinformation has been a topic of concern.

So the UK government is committed to ensuring that people have access to accurate information, and ultimately that the public are equipped to make their own decisions with confidence in that information.  And so for us in the UK domestically, we're developing a comprehensive policy response to tackle misinformation, working closely with civil society, academia, technology platforms and others.  We've developed a checklist for members of the public to use to increase audience resilience, educating and empowering people who see and potentially share information.

And we're working with strategic communication experts as well, so we're able to respond through communication campaigns where necessary.  And as I said, crucially, of course, we're engaging with technology platforms regularly, working closely with them to help identify and take action to remove incorrect, dangerous or misleading information which breaches their terms and conditions; and to help us understand the spread of disinformation and misinformation on platforms and the steps they themselves are taking to address it. I think it's clear that more work is needed on the scale, the scope and the impact of online manipulation, including disinformation.  So the UK government is undertaking a wide‑ranging program of research to explore those issues, and engaging widely with academia and civil society, who have a huge amount of expertise in that space.

Obviously a huge part of this response isn't domestic, it's international.  And it involves working with many stakeholders to build a better understanding of how the techniques used as part of disinformation campaigns operate, and of best practice for how we respond.  Fundamentally, disinformation is clearly a multinational and multistakeholder issue, and as such an important one for the FOC to address. That brings us really to the core of the work that we're doing together with Finland on the statement.  Because the threat to the enjoyment of human rights is growing in reach and sophistication when we look at disinformation.  It affects all societies globally and it affects our democratic values.  It can affect public health, as we've seen clearly throughout the COVID‑19 pandemic.

Importantly, disinformation has the potential to impact a range of human rights: freedom of opinion and expression, the right to take part in the conduct of public affairs and elections, protections against discrimination, the right to health, the right to education.  And crucially, while we take action domestically and with international partners to address misinformation online, we'll ensure that freedom of expression and human rights online are protected; particularly freedom of expression.  Freedom of expression and a free media are essential for any functioning democracy.  People must be allowed to discuss and debate issues freely, and as such the UK is dedicated to protecting freedom of expression in alignment with our democratic values.

To wrap up there, the threat of disinformation clearly requires a multinational and multistakeholder response.  It's not only about what governments should and should not do.  So the coalition is an important forum that brings together all of these different voices, government, industry, civil society, academia, et cetera, and brings a wealth of expertise from which my government, and everybody collectively, benefits. So that, I think, is the principal rationale for why we're doing this work, and indeed for today's session, for which, again, I thank you for organizing.  So I shall leave it there.

>> MODERATOR: Thank you, Campbell.  You mentioned a couple of things I'd like to come back to later in this discussion, including some of the measures that your government is taking to tackle this issue.  I think it will be interesting to dig a little deeper; specifically, you mentioned digital literacy and how important that is, and I think that's certainly part of the toolbox of measures we need to keep in mind and implement.

Before we come back to measures and solutions, I still want to unpack a little bit why we're even discussing this and why this is a timely topic. Akua, you are working with Facebook.  And for people's awareness, Facebook is a member of the FOC advisory network, which is a multistakeholder group of nongovernmental stakeholders engaging in the work of the coalition.  Facebook has been involved in the network.  I'd like to hear your views on why you think this is an important topic generally, and also for the FOC to be involved in, from your perspective.  Go ahead.

>> AKUA GYEKYE: Thank you so much.  I'm grateful for the opportunity to be part of this panel and to give you a little bit of context as to how Facebook is thinking about this very important topic. To put it a bit more into perspective, we nowadays have over 3.1 billion people connecting on Facebook on a monthly basis around the world.  My particular area of expertise covers Sub‑Saharan Africa, as I lead our election integrity work there.  There we have over 176 million people who connect to their family, friends and communities on a regular basis.

The company recognizes, as one of its values, the need to ensure that people are safe when they connect online.  As much as the company believes that free expression is fundamental, we also have to recognize that there are limitations to free speech, and making sure that people can engage in a safe manner is really important.

Part of that conversation, of course, is about how we combat misinformation and false news, because it can affect things like important elections and other conversations, so we're doing quite a bit.  I think it's important to keep in mind there's no silver bullet when it comes to combating misinformation.  There's quite a lot that the company has done and continues to do; not by itself, but really in collaboration with other partners.  So I'm grateful to hear, also, from our government partners about their approaches, because I think the beauty of collaboration in panels like this is always to look for opportunities for how we can further collaborate and make sure that issues can be addressed and solutions really get to people.

When it comes to misinformation more specifically, we have a policy that focuses on remove, reduce and inform.  When you think about the first part, which is removing content, it really is content that violates what we deem to be our community standards.  It's important to note that it's not the wild, wild west when it comes to Facebook.  We do have policies; we set community standards that say what's allowed and what's not allowed on Facebook, and these are values that are global.  They're not stagnant; they continue to be updated.  And they really focus on authenticity, safety, privacy and dignity.

As you can tell, they're guided by international human rights and really relevant for this topic.  But when it comes to misinformation, we recognize that even though such posts don't violate our community standards per se, they undermine the healthy conversations that we want people to have.  We want people to see accurate information.  Fact checkers are organizations that do that type of work day in and day out.  They look for content online, they do the research and they figure out whether something is true, partly true or false. In partnership with us, they then inform Facebook what content online should be deemed false, and we do our part and label such content as false.  We have over 70 partners around the world; in Sub‑Saharan Africa we cover over 18 countries, and it's an ongoing program and we always look for more partners.

I've been with Facebook for five years, and I've been excited to see that network grow, both in terms of countries but also local languages.  Sometimes you have to realize that false news might not spread only in English or French.  I'm from Ghana, so the more we can expand the coverage there, the more we're working in the right direction.  And what we've seen is that once we get that information from the third‑party fact checkers, we place warning labels on content to let people know: hey, this has been fact checked by a third‑party fact checker; you might not want to share this with your family and friends.

The good thing is that research has shown that the majority of people who see such warning labels don't then engage with the content and, most importantly, don't contribute to sharing it.  Because nobody wants to be the source of false news, right?  And I think that's a big part of it.

At the same time, to the point that was raised earlier, digital literacy is an important component.  We do a lot of outreach on radio stations, oftentimes in native languages, to make people aware and get them to think: hey, what kind of content is this?  How is it making me feel?  What's the source?  If I feel strongly, should I really be sharing it with my family and friends?  Has it been forwarded by many people?  What's the original source? Getting people more into the habit of asking these questions, doing additional research, and being clear about content they read online, or anywhere else really, is another important aspect of tackling misinformation.

Lastly, I would say from my perspective, we're quite busy at the moment: the U.S. has had an election, Ghana, my own country, is looking at an election, and we had elections in the Seychelles as well.  The topic of misinformation is important to think about around elections.  Based on feedback and conversations like the one we're having today, we updated our policy a while back so that misinformation containing false voter information is completely removed.  If somebody says the election is taking place on a Tuesday, even though it's taking place on a Wednesday, we remove that content proactively, not waiting for people to report it to us, because we deem it to violate our voter suppression policies.

We also look at misinformation that has a high chance of leading to offline harm.  When there are tensions in a country and excitement is there, emotions run quite high, which is not an issue unique to Sub‑Saharan Africa; we see this around the world.  We have teams internally, as well as external partners, who make an assessment from a human rights perspective and otherwise to see if information can lead to imminent harm to people.  If so, we want to remove it from our platform.  We don't want to contribute to any kind of tensions or to people getting hurt.  There, I think, safety will always trump freedom of expression.

I hope that gives you a bit of context on how the company thinks about misinformation, why it's an important topic that we are involved in, and also how we rely on partnerships, making sure that we both educate people on how to better spot misinformation and do our part to make sure that people don't continue to see and engage with such content.

>> MODERATOR: Thank you for that and for sharing some of the measures that companies are putting in place, in this case Facebook in particular.  There are others, obviously; there's a lot happening as well on Twitter, which has been much more proactive in recent weeks.  I think everyone will agree we've definitely seen a step change when it comes to the proactive measures that companies and platforms are taking, compared to a couple of years ago when this topic became more prominent on the agenda.

I'd like to come back to that and the role of platforms more generally.  But listening to you, what I am thinking is there's a lot of agreement.  You know, there's a lot of good stuff happening.  Platforms are taking measures, governments are taking measures to tackle this.  We all agree this is an issue.  Someone who is not familiar with the debate might think: well, we all agree on everything.  What's really the problem?

And for those who are unaware, the coalition is in the process of developing a statement on this topic.  This has been going on for nearly a year; we've been developing language on this issue for quite some time.  I'd like to check in on what makes this topic really so challenging to find agreement on.  What is the crux of the issue at hand?  Is it the scale?  It's not a new topic; we've had disinformation campaigns around for a very long time. So I just want to get a sense.  Maybe first, Campbell, if you don't mind, I'd like to come to you for your reflections on what some of the difficulties in developing the statement on the FOC side have been, and why this topic is so challenging.

>> CAMPBELL DAVIS: Thanks, Lea, it's a good question.  You referenced the statement that we've been working on for some time.  And dare I say, oftentimes government can take its time to produce things like this.  Specifically, when you look at a topic like disinformation, in part simply due to its relative novelty and the fact that it's increased in scale so much in a relatively short space of time, it is naturally difficult for consensus to be reached quickly, not only between different governments and stakeholders, but even within organizations, as to what exactly is the right approach and how to factor in all the various considerations that one needs to make.  It's complicated, it's evolving and challenging, and reaching consensus can take some time.

If we look within my own organization in the UK government, as I said earlier, we're developing a comprehensive response to this.  That involves bringing together parts of government which work on cybersecurity and digital policy, online harms, media literacy, et cetera.  There's a huge range of actors involved, all with slightly different sets of interests and priorities. And so naturally it takes time to work that out, even at a single organizational level.  Of course, scaling that up to something multinational and multistakeholder involves a further layer and level of challenge.

So I feel there's quite a lot of challenge simply from the novelty, the scale and the range of factors involved.  And as I say, I speak from a government perspective, but of course this is not just about governments.  I think we, taking the lead on this together, are keen that this involves a very wide range of stakeholders.  Ultimately, not only is that key to determining the right approaches and the right ways to take this forward, it is key to delivering the outcomes that I think collectively we all want to see.  This is not something that any single government or any group of governments can deliver entirely on their own.  It requires cooperation and collaboration amongst a huge range of stakeholders, and our colleagues at the leading technology platforms are amongst them.

So that, I think, helps to explain why it's not been a very rapid process, although it would be fair to say a lot of progress has been made on the statement.  Ultimately, when we do eventually reach consensus and we're able to put the statement together, what we're hoping to do through this is to begin to build global norms and consensus around these challenges and around the ways we begin to think about them and take them forward. I think that in itself is going to be an important part of helping to structure this debate and begin to draw people together in ways that will be useful and fruitful in terms of responding to the challenge which, again, we all recognize is there.

>> MODERATOR: Thank you.  Rauno or Akua, would you like to come in on that?  Rauno, I see you're trying to intervene; please go ahead.

>> RAUNO MERISAARI: Thank you, Lea.  Yeah, just adding something to what Campbell said; I fully agree with what you said.  This really has been a wonderful year with this issue.  I've learned a lot; I'm afraid I'm learning something new every day, because this is a kind of moving target.  So maybe that's one reason this process has taken such a long time.

I believe we have no fundamental differences in approach.  Everybody understands that the spreading of disinformation does harm, and we know that we should combat disinformation.  As Campbell said, this is not a new issue.  For instance, in my country, given our geopolitical situation, we have witnessed Russian propaganda for at least more than 100 years.

What is somewhat new is that we are now understanding how much this phenomenon affects the enjoyment of human rights: not only how much it can harm reputations, not only that this could be, and is, a security issue, but how much it also affects, for instance, the right to health.  I'm not sure if this is the first intergovernmental attempt to shape a kind of normative tool for this issue.

Then there are certain specific questions that are quite complicated; for instance, metadata or algorithms, how much we can regulate the use of algorithms and how transparent the issue shall be, et cetera.  So there are many questions.  Having said that, I'm fully confident we are able to finalize this statement and that it will be adopted by the member states.  Thank you, Lea.

>> MODERATOR: Thanks, Rauno.  Now, it sounds like one of the things that is new and that makes this complicated, and there were a couple of things I heard there, is the role of the platforms, Akua.  That's not something we've had in the past, and it presents part of the challenge of dealing with this.  I want to get your sense of why this is a challenge and what's new about it.

As you've noted, and as we touched on before, the proactive measures that the platforms are now taking have been broadly welcomed by the community at large in terms of tackling this issue.  However, there are concerns that you hear around the mandate that the platforms have to do this and to police free speech.  That's certainly one question.  The other question that is raised is, even if we agree that platforms do have a role in doing this, what about transparency and accountability, and how do we ensure that those measures are implemented in a way that is in line with those principles and, more generally, with human rights law and standards?

I kind of want to get your sense; sorry to put you on the spot, but I think it's good to have you here so we get that feedback and have that conversation.  Do you think this more proactive role that the platforms are playing is appropriate, and what would be some of your recommendations for what should be done better?

>> AKUA GYEKYE: I'm grateful for the questions.  A few things come to mind.  I heard someone ask me before why it is so easy for Facebook to remove COVID‑related misinformation as opposed to other fake news.  It's because we have a source of authority dictating to us what is in bounds and what is not in bounds when it comes to misinformation about COVID.  Right?  We get guidance from the WHO.

The same is not really true for political speech, and I think that's why we're calling for regulation to set these clear rules of the road, so private companies don't have to make the call.  But having been with this company for five years, I do think we're not a company that will sit on our hands and wait for things to happen.  In the meantime the company does have a responsibility, and it takes it seriously.  Through the third‑party fact checking program, we're sharing that responsibility.  We shouldn't be the arbiter of truth, but we work with organizations that are independent; they've been certified by the International Fact‑Checking Network.  It's non‑biased and has the relevant accreditation to do this type of research and to be able to look into which things are fake and which things are not.

I think one has to be really specific.  If you see an image that's used out of context because it was used in another country before, that's very easy for independent third‑party fact checkers to review.  If it requires a bit more research, they might need more time to look into the story and do the fact checking before they can come back and let us know that a story is false or partly false.  Sometimes people use humor and satire.  These are all things that make it a bit more complicated, and I think there is no one easy fix.

In regards to the issue of transparency, I couldn't agree more.  One of the areas that we are also focusing on to tackle misinformation is taking down fake accounts.  Fake accounts, we've noticed, are one of the big sources of mischief, as I like to say, especially in the context of misinformation. We're very public about the number of accounts we take down, both proactively, based on our AI technology, and reactively, based on reports people send us.  There is no secret; it's the opposite.  There's transparency in regards to what we do.

The same is true for removing networks of bad actors.  Right?  Who is behind the misinformation?  Do we have a network of coordinated inauthentic behavior, as we call it, which is not allowed on Facebook?  We have threat intelligence, info ops and quite a few other research teams that work with governments, security agencies and other tech companies to look for these inauthentic actors.  Once a network is identified, we post every single time on our newsroom site, and there you find information about who is behind the network, what kind of content they posted, and which countries were targeted in their attempt to undermine maybe healthy political debate.

So we are very transparent, and I would actually recommend everyone check out, you know, has my country been affected, what kind of content has Facebook removed, how did we fan out and take down related groups, pages or profiles.  I couldn't agree with you more: we definitely have a responsibility to let people know how we're approaching this, and we do so on an ongoing basis. Specifically for Africa, just last week we did a newsroom post, it's in my name, about how we go about supporting elections in Sub‑Saharan Africa.  Tackling misinformation is a huge aspect of that.  We share information about who these third‑party fact checkers are, so that's no secret either, as well as the body that certifies them.

The issues you raise are all very valid, but I wanted to provide comfort that these are all things we're thinking about on a regular basis, and that we're actually looking for more guidance as to what else we should be doing in this space.  In the meantime, we're not sitting still.  We're doing as much as we can.  We invest a lot in terms of people.  Compared to when I started, we have now tripled the number of employees who focus on safety and security.  We have 35,000 people who do this day in and day out.  Out of those, 15,000 are content reviewers.  They look at the type of information that is shared online, in different languages, 24/7 around the clock. So is there an easy fix?  Can we say we're done now?  Of course not.  But I think it shows how seriously this company is taking the commitment to address false news.

>> MODERATOR: There are a couple of things I'd like to draw on there.  Thank you for sharing those views.  I see a couple of questions in the Q&A, and one of them is asking about how this issue is being tackled across the world.

I think often when we look at this issue, because the platforms are based in the U.S., there's a question of what the appropriate role is and how this issue is being tackled in different regions.  So I'd like to come back to that question, but first I want to pick up on something you said about calling for government regulation.  I want to come to our government speakers here.

Just to reflect on something that hasn't been mentioned yet: okay, great, we're calling for more to be done by platforms, regulation is being developed across the world, countries are doing this.  But one of the risks that has occurred, and is very real in a number of countries ‑‑ in, I would say, less democratically bent countries ‑‑ is that we've seen efforts by governments to use disinformation as a justification to deal with political opponents.  To, you know, clamp down on free speech, independent journalists, et cetera.

I mean, all is well and good in asking governments to put in place more regulatory and legal measures, but what if those measures end up undermining our rights?  I just want to see if Rauno or Campbell have reflections on that.  If we do put rules and regulations in place, how do we make sure they're rights respecting?

>> RAUNO MERISAARI: Yeah, thank you, Lea.  What we can do at the international level is try to find some common ground.  Our approach is in some way based on international human rights law.  We know that this issue ‑‑ the spreading of information ‑‑ can also be assessed on the basis of existing international law, and there are certain principles.  Measures must be lawful, and there must be some kind of proportionality if you're removing information, et cetera.

But you're right.  Then there are differences at the national level.  Our tradition protects freedom of speech very strongly.  We have quite a few cases where, for instance, some websites have been closed by court decision ‑‑ some neo‑Nazi websites, et cetera.  We're following what happens in Europe, because we know that in Europe there is a kind of normative process on this issue.  But then we also believe that this must happen in a way that the industry is really committed to doing something.  Maybe I'll stop here.

>> MODERATOR: I don't know, Campbell, if you wanted to comment on this.

>> CAMPBELL DAVIS: I absolutely agree with Rauno there.  I think the fundamental starting point is that human rights online are the same as human rights offline, and really, we shouldn't be drawing a distinction.  We should be working in the same way and upholding the same rights and freedoms.  After all, they are universal.  Online or offline, it shouldn't make a difference.

As Rauno says, in our work with our international partners, bilaterally and multilaterally, we're clear about the importance of ensuring that work undertaken to counter disinformation is proportionate and doesn't affect freedoms and human rights, freedom of expression, et cetera, and that the actions taken are not disproportionate.  I think you're right.  We do see some actors perhaps using this as an excuse or cover to do that.  So we need to be clear in our own actions, in our conversations, and in the work that we do ‑‑ for example, in shaping norms and shaping the kind of statements and dialogue that we have between countries and with our multistakeholder partners ‑‑ that this is a fundamental aspect, and it shouldn't be used as a cover for attempts to reduce freedoms.

>> MODERATOR: So what I'm hearing is a combination of things.  One is building international consensus around the topic and working with partners around the world, but also setting good examples of how you do it nationally.  I think that matters, because what is happening is that we see a lot of copycats.  Once Europe leads and starts implementing a regulatory approach, all of a sudden you have a disinformation law or fake news law in this country or that country.

Akua, before I come to you, I want to acknowledge a couple of questions, and I want to encourage people to post their questions in the Q&A.  I have a couple that are addressed directly to you, Akua, I think, related to the role of the platforms.  We've already tackled a question from Leslie about what Facebook is doing when it comes to providing transparency to more individuals and making the process accessible in more languages.

That relates, not directly but in spirit, to a question Madeleine was posing about a more differentiated regional approach to tackling disinformation.  Maybe this is unfair, but I want to merge those questions and discuss a little bit how we deal with this topic around the world and some of the challenges of dealing with it across different jurisdictions.  I'd like to get your sense on that.

>> AKUA GYEKYE: Thank you for the question.  I can speak to our approach in Sub‑Saharan Africa simply because that's my area of expertise, and that's a whole continent.  I think it also tackles the broader point, in that our approach to tackling misinformation is a global one.  There again, the partnership with our third‑party fact checkers is critical.  We have them around the globe, right?  They're not just situated in the U.S.

I think your point holds true, and that's why I think Facebook is not the right entity to decide what is true and what is false online.  You need local context and understanding; something that is a joke in one country might be highly sensitive in another country.  In order to figure out what is true or false, you need an understanding of the local context.  And there the independent third‑party fact checkers sit in those countries.  They speak those languages.  They have networks with journalists, academia and others from those countries or regions ‑‑ organizations such as Africa Check, AFP, France 24 and a few more we're partnering with.  That's what they do day in and day out.  They think about who to hire, who has expertise in the context of a given country, and how to make sure they upgrade their teams with more people who have expertise on the local context.

So I think the point in the question is very well taken, and that's exactly the sort of push I've worked on internally, along with other teams.  And I've seen the progress we've made: we started out with a handful of countries, and now we cover ‑‑ I said 18 before, it's actually over 20 countries in Sub‑Saharan Africa, and multiple languages as well.  You can rest assured that the more we can do that type of work and find local partners we can actually work with, the more we can make sure that it works on a global scale.

I think the one constraint that we face is that we want these third‑party fact checkers to be certified by the international network I mentioned, and that process takes a little bit of time.  It's a good, rigorous process, because you want to make sure the entity is trusted and unbiased, and that it adheres to a code of conduct.  But our call to organizations that do that work is to start with that process.  Get certified by this network so we can partner with you going forward to help us spot false news and let people know about it.  And then ‑‑ that's another great job these organizations do ‑‑ they share related articles so people have more context about the news.  They can look more into why somebody rated it as false and further educate themselves. So it's almost a call to action: if you're in this space, definitely look into this network and how your organization can become certified.

>> MODERATOR: Thanks for elaborating and taking that question on.  I have a couple of other questions in the chat that I want to bring to the attention of the speakers.  One attendee asks whether panelists can explain in more detail what, in their opinion, is the right division of labor between governments and the private sector when it comes to addressing disinformation. This goes to the question of roles and responsibilities that we've already touched upon, but if anyone would like to comment on it, please do.

The other question I see is an interesting one, which picks up on the question of proportionality.  We were discussing where you draw the line, and both Rauno and Campbell mentioned proportionality as an important principle when it comes to assessing limitations on some of our rights ‑‑ specifically under Article 19, which outlines the limitations on freedom of speech.

Certainly, this is not an absolute right and limitations can be imposed.  The way we check for this is the three‑part test, and proportionality is one of the questions that we ask: are the measures in response to the problem we're trying to address proportionate to the potential harm that they could cause?

So let me read the question out to make sure everyone has it in front of them.  One part, maybe a question for Facebook, concerns the provisions on proportionality: are there any notes or elaboration on when content is taken down?  What is being considered, and is it formalized in any way?

And then there is a comment on whether proportionality is being used in a manner that causes an undue limitation of rights.  I think the question is probably broader.  I'd be interested to hear how in the UK or Finland you've been dealing with this topic and anything you'd like to share, and then we'll come back to Facebook for any reflections, taking into consideration the other question we've had as well.  I don't know, Rauno or Campbell, if you want to come in.

>> RAUNO MERISAARI: I guess I'll start, on the responsibilities of governments and the private sector.  Of course, governments are state parties to the international covenants, so we have legal obligations. The case law so far is not so rich, we would say.  We have a case about the Russian Sputnik radio station and how disinformation was used to violate one person's rights in Germany.  Ultimately, restrictions will in some way be tested in court and in legal terms.  We hope for voluntary ways to apply this proportionality.  On the legal side, it's quite hard.

>> MODERATOR: Is there anything else you want to add?  Looking at the clock, we have six minutes to go.  I'll do another round just for closing, but on this topic I want to check whether you have quick thoughts before I go to Akua and then we do a round of wrap‑up.

>> CAMPBELL DAVIS: I'll be brief.  I thought Rauno's explanation there of the situation in Finland was interesting.  He's absolutely right: the principle here should be one of cooperation and collaboration, not only between governments and civil society, with the right roles and responsibilities for this.

I think the immediate example of the COVID‑19 pandemic is a good one.  You know, it makes it a little easier to work out what the rights and responsibilities are, and I think it's a useful blueprint, not only for thinking through those responsibilities.  I think it has helped to set up a lot of very useful channels for communication that we can then hopefully extend more broadly into other areas.  Akua mentioned the importance of governments setting regulation.  I won't go into this in depth, but last year the UK government published a white paper on our approach to online harms, and we hope that in due course that will give the UK a framework, putting us hopefully at the front of what governments around the world are doing to tackle a range of online harms.

One of the issues covered in that is misinformation, on which in due course there will be further work coming out.  That thinking is ongoing in the UK, including internally in the work we do with platforms.  We have quite elaborate processes and protocols that set out, for us, how we work together with platforms.  There has been a good dialogue with platforms on that, again specifically on the COVID‑19 work right from the beginning.  Most recently, yesterday our Secretaries of State for Digital, Culture, Media and Sport and for Health and Social Care had a round table with the major platforms, the latest stage in that work.  It is, again, a very useful blueprint.  This has been where we have been working with the platforms to think of the best ways to help shape those processes so they deliver what we want to see here, and what input is needed from government on that. That is a continuing process.  As I say, that specific example is a very good blueprint and a helpful way for us to begin to take this work forward.

>> MODERATOR: Thanks, Campbell.  Akua, I'm going to come to you.  We don't have a lot of time left, so if you don't mind, please also make this your final comment.  Apologies to everyone whose question has not been addressed, and thanks to everyone who posted questions in the chat.

I guess the one remaining question I have is: what should the FOC do next?  We're here in an FOC open forum.  What do you think the FOC should do?  I will come to Rauno, as the incoming chair, for your thoughts on that.  But I want to give you, Akua, a chance to address a couple of points that have been raised in the discussion so far, and then I'm going to come to Rauno for closing.

>> AKUA GYEKYE: Thank you for that.  One thing one has to take into account is that Facebook's commitment to free expression is paramount.  At the same time, it's really important that we keep people safe online.  There's an inherent tension there, and the company won't always get it right.  That being said, we are guided by a set of principles.  There are community standards, as I mentioned earlier, and I would really ask people to have a look to better understand how we think about what is allowed and what is not allowed.

How do we respond to content that is posted and violates these community standards?  We have reports on how much of that content we take down, so there is transparency in that regard.  But it's not a perfect system, and based on that feedback there have been multiple initiatives to help people appeal decisions.

If it's in regard to false news, as a publisher you have the right to approach the fact checker and ask for an appeal, to make sure they have access to information they may not have had when they made that rating.  Facebook also recently launched our independent Oversight Board.  They're supposed to be thinking about the really tough questions of proportionality in content decisions that might be difficult, especially in striking that healthy balance between safety and free expression on a global scale.

And I think that also has to be taken into account.  For my last words, I just want to say thank you for the opportunity to participate.  My ask is always one of collaboration, right?  We can't do any of this work on our own.  We're always excited about additional partnerships, be that with governments, CSOs, NGOs or media.  So if there is more interest in what we're doing or how we can collaborate, I'd be happy to follow up via the organizers to make sure people have access to that information.

>> MODERATOR: Thanks so much.  I know we're at the top of the hour.  Rauno, a quick word on your plans for the FOC, and then I'll close the session.  Rauno?

>> RAUNO MERISAARI: Thanks.  The next step is to finish the statement.  We haven't made any decisions on how to deliver it or use it.  Normally we use the language of the statements in international forums, and we hope to do so here as well.  I hope that we are also able to continue these kinds of open forum sessions with all of you.  And then at the end of next year, Finland is going to host the next FOC conference.  Hopefully we can see many of you there in person.  Thank you.

>> MODERATOR: Thanks, Rauno.  Campbell, apologies; I think those were the closing remarks on behalf of the FOC from Rauno there.  I would like to thank all my speakers for their frank and open remarks and for joining us in this discussion, and all the attendees for their questions and comments.  Apologies to everyone whose questions perhaps weren't answered, but we hope this is just the beginning of this conversation.  We look forward to continuing the dialogue, and thank you very much to the host, the IGF, for having us. So with that, thank you, everyone, and have a lovely rest of the day.

 
