IGF 2019 – Day 2 – Saal Europa – WS #177 Tackling hate speech online: ensuring human rights for all

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> Good morning, everyone.  Thank you very much for joining us today.  We'll start with our session now.  My name is Martha Stickings, and I work for the European Union Agency for Fundamental Rights.  I'm really delighted to be co-organizing this session on tackling hate speech online, ensuring human rights for all, with our friends from the Council of Europe, and I'm delighted to have Charlotte Altenhöner-Dion as the co-moderator for the session.

This is actually the first of two joint workshops for our organizations today.  There's another, on AI rights and wrongs and who is responsible, this afternoon at 3:00 in Convention Hall 1D.  So we would also encourage you to come along to that session if you would be interested.

The European Union Agency for Fundamental Rights, or FRA for short, is an EU agency tasked with collecting data and evidence on the situation of fundamental rights in the European Union.  We provide evidence-based advice to EU institutions and Member States.  We have a wide mandate covering the full range of rights that are set out in the EU Charter.  But of course, like many organizations, we see that digital issues, including the protection and promotion of fundamental rights online, are increasingly cutting across our work.  As an example, we currently have a large-scale project looking at artificial intelligence, big data, and fundamental rights, where we're trying to collect use cases across different sectors, whether that be health, insurance, public authorities, law enforcement, or financial services, to assess the positive and negative impact of the use of algorithms for decision-making.

Today, in fact, we're also launching a report on the use of facial recognition technology by law enforcement agencies.  We have some copies of that report at our stand in the IGF Village, and it can also be downloaded from our website.

We've also looked at some other issues in this area.  Earlier this year, we published at the request of the European parliament, a legal opinion on the proposed EU regulation on preventing the dissemination of terrorist content online.

And we also collect data on experiences of hate speech and the enjoyment of fundamental rights online through our large-scale surveys.

Turning to the topic of today's session.  Why are we focusing on tackling hate speech online?  I think there are three main reasons for that.

The first is, of course, that online hate speech is a profound problem: many, many people are the targets of abuse, whether that be through language, images, or video online.

And that abuse could be racist, xenophobic, homophobic, transphobic, sexist, ageist, or directed at persons with disabilities.

In our violence against women survey in 2012, we saw that 1 in 10 women had experienced cyber harassment since the age of 15.  And that figure rises to 1 in 5 women between the ages of 18 and 29.

Our survey looking at the experiences of Jewish people in the EU showed that antisemitism is most commonly expressed online, especially through social media.  And nine in ten respondents said that expressions of antisemitism on the Internet have increased in the past five years.

The second reason is that online hate speech is such an important issue from a fundamental rights perspective.  In terms of rights themselves, it really engages the full range of fundamental rights, striking at the core of human dignity; at its most extreme, it can threaten the right to life itself.

Of course, it also engages other rights, from privacy and data protection to freedom of expression and protection against discrimination, as well as rights related to remedies and redress.

But it also presents challenges about how we try to uphold those rights, and that's some of the discussion we'll be having this morning: effective instruments in a global environment, and the respective roles and responsibilities of different actors in this process.

And about the question of scale and how to deal with the huge volume of data that is available and shared online.  And thirdly, this is an area where there are already a huge number of initiatives that are either recently completed, currently underway, or are planned for the near future.

Whether that be at the UN level, the Council of Europe, the European Union, the national level, with Civil Society and with business.  And we'll hear about some of those initiatives from our speakers in a minute.  We really hope that this discussion can contribute to some of those initiatives.

And that leads me on to introduce the speakers this morning.  We really are delighted to have such eminent speakers and I have to say, such eminent female speakers who are involved in these issues in different ways.

I will actually start by introducing our male speaker first, as he's the only one.  That is Matthias Kettemann, who since the 15th of January this year has been head of the research program on regulatory structures and the emergence of rules in online spaces at the Institute for Media Research.

Then we also have, to my left, Louisa Klingvall, who works for the Directorate-General for Justice and Consumers at the European Commission and has been working as part of the team managing the code of conduct on countering illegal hate speech online that the European Commission has developed.

We also have Saloua Ghazouani Oueslati, who is the regional director of the Article 19 MENA office and has been working in non-governmental and multilateral organizations on these issues for a long time.

We also have Alex Walden, who is the global policy lead for human rights and free expression at Google.

And then last but certainly not least, our final speaker is unfortunately not in the room with us but is participating online.  And that is Laëtitia Avia, who is a member of the French parliament and is the Rapporteur for the legislation on regulating online hate that is currently proceeding through the French Parliament.

Just before I hand over to the first of our speakers, two quick words about how the session will work.  We will have quick introductory interventions from each of the speakers, and then we will open up the discussion to the audience, which will be moderated by Charlotte Altenhöner-Dion.  We encourage you to contribute to the discussion and ask questions of our panelists.  Without further ado, I would like to hand over to Matthias, who will start us off with his intervention.

>> MATTHIAS KETTEMANN: Thank you very much.  Fighting hate speech online is perhaps the most challenging aspect of online regulation.  That is because it invokes both individuals’ rights and social cohesion, the two key dimensions that we need to secure to build a sustainable online communications sphere.

What we have seen over the last years is that private communication spaces have become increasingly important for public discourse.  In those spaces, primarily private rules apply.

However, even if these rules are slowly getting more accountable, we need to ensure that fundamental values that we as international society have agreed on are ensured in private online spaces.

One approach that the Council of Europe has taken to provide for norms ensuring that is a recommendation on intermediaries and their roles and responsibilities.

The approach that the Council of Europe took was to ensure that we based all our policies on the primary responsibility of all states to respect, protect, and ensure human rights for all of their citizens, be they offline or online.  But that alone is not enough.

The recommendation also refers to the important role of intermediaries as providers of online communication spaces.  In doing so, and commensurate to their role and their importance for public discourse, we have to make sure that intermediaries also respect human rights.  We can do that by applying, as a frame and backdrop, the public laws we have, the fundamental rights guarantees, and additionally by engaging actively with all stakeholders to develop nuanced rules that answer to the peculiarities of online discourses, which are different from offline discourses, just as the European Court of Human Rights has determined in its jurisprudence.  The reality of online communication may necessitate a more nuanced application of human rights in online spaces.

What is essential, however, is to make sure that all activities that states take towards regulating intermediaries are based on law.  I would be very critical of attempts to whitewash policies by relying solely on voluntary standards without a clear backdrop of normativity, a clear backdrop of fundamental rights guarantees.  If we are able to achieve that, I'm sure that we can solve the big problem of fighting hate speech, of ensuring individual human rights and providing for a space where social cohesion is supported and not endangered.  Thank you.

>> MODERATOR: Thank you very much, Matthias.  And now, hopefully the technology is working properly, and I would like to give the floor to Ms. Avia, joining us by video link.

>> LAËTITIA AVIA: Can you hear me?

>> MODERATOR: We hear you very well.

>> LAËTITIA AVIA: Thank you.  I will say first that I believe there is a real need and a real urgency when it comes to fighting hate speech.  And as representatives, we have two requirements: first, to make sure that freedom of speech is protected, and second, to ensure the protection of every citizen, and in particular of all Internet users, with regard to hateful or illegal content.

So in France, we took the initiative to build a new law.  This is a law that is mainly inspired by what the Germans did, with some other mechanisms.  So we tried to have a very, very balanced mechanism to protect freedom of speech and protect people who are targeted by hateful content.

So the law that is being debated right now in Parliament, which was voted on in July and is going to the Senate in a few weeks, has two main points.  The first one, the heart of the law, is the obligation for the big platforms to take down hateful content within 24 hours.  This is a new crime that we created in French law.

So if a major platform, meaning one whose content has some virality, doesn't take down the content, it can face a fine of up to 1.2 million euros.

The second aspect is that we set out obligations of means that each platform has to meet, to make sure they create all the conditions so there will not be any hateful content on the platform: the obligation to have enough moderators, the obligation to have transparency regarding the means and the rules they use, and the obligation to cooperate with the state on law enforcement, so that we can pursue the people who post the hateful content.

And these obligations of means are supervised by our Internet regulator, the CSA, which can impose fines of up to 4% of global income.

So these are the two steps of this regulation.  It comes within a broader framework which has several aspects.  The first one is to make platforms more responsible.  The point of this law is to say that they are not only hosts of content; they are not passive.  They take actions that create the virality around the content.  The second aspect is to make people more liable for what they do online, which requires being able to find out who they are physically.

And the third aspect is to make society as a whole more aware of this situation, and of the fact that it usually starts with hateful content online and ends with hateful and terrorist acts in what people call real life.

This is my introductory statement.

>> MODERATOR: Thank you very much.  And we hope you'll be able to stay with us, because I'm sure there will be questions from the audience concerning your work in France.  And now I would like to turn to Saloua, please.

>> SALOUA GHAZOUANI OUESLATI: Thank you very much for having me to bring the MENA perspective to this debate.  Let me first of all start with how hate speech is defined and understood in the context of my region.

Hate speech is not understood and defined in the laws and policies in place as speech that attacks people, groups, or individuals on the basis of protected attributes like national origin, gender identity, disability, and so on and so forth.

It is rather understood as speech that attacks the security of the state, the national identity of the state, the security forces, the military forces.  And this is why the regulations in place are not effective in countering hate speech, and why they are very dangerous for free speech.  Freedom of expression in the MENA region is not recognized as a right for everyone.  And the laws lack precision; vague notions in the laws are used to put people in jail without any guarantees of their rights.

So here, I have to say that these government proposals to regulate online speech in the global north can inspire countries in the global south, unfortunately often taking up the bad aspects of those regulations or regulatory instruments.  So the result will be to restrict freedom of expression more and more.

And we will not be able to protect those individuals and those groups whose right to freedom of expression is already not recognized.

They are already silenced by society.  So this is our concern.  Article 19 has warned that those government proposals taking place in European countries are threatening freedom of expression, and that is even more so when it comes to countries where freedom of expression is not recognized at all.

So this is more and more concerning for us.  What I would also like to add: under the national regulations in place in the MENA countries, the first regulatory instrument is the justice system.  National courts have little knowledge about how online platforms work, and they have little cooperation with online platforms.  They don't know how they work or how they can have a dialogue with them.

If we look at the digital platforms, we will see a small number of requests from the national (?) in the MENA countries compared to the global north.

So there is no knowledge of the platforms or of how to request the removal of content when it is illegal.

The other thing I would like to add: I think there is a lack of transparency about how the rules adopted by the digital platforms are applied everywhere.

So I think that there is inequality in applying the rules.  My understanding is that in the MENA countries, when we open dialogues with the representatives of the digital platforms, we find that they are not the people who can take decisions.  So we have a lack of clarity about how the rules are applied, who takes decisions, and how content moderation is governed.

And this is also something that concerns us, because we need the digital platforms themselves to be accountable, responsible, and transparent.  But we also need governments to live up to their obligations to protect people's right to freedom of expression, and to educate them about illegal content and how it is defined, before restricting that right.  We can't start by restricting or putting rules on illegal content before educating society and recognizing people's right to freedom of expression.

>> MODERATOR: Thank you very much, Saloua.  And now I would like to turn to Louisa Klingvall.

>> LOUISA KLINGVALL: Thank you very much.  I was asked in this session to touch on two topics in particular: first of all, the role of voluntary collaborations in ensuring the removal of illegal hate speech, and secondly, how we provide remedies when things go wrong, that is, if illegal hate speech is not removed or if legal speech is removed by mistake.

So first, on voluntary collaboration.  In 2016 we adopted the code of conduct on countering illegal hate speech online together with Facebook, Twitter, YouTube, and Microsoft.

And through this code of conduct, the companies have committed to reviewing notifications of hate speech not only against their own terms of service, that is, the rules on what content is allowed on the platform, but also against the national laws implementing the EU framework decision on combating certain forms of racism and xenophobia, which requires Member States to criminalize public incitement to violence and hatred based on certain protected characteristics such as ethnicity, race, or religion.

We work with 30 NGOs all over Europe.  At regular intervals, they test whether these commitments are respected.

They issue a number of notifications and then they see what the platforms do with them: does it lead to removal, how long does it take to assess the content, what feedback do they get?

According to our latest evaluation, which we published in January this year, the IT companies removed around 72% of the content that was notified to them.

And this is quite good progress compared to two and a half years ago, when only 28% of the notified content led to removal.

So the result of the code of conduct shows that hard law is not the only way to make progress in this area.  Soft measures can also help to address big societal problems like illegal hate speech.

And there are also certain advantages to this process.  I've already touched upon it: the collaboration that it can trigger between Civil Society and the IT platforms.

And why is that so important?  Well, of course, when it comes to illegal hate speech, the legal demarcation line between protected speech and illegal hate speech is a fine one to draw.  And it requires not only knowledge of the law; it also requires knowledge of the historical, semantic, and regional context in which the hate speech occurs.

And all these NGOs working on racism and xenophobia have that knowledge.  So bringing them together with the IT platforms has been a really, really good thing for enhancing the quality of content moderation policies.  We now see that they meet regularly, and after each monitoring exercise they get together and bilaterally discuss individual cases that were difficult or where there were differences in interpretation.

So the second theme: what are the remedies?  What happens when illegal content is not removed, or when legal content is removed?  How do we ensure that protected speech is not removed?  Well, I think first of all, governments or regulators have to make sure that, whether they use soft measures like codes of conduct or regulation, these are defined in a way that doesn't trigger the companies to over-remove content that is legal.  To ensure this distinction, the code of conduct asks platforms to only remove content that is illegal incitement to violence and hatred.

But additional safeguards can be put in place to make sure we don't get over-removal.  In the Commission's horizontal response to tackling illegal content, there has always been a dual objective: first and foremost, to ensure effective measures to tackle illegal content, but also to ensure the protection of fundamental rights, including freedom of expression.  I think this dual objective is very clear in the recommendation on illegal content that we issued in March 2018.

Firstly, the recommendation underlines the general call for platforms to act diligently in their content management policies.  Such diligence is particularly important when companies use automated means for detecting illegal content, so essentially, filters.

And there the recommendation really asks the platforms to ensure that they have a human in the loop: human verification of the content that is detected.

Secondly, the recommendation also urges the companies to put in place systems for complaints, so-called counter-notice systems, which allow users to complain if their content is removed wrongly.

Thirdly, we also ask that the companies have transparency reports in place, so that they show the public how they apply their content moderation policies.

In terms of victims, and I will stop with this, I would also like to recall that codes of conduct and the removal of hate speech are just complementary tools to law enforcement.  Victims of online hate speech should first and foremost be encouraged to report the offense to the police, so that the crime is actually investigated.  And of course, through this system you can also obtain an injunction for the removal of the content, should this be necessary.  Thank you very much.

>> MODERATOR: Thank you, Louisa.  Last but not least, I would like to hand over to Alex.

>> ALEX WALDEN: Thanks.  And thanks for including us in the conversation.  I'm Alex Walden.  My role at Google is to advise the company on our commitments to human rights and free expression.

And especially to do that in the context of these complicated issues, including hate speech, terrorism, et cetera, while preserving human rights and free expression.

So I spend a lot of time engaging with our product teams and our policy teams to ensure that our approach to these issues respects human rights while also being mindful of the way these issues play out in the real world.

Broadly speaking, the Internet has been a force for good in the world.  We see that it has increased access to learning, creativity, and information, and it supports the free flow of ideas.  It has democratized whose stories get told and how they get told.

That's something we want to preserve in our approach in how we deal with these issues.

We know that platforms can be abused, so it's incumbent upon us to ensure we're dealing with these issues effectively.  That's why we're guided by local law and we have a set of community guidelines in place that all of our users have to follow.  Those community guidelines set the rules of the road for what is and is not allowed on our platform.  And across all Google-hosted products we have policies in place that prohibit hate speech against individuals or groups based on their attributes, as well as harassment policies.  We view all of those as grave social ills, and we want our policies to respond to them in the ways that the law requires.  All that being said, we have to figure out how to operationalize these commitments in ways that address the 500 hours of content that get uploaded to YouTube every minute.

How do we do that?  The first way is that we have enforcement systems that start at the point at which a user uploads a video.  Once a video is uploaded, we have technology that can detect whether or not the content matches the patterns on which we've trained it for hate speech.  Users can also flag content, and then the content goes to a reviewer to evaluate whether or not it meets our standards on hate.

So investment in technology is an important aspect of the way that we address this content, but ultimately we believe the combination of humans and machines is the best way to approach hate speech online.
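
A purely illustrative sketch of the "machines flag, humans decide" pattern described above; this is not any platform's actual code, and the classifier, threshold, queue, and function names are all hypothetical assumptions:

```python
# Illustrative sketch only: a simplified human-in-the-loop moderation pipeline.
# The classifier, threshold, and queue are hypothetical, not any platform's real system.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Upload:
    video_id: str
    text_signals: str  # e.g. title, description, caption snippets


@dataclass
class ReviewItem:
    upload: Upload
    source: str  # "classifier" or "user_flag"


@dataclass
class ModerationPipeline:
    classifier: Callable[[Upload], float]   # returns a hate-speech likelihood score
    review_queue: List[ReviewItem] = field(default_factory=list)
    threshold: float = 0.8                  # hypothetical confidence threshold

    def on_upload(self, upload: Upload) -> None:
        # Automated detection only routes content to human review; it never removes it.
        if self.classifier(upload) >= self.threshold:
            self.review_queue.append(ReviewItem(upload, source="classifier"))

    def on_user_flag(self, upload: Upload) -> None:
        # User flags feed the same review queue as the classifier.
        self.review_queue.append(ReviewItem(upload, source="user_flag"))

    def run_reviews(self, human_decision: Callable[[ReviewItem], bool]) -> List[str]:
        # A human reviewer makes the final removal decision, taking context into account.
        removed = [item.upload.video_id for item in self.review_queue if human_decision(item)]
        self.review_queue.clear()
        return removed
```

The point of the sketch is simply that automation narrows the volume, while the removal decision itself stays with a human reviewer.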

We know hate speech is oftentimes context specific and oftentimes includes imagery, but sometimes it's words.  And some of those things are easier to train technology on.

So we have to be clear that technology can help us address the volume of content, but it's not going to get us all the way.  So we have thousands of reviewers around the world who are evaluating this content 24 hours a day, 7 days a week, to ensure that we're removing it as quickly as possible.

We also have to ensure that we are training those reviewers so that they take into account the context in which the content is being posted, and that they account for the potential educational, documentary, artistic, or scientific value of the content, so that we can be clear that we're leaving up content that has value but taking down content that would otherwise constitute hate speech or harassment, et cetera.

I think similar approaches have been taken up by various of the large companies in the industry.  But beyond that, we have all recognized that there are ways we can learn from one another, so we've long collaborated across the industry on controversial content issues: hate, terrorism, child sexual exploitation, et cetera.  To hark back to the EU hate speech code of conduct, that's a mechanism to respect the law in the countries where we do business and to learn from it in the ways we operationalize our own policies.

And then lastly, just to pick up on what someone else said: it is important for all of us who are working on these issues to recognize that when we create models in this space, both from the company side and from the government side in terms of legislation or statutes, we are creating examples that other governments or companies anywhere in the world will pick up on.  So we should be mindful of the ways in which we are creating incentives and where there can be unintended consequences.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.  We've had fascinating speakers on a very important topic.  I'm Charlotte Altenhöner-Dion, from the Council of Europe.  I lead a small team on Internet Governance there, and the Council of Europe has for decades tried to hardwire human rights into the decisions we are taking on content moderation issues.

I think we are all in agreement in the room that we want to ensure that freedom of expression and other rights that were mentioned are ensured, that users can use the Internet in safety.  But we have also heard a lot of different perspectives here in terms of how difficult it is to put that into practice.

And I look forward to opening up the discussion here.  For those of you sitting further back, please feel free to come to the desks.  You can also use the microphone.

Last point, we have heard the different views here from governments that are indeed working to tackle this issue from a rule of law, from a law point of view, which is very important.  We've heard about soft law approaches that are important.

We have heard about important concerns, indeed, from the side of Civil Society, and we have learned about the importance of collaboration.  The Council of Europe, of course, has tried to put that into a rule of law framework that sets out the clear obligations of states on the one side and the responsibilities of companies on the other.

And I want to just put in the room the question which I think came up with your last point, Alex.  We want to make sure that we comply with the law, but I think we must also be aware that the Internet hasn't always been a great force for good.

There are certain aspects of its business models that support the propagation of sensationalist content, often of hate, of incitement to violence, et cetera.  There is also that aspect of the Internet.

And therefore, indeed, our responses have to be very forceful and effective.  Can I invite the room to open up the debate, please.

Very good.  I will go down from here please.

>> Good afternoon.  I'm a partner from IT for Change, which is an NGO that works in Bangalore.  I'm part of a project working on gender based hate speech online within the Indian context.

And my question is how do we bring about, as Charlotte just mentioned, how do we create disincentives for the harmful ways in which virality is used on the Internet to make hateful speech available across media?  How do we create disincentives and strong disincentives without creating mechanisms that hold social media platform to account as publishers?  Thank you.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.  Very good question.  I would like to take five questions if I can please.  Those that we have seen and then we'll give it back to the panelists and then take the next five questions.  I saw you in the back.  Thank you.

>> I'm a faculty member at the Centre for Policy Studies at IIT Bombay, India.  I work on AI policy.

So my question to the panel is about the increasing use of machine learning technologies by certain social media platforms to moderate speech and enforce standards.  What has often been seen is that by using machine learning to detect certain speech patterns, there is a sort of attempt to avoid corporate human responsibility.

And what often gets lost with these algorithms is nuance, leading to troubling social phenomena.  For example, brigading: people who are politically activated on social media can, as a crowd, mass-report a dissident, and their accounts get suspended.  This has happened a lot on, say, Twitter, and we have seen it multiple times.

Also, what often happens is that extremely toxic speech sometimes goes undetected by these algorithms because they are looking more at syntax than semantics, and this happens a lot in NLP.

And because of a lack of trained human moderation, extremely toxic speech passes.  But what might be genuine free speech often gets suspended, especially if -- as panelists said, if the state itself has malicious intent to go after certain speech.

My question is: how do we hold these companies responsible, beyond their claims about moderation, for ensuring that there are sufficient trained humans in the loop, so that this is not done in a techno-solutionist manner?  Thanks.

>> CHARLOTTE ALTENHÖNER-DION: Thank you.  Please go ahead.

>> My name is Alexander, chairman of a media commission in the Russian Federation.  I want to mention hate speech because we think that it always serves as an instrument of censorship on all social platforms.  And other people have said something about this.

We need, as a law, for all those Internet platforms to publish all their rules and definitions with concrete words about hate speech: the concrete words which we cannot use in our content, in our posts, if we are not to be banned.

Because nowadays we know that anybody can be suspended, anybody can be thrown off Twitter, YouTube, or Facebook because of so-called hate speech, without any red lines, without any concrete definitions.

For example, a moderator decides that a post is hate speech.  But we have to know all the words that we cannot use.

And now, so-called hate speech is just censorship, used to throw out those people whose views are not correct in the eyes of the moderators of those platforms.

For example, speaking about Twitter: a month ago, a new media project called Good News from Russia appeared on this platform.  And after 6 million views, it was thrown off Twitter without any explanation.  Why?

Because of hate speech?  Of course not.  But all those platforms, Twitter, Google, Facebook, don't answer the thousands and hundreds of thousands of letters from suspended users, because they're so big and so great that they don't want to correspond with those who are banned.

So we need special laws that will put all those giants in their right place.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.  Two concerns in a row regarding transparency and the way decisions are made about hate speech.  I will take the gentleman there and then, yes, you.  And then we will turn back to the panel.

And my apologies.  May I ask everyone please to limit themselves to the most essential part of their message.  Thank you.

>> From the European Broadcasting Union.  I have a vested interest here.  It concerns the media: traditional media, legacy media, or whatever you want to call it.

When it comes to the takedown exercise, we have many problems, specifically with the possibility of recourse against the decision.

Just to give you some elements: the BBC keeps an updated count of items that the BBC produced that have been removed by the platforms and are no longer accessible through search engines, for instance.

None of these removals was ever notified to the BBC, because the items were removed from somebody's website for whatever reason, and the media outlet that originally produced them was never notified.

So there is a case of censorship, because thousands of items produced by traditional media that are accountable to the public become invisible to a large mass of citizens.

The second problem we have is cyber harassment of journalists.  We have more and more cases of attacks, especially against women journalists, all around Europe, but not only in Europe; around the world.  Through these cyber attacks, people try to silence the journalist.

So there is misinformation, and sometimes this spills over into the physical sphere, the real sphere.  And there, the platforms are not engaged in solving this issue.

And the law, as Matthias correctly mentioned, is absolutely important.  For instance, in one country, Finland, at the beginning of this year there was the first conviction in a trial for cyber harassment of a journalist, with the aggravating factor that the victim was a journalist: they were targeting that person because that person was defending the interest of citizens in being informed.

The last point I want to raise is about the economics.  If we are now going to a takedown approach, then explain to me why, for traditional media, if I publish the information of (?) in printed media, I can go to jail, I can be fined according to the legislation of this country.  But if I publish the information of Elena on Facebook or Google, then nothing happens; I simply have to take it down within a reasonable period of time?  Why?  Let me tell you why: when I talk with colleagues from Google and Facebook, they say this costs too much.

But I have 100,000 journalists in Europe who are paid to do exactly this every day in their work, and it costs a lot.  The business model of the media is in danger because of that.  Why do the platforms not have the same responsibility?  So, three simple questions; I hope to get three simple answers.

>> CHARLOTTE ALTENHÖNER-DION: Thank you.  Lots of good questions.  Please.

>> Good morning, my name is Deborah Barletta, from a campaign initiated by the Council of Europe, so I'm familiar with your framework.  My question is mostly for Ms. Walden, because she's representing a company.  She said that they try to ensure that they train their staff to recognize what hate speech is.  If you're talking about training, so education, my question is: according to which standards?  It is similar to the earlier question about the law requiring a list of words.

What are the models you follow when you provide these trainings; what are your standards?  What is your understanding of hate speech?  Because we also heard from the MENA countries, for example, that if there is no awareness, if there is no education, then even if we have the law, the way in which these rules are implemented and the way they affect the population can be problematic rather than beneficial.

So my question is what are your standards and what are your resources to provide this training and this education for your systems and for your employees that are working on this topic.  Thank you.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much for this question, and we are giving the floor back to the panelists.  We have one specific question.  Ms. Avia, I hope you're still with us; please let us know if you would like to come back on any of these questions.

>> LAËTITIA AVIA: Thank you for all these questions.  I think the debate is very interesting, and it really puts the finger on the major point.  When you ask why platforms do not have the same responsibility, the same liability, as a publisher or a journalist, I think that is at the heart of the discussion.

The legislation we have today comes from the European e-Commerce Directive from 2000, which says that the platforms are simply hosts.

But in 2000, Facebook did not exist yet.  We didn't have any hashtag.  We didn't have any tweet.

It is legislation that is not up to date at all.  And it is legislation that does not answer the question of all those platforms that take an active part in creating virality.  Because they decide the way they organize content, the way they accelerate content, the way they push content.  That's why their model is a model that creates virality, and that's why they need to be more responsible.

I believe we need to have three types of actors: publishers, hosts, and between them, what I call the accelerators.  And for those, we need to reinforce the liability.

There were some questions about moderation.  I've been working on this subject for almost two years now, and the fact is that I don't have any specific answer to this question.

Because the first question is: who are the moderators, where are they, how do they work?  I can tell you that I have spent days and days with the platforms asking them to give me real, certified numbers, and I always get different answers.  I can't tell you how many people moderate for the platforms, where they are, or how they are trained.

I asked all the platforms to give me their training books so I would know which elements are used by the moderators to define what is illegal or not.  And I never received these documents.  That's why in the French law we have a provision requiring full transparency towards the regulator, which will have all the information about the moderators:

Who they are, how they work, how they use (?).  Because the fact is that today, they are the ones deciding what is legal or not.  And in France, we have a very strong law about hateful content which really defines that.

And I can see that it is not this law that is being applied today.  This is where I think states can create regulation that really fits the definition of hateful content within the national law.  The regulation should then be organized around that, with full transparency of information from the platforms.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.  That was very helpful.  I would like to give the floor then to you, and then maybe also to Matthias.

>> ALEX WALDEN: OK.  I will try to keep it short so I can touch on the variety of questions that relate specifically to YouTube or to the industry generally.

The first question was about addressing virality.  I will leave it to others to comment on what statutory proposals they think might be best.

But I will say, from the product side, that it's important for products to be designed in ways that allow us to address those issues within the product itself.

So we have made changes over the past year to remove content that is borderline, or that is misinformation, from our recommendation algorithm, to ensure that we are not amplifying content that is potentially problematic for these reasons, even if it doesn't violate our community guidelines.  That's the product side.  I'll leave it to others to comment on the regulatory aspect.
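
A purely illustrative sketch of the idea of demoting borderline or misinformation-labeled items from a recommendation slate while leaving them on the platform; this is not YouTube's actual ranking code, and the labels, scores, and function names are hypothetical assumptions:

```python
# Illustrative sketch only: filtering labeled "borderline" items out of recommendations
# without removing them from the platform.  Labels, scores, and names are hypothetical.
from dataclasses import dataclass
from typing import FrozenSet, List


@dataclass
class Video:
    video_id: str
    relevance: float                          # hypothetical ranking score
    labels: FrozenSet[str] = frozenset()      # e.g. {"borderline", "misinfo"}


EXCLUDED_LABELS = {"borderline", "misinfo"}


def recommend(candidates: List[Video], k: int = 10) -> List[Video]:
    # Content stays reachable if a user navigates to it directly, but labeled
    # items are excluded from the recommendation slate so they are not amplified.
    eligible = [v for v in candidates if not (v.labels & EXCLUDED_LABELS)]
    return sorted(eligible, key=lambda v: v.relevance, reverse=True)[:k]
```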

With respect to the question about humans in the loop.  I will say again from our perspective, the premise of our approach to combating hateful content online is that the best way to do that is a combination of humans and machines.

So that means that we use content classifiers to help us identify content, but determining whether or not content is removed is something that happens when a human reviewer takes a look at the content.  And that is especially the case when we're talking about hateful content where context matters in terms of how you interpret whether the content is violative or something else.

The question about publishing rules and specific terms that should be banned to have clarity for users.  I think we certainly agree that we want to have as much clarity as possible and transparency for users around what our guidelines are for what they can and cannot post on the site.

The challenge around that is bad actors are uniquely motivated to create ever-changing ways of having dialogue around hate and extremism online.  So pick the five words that you can think of that any hateful group is using, and they will have modified and evolved those terms a week from now.

So I think specific terms are not the way we are going to create clarity for users.  Ultimately those terms inform the standards that we use to enforce policies across the platform, but a list of terms, I think, won't get us all the way there.

I had all the questions written down and I lost it.  Got it.

There was a question about transparency when media content is removed.  For any content that is removed from YouTube pursuant to our community guidelines, the uploader receives notice from us.  I'm not sure what happened in that particular case, but it is the standard policy across the platform that when a piece of content is removed, the uploader is notified.  It is a different question whether someone else is linking to somebody else's page; that's the next concentric circle out, but the uploader is notified when content is removed.

With respect to cyber harassment of journalists, I just want to flag two things.  So one is that across our products, harassment is prohibited on the platform, and that includes harassment against journalists.  So the enforcement of that policy will not catch all of the ways in which journalists are harassed, in ways that are cross-platform.  But it's a way for us to get at one piece of it.

Another way we take this problem into account is that we have something called the Google News Initiative, which supports the traditional media ecosystem in a variety of ways.

One of the things we do is promote use of the Advanced Protection Program that we have, which is the extra-secure Gmail for users who are most likely to be vulnerable, and that includes folks like political candidates as well as journalists.

People who know that they are likely to be targeted.  So that's another way to kind of ensure that they have digital security as well.

And then the last piece, about what standards are used to train reviewers: I just want to ensure that we're clear about the way that we enforce our policies.  Ultimately, the 10,000 people around the world who are human reviewers of the content that we remove are applying the YouTube community guidelines.

So they are trained on Google's enforcement of our guidelines.  That's a separate matter from the ways in which the law is enforced across our platform.  Content that is alleged to be illegal is evaluated for its legality by a different set of folks internally, who are part of a legal removals team.

So I want to underscore that there is certainly a relationship between the content that is illegal and the content that violates our platform policies, but they are sometimes separate.

>> CHARLOTTE ALTENHÖNER-DION: Thank you.  So before I give the floor to Matthias, I think two new human rights themes were just mentioned.  One is of course the huge and growing group of a new working class, the content reviewers, who often work under rather difficult circumstances, and we all know that.  This is not only in Google's headquarters; it is also often in parts of the world where you need to hire people who know the local language and the specificities of the local context.

So that is one of the issues.  And the other is of course transparency.  You responded to the transparency request by saying that yes, someone is notified when something is taken down.  But they are not notified of the reasons; they are just told that it doesn't comply with the terms and conditions.

>> ALEX WALDEN: With the community guidelines, and it will cite the specific community guideline.  It also provides information about how to appeal the decision.

>> CHARLOTTE ALTENHÖNER-DION: OK.  Good.  Fair enough.  So a side discussion regarding transparency, which of course is an element of the rule of law as well.  But Matthias.

>> MATTHIAS KETTEMANN: Thank you.  Thanks for the great questions.  Allow me two brief comments, on algorithmic content regulation and on the possibility of (?) content.

First of all, I am convinced that the current discussion on algorithms used in content (Inaudible) is very important.  We have to know more about the data they are trained on and what these algorithms are optimized for.  If they are optimized for attention, they will usually push edgy content.  Companies have realized that it's not a sustainable business model to push edgy content, because in the long run that will make people unhappy with being in that social sphere.

So we need to come to a societal conclusion about what the algorithms should optimize for.  And I think they should be optimized for sustainable, dignity-based, enabling communication spheres.

European regulators, including the German Ethics Commission, have called for a new media order ensuring online diversity of opinion, just like with public broadcasters.

That will be very difficult to implement for constitutional and practical reasons, but it is on the horizon, and this is the backdrop against which we need to talk about what automated decision-making systems, which are essential for alleviating the task of human moderators, are optimized for.

It is also not true that you have no recourse against private moderation practices, at least in certain jurisdictions.

German courts have, at least for the last two years, offered the possibility to have content reinstated by applying fundamental rights directly between private parties, between Facebook and users, for example.  In May of this year, the German constitutional court confirmed that Facebook had to reinstate the profile of a right-wing party.  Similarly, in the U.S., the Knight First Amendment Institute versus Trump case confirmed that Trump cannot block users from his Twitter comment section, because it has emerged into a public sphere and blocking would amount to governmental censorship.

But what is really important is that national court orders and judgments, when they are well reasoned and respect the rule of law, need to be implemented by intermediaries.  This is particularly relevant for politicians: politicians have had their accounts suspended in Germany because of jokes they made in relation to ballots.

Jokes still need to be possible.  Even with a combination of automation and human decision-making to keep errors down, there will always be errors and there will always be overblocking.  But once a court order exists, it does not befit big intermediaries to ignore it; that damages society and supports unfounded fears of online lawlessness.

The last time politicians in Germany were unhappy with the practices of social media companies, we got the Network Enforcement Act.  It is always easy to criticize the practices of big companies like Facebook and Google that are active right now in the U.S. and Europe, because we know so much about them.

But what do we really know about the content moderation standards used by companies like TikTok?  A first leak has provided some insights into that, and it has not given us a very pretty picture.  So yes, let's talk about all intermediary companies and how to improve their accountability.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.

>> LOUISA KLINGVALL: I would like to come back to the question of how to hold social media companies accountable without them being publishers.  And indeed, this is the situation that we are in.  Looking at the European perspective, in the e-Commerce Directive it is recognized that if you are a hosting service provider, you should benefit from a liability exemption.

It means that you're not liable for the content that you host on behalf of someone else on your platform.  This does not, however, mean that you can't give platforms responsibilities.  Being liable for content is different from having responsibilities to do certain specific things, for instance when you are notified about the existence of a particular piece of illegal content.  And I think those are the areas where the regulator can work, either through laws or through self-regulation.

So from that point of view, I think that the balance in the e-commerce directive, which provides this possibility of the liability exemption is still very valid.  We have to remember that for the growth of this sector, I think this liability exemption has been absolutely crucial.  Furthermore, I would like to comment on the issue of hate speech and harassment against journalists.  Of course, this is a big problem.

But here again, I think that we have to remember that removal of such content is not the only remedy.  First and foremost, we have to make sure that states enforce their laws against the actual offenders online.  When there are real threats against journalists in the online sphere, it's for the police and the prosecution services to investigate the case, and to prosecute and sentence should this be necessary.  And this is something that we're working on very actively in the EU, to ensure that Member States enforce the framework decision on racism and xenophobia.  Removing hate speech on the platforms is really a complementary measure to this.  Thank you.

>> SALOUA GHAZOUANI OUESLATI: I would like to stress the importance of raising awareness in society, and of adequate training for government officials, but also for the reviewers of the platforms themselves, on how to distinguish between free speech and hate speech, which is a very complex issue.

We know that there are UN standards and international standards.  But we know also the complexity of the national context.  So it is not an issue of words or images.  It is more complex than that.

So the education of society is the responsibility and obligation of governments, but the big digital platforms also have a responsibility to contribute at the national level, to educate people at least about how they are moderating content and what mechanisms are in place.  I think this is also an obligation for the digital platforms themselves.

The other thing I would like to stress is that we know that in the judiciary systems, with judges at the national level, in the national courts, judgments are political judgments.

They lack independence.  They are not incorporating human rights standards in their roles, in their work, and in their judgments.  They are, rather, closely following the interpretation of these laws by the political powers.

So this is why they also need training in how they can incorporate human rights standards into their work.

To finish very quickly: Article 19 has developed the social media council, which is a model for social media platforms.  The advantage of this self-regulatory, voluntary mechanism is that it can bring together, at the national or regional level, the main actors and stakeholders, who can have a common understanding of the context and who speak the native language.

Otherwise, having many different and parallel regulation systems for online content can contribute, in my opinion, to the fragmentation of the Internet, which is not the aim.  We would like to have one Internet.

And having different systems of regulation based on the willingness of different governments will also contribute to deep inequality, unfortunately.  Thank you.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much to the speakers for their thoughtful and very useful responses.  We have to be really brief now, because we do want to come to some sort of conclusions.  May I give the four people who have now asked for the floor one minute each, please.  Thank you.  Please go ahead.

>> Hi, my name is Maria.  I work for an organization called Womankind.  We're a global women's rights organization.  I'll be very brief.  I think two of the earlier questions highlighted the gendered nature of abuse: my colleague from India here, and my colleague who talked about harassment of female journalists.  Just to make the point that men and women, and people of other gender identities, experience hate speech differently.  It would be useful to hear from the panel, perhaps from Alex in terms of the companies and the platforms, what we can do to ensure that the response to gendered abuse is actually gendered and that the people doing the moderating have a good understanding of the gendered nature of abuse, but also, in terms of policymaking, how that can be more gendered to ensure that the right solutions are in place.

>> Thank you very much.  My name is Samuel George, I'm a member of parliament from Ghana.  And it gives me great pleasure to join this session.  I'm here with the first deputy speaker of Ghana's parliament, because this is a very important issue to us in Ghana.  Listening to the conversation around the room, I realize that it's extremely European and North American centered.

You're actually not looking at the African context, and we're talking about one world, one net, one vision, and the context doesn't take Africa as an entire continent into perspective.

You realize that you're going to have a proliferation of network enforcement laws coming out of Africa.  Because when we're looking at hate speech, and at misinformation and the issues of the Internet and content, the platforms themselves are not compliant with our local laws, because basically many of them are based in North America, and they would cite the First Amendment and freedom of expression.  Some of the issues that come up on our continent or in our countries infringe our local rights or local laws, but the community engagement standards do not take cognizance of our local laws.  Let me be very clear: Ghana is a very free country.  In fact, we're hosting the next Freedom Online Coalition conference.  We're very free and open and we respect human rights.  However, you need to be aware of local nuances when you set your community engagement standards, and even for law enforcement purposes when we try to get in touch with the platforms.  I hear 24 hours; wow, official requests go in and seven days later there is not even an acknowledgment of those requests.  So how do we then have one world, one net, and one vision?  Thank you.

>> CHARLOTTE ALTENHÖNER-DION: Thank you.  There was someone -- I know you and someone else was here.

>> Thank you.  I'm from the Center for Advanced Internet Studies in Germany, and I just want to add an aspect on the possibilities of soft remedies to the problem.

If you look at the offline world, how do we solve the problem of hate speech?  Mostly through social norms.  And if you look at the mechanism of social norms, how they work in reality, it's a very complicated mechanism.  So it's not a surprise that we do not have this mechanism in the Internet world.

But we have to be creative and think about possibilities for how we can create a context in which the mechanism of social norms can also work in the Internet space.  And I think nothing was discussed in this direction today.  I think we have to discuss more in this direction.  Thank you.

>> CHARLOTTE ALTENHÖNER-DION: Very brief, please.

>> In Germany, we have a problem with the capacity of the staff in the (?) and so on, because they have to deal, for instance, with clan crime internationally in Germany, involving people from Lebanon, Turkey, and Arab countries who have emigrated to Germany and go back and so on.  And in that case, I have heard that many of these legal lower court (?) are moving away from this and going, for instance, to stakeholders like big private companies like KPMG (phonetic) and so on.  So there will be more work in the future.  What do you think about the staffing situation in the lower courts, and extending it so that they can also sanction crime in the Internet sphere?

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.  Did we have one last question over there?  OK.

And then one minute each for the speakers.  Thank you.

>> Very quickly.  I would like to make a connection with one of the previous questions about the gender-specific dimension of online hate speech.

And I have a question for the colleague from the European Commission.  I understand the code of conduct that was drafted in 2016 is fairly broad, but it does not cover gender-related hate, if I'm correct.

Is this something that might be addressed in the future in the light of new developments or new awareness of the extent of the problem?  Thank you.

>> CHARLOTTE ALTENHÖNER-DION: Good.  Who from the speakers would like to respond to any of this?  With a very brief, one minute, yes, Alex and then Louisa.

>> ALEX WALDEN: Just to start with the gendered nature of abuse online.  There is a common understanding across the industry and the stakeholders that we work with that this is an ongoing problem across platforms.  So we continue to work on ways to improve both our policies and the ways that we respond to abuse.  We announced earlier this year that we were undertaking a review of our harassment policy on YouTube.  These are things that folks should look out for.  These are certainly issues that come up in the context of this conversation for us.

To the gentleman from Ghana: point taken.  We continue to struggle with ensuring, communicating, and demonstrating how we take local contexts into account when we enforce our global community guidelines.

An important way that we improve that over time is through programs like our Trusted Flagger program, where we engage with experts on the ground in countries around the world, people who are experts in the policy areas covered by our community guidelines.  They help inform the way that we do enforcement, to ensure that over time we continue to improve in how we think about and understand these problems and the way they manifest around the world.

>> CHARLOTTE ALTENHÖNER-DION: I would like to give Ms. Avia an opportunity to respond as well.  If you would like to come back in, please be ready for one minute in a moment.  First, Louisa.

>> LOUISA KLINGVALL: On the social norms that do not exist online: I think this is a very interesting question.  Indeed, I think we lack a lot of understanding of the ecosystem of hate online.  Where does it come from?  What are the mechanisms that turn people into hate speakers in the online sphere, and what about this kind of online disinhibition that seems to fester?

This is something we are trying to support research on, because we think it is also very important in order to find more sustainable responses to this societal problem.  So I think this in itself is a very, very important question to continue discussing.

On the issue of gender-based hate speech, that is true.  The code of conduct is very much focused on racist hate speech, for the simple reason that the framework decision on racism and xenophobia addresses those grounds.  My colleagues in the gender equality unit are currently looking more at how we can also work on the issue of gender.

So stay tuned for what the Commission is doing in this field.  They are looking at it.  Thank you.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.  Ms. Avia.

>> LAËTITIA AVIA: Yes.  What I wanted to say is that in the French law, we decided to protect everything that touches on the dignity of a person.

So gender is within the framework, but also everything that is linked to race, religion, sexual orientation, or disability.  It covers everything that attacks someone not for what they say, because the idea is not to remove opinions, but because of who they are.  And that is really the major point of the law: to protect people who are attacked because of who they are.

What I also wanted to say, maybe as a concluding point, is that everything I hear around this table, but also in the various conferences I have attended, is the idea that everyone wants to tackle hate speech.  We have to say that this is not optional.  Because there are laws.  There is the European directive.  The platforms should be held responsible when they do not take the right steps to ensure that everyone is protected.

And here we are.  We all see that it's not working.  So there's something to do.  There's a real action to be taken.

And there are national laws.  There's one on -- but there is also a more global action to take.  Yes, there is Europe, there is the U.S., there is the rest of the world.  But I guess we now have to address the question: do we keep self-regulation, or do we all take our responsibilities?  And I think when we see the number of suicide attempts linked to hate speech, we know we have to take responsibility now.

>> CHARLOTTE ALTENHÖNER-DION: Thank you very much.  Thank you very much to everyone.  I think this has been a very interesting discussion.  The microphone is tired, I suppose.

(Echoing)

(Laughter)

I would just like to draw some brief conclusions on the points that for me were important in this discussion.  I think a lot of very good points were raised, clearly also in this last pitch from Ms. Avia.  Yes, we have to do something, and we want to do it all together.  What we have done so far is not yet effective.

In what we do, we must keep the business model in mind.  We must keep in mind that the algorithms that play a massive role in content moderation, and in the moderation of hate speech, are optimized for virality and profit, not for dignity, diversity, and public interest content.  We also heard that there is still a problem with the limited liability of hosts under the e-Commerce Directive, and there are discussions of possibly addressing that.  Yes, the e-Commerce Directive has of course enabled a lot of growth in this sector, but it has also allowed a growing problem.  So we may have to look at that as well.

Finally, I think a point was made that if there is a court judgment, or a decision taken by a government authority, then that must of course be implemented.  We simply have to do it.  We are then no longer asking for voluntary contributions; we want the platforms to implement that and, of course, to collaborate with law enforcement, which we also heard from the European Commission.  On that note, collaboration, in the end, is again one of the key words.  We need to keep in mind that what happens in our context here is going to be copied in other parts of the world.  We also need to keep in mind that we are happy and blessed to be cooperating closely with some companies, but that we are not cooperating with all companies.  And a lot is happening that we do not yet even fully understand.

And finally, the role of training and awareness-raising, which I think has been present in absolutely all the points made.  If I may end from my side before I quickly hand over to our co-organizers: training and awareness is really something to which we all, every single one of us, can contribute.  We can each speak to our counterparts and within our networks, and we can inform them about the different opportunities that exist.  There was also the question, which I thought was very good, making the point that we have problems in addressing hate speech in the offline world too.  Let's be honest that hate speech is a problem generally, not just online.

And with this, I hand over to Martha.  Thank you very much.

>> MARTHA STICKINGS: Thank you very much.  I realize I stand between you and lunch, so I will be very brief.  Mostly just to say thank you very much to all of our speakers.  Thank you to Charlotte.  Thank you to Ms. Avia for making the technology work, and also to the technical staff here for sorting that out.

I just want to conclude by stressing again the importance of taking a human rights-based approach to dealing with the problem of hate speech online.  In doing so, it is really important that we put the victims of online hate speech at the center of the action that we take.  And just as we ought to remember that human rights apply online just as much as they do offline, we should remember that actions that happen online can have offline, real-world consequences.  We have to protect and support victims to ensure that they are also able to take advantage of all the possibilities and opportunities offered by the Internet.

So thank you very much for participating.  And enjoy the rest of your IGF.

(Applause)