IGF 2021 – Day 2 – WS #198 The Challenges of Online Harms: Can AI moderate Hate Speech?

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> SAFRA ANVER:  Hello, everyone.  We will be starting shortly. 

   >> RAASHI SAXENA:  Maybe we could ask everyone to introduce themselves for the first five minutes until we figure this aspect out. 

   >> SAFRA ANVER:  If you want to just say hi on the chat, introduce where you are from, let us know a little bit about you and what you want to learn from this session, feel free to add it on the chat and we would love to answer it as we go along this conversation. 

   >> We are all live in a digital world.  We all need it to be open and safe.  We all want to trust.

   >> And to be trusted.

   >> We all despise control.

   >> And desire freedom. 

   >> We are all united. 

   >> SAFRA ANVER:  Thank you so much for that amazing intro.  If you want to introduce yourselves, feel free to use the chat.  We will wait a few minutes for everyone to join online so we can start the productive conversation.     

   >> Hi.  This is Naz.  I'm a Member of Parliament from Pakistan.  Also I'm a Standing Committee member for information technology and very keen to know and learn how the world is dealing with the day‑to‑day issues and challenges pertaining to cybersecurity and different issues related to Internet usage.  Thank you. 

   >> SAFRA ANVER:  Thank you so much.  It is lovely to have you here.  And I hope we can answer your questions throughout the session today.  And if you have any questions, feel free to put it in the chat or let us know during the Q and A sessions that we have for the audience as well.   

   >> Sure.  Thank you.  Thank you so much. 

   >> Hi everyone.  This is Babba.  I'm a member of legal in the Ministry of IT and Telecom, Government of Pakistan.  So if I'm not mistaken, the subject of ‑‑ topic of this session is involving online harms, am I correct on that? 

   >> SAFRA ANVER:  Yes, it is. 

   >> Yes.  So let's see how you guys will start the ball rolling.  And then I have a few questions.  I'm sure that everyone is struggling on this path with respect to online harms and the rules; social media is becoming a menace also. 

   >> SAFRA ANVER:  Well, we hope with the right control and, of course, this discussion that we can work out a way that it doesn't end up too much of a menace but something we can use productively as well. 

   >> Agreed.  Yes.  It has both sides, yeah. 

   >> SAFRA ANVER:  Anyone else from the audience who would like to introduce themselves? 

   >> So let me ask one question, without keeping in mind Global North and Global South.  In other words, I'm not interested to enter in to this debate about what are the differences and issues attached to Global North and Global South.  Do you think there is equal treatment of all parts of the world when we are dealing with social media and social media platforms, as far as harm to their societies is concerned?  If not, then are we thinking to have some uniform principles on which we all should be treated equally, squarely, equitably, justly? 

   >> SAFRA ANVER:  Thank you.  Does any one of you want to answer that question? 

   >> No, it is my question to your good self. 

   >> SAFRA ANVER:  So basically, what we're looking at is an equitable solution.  We want this discussion to kind of bring that out.  Of course, we would have to talk about the differences between the north and the south.  But, of course, we want something where everyone has a level playing field, that they have the same access, that the resources in terms of AI would be able to help mitigate the menace it could become.  So I feel like this conversation could be a starting point for how we can help address that issue across the world and not just confine it to the north and south. 

   >> I see.  When we talk about AI, then that is contingent on an algorithm.  So if that is uniform, then there is no difference between Global South and Global North.  That is the issue. 

   >> SAFRA ANVER:  True.  So what we will do is we will go through this session and we will try to address your questions, as well as any questions from the audience, throughout today's session.  So with that we go in to our main conversation.  So as you all know, before we get started I'd like to go over a few housekeeping items so you know how to participate in today's event.  At any time during this conversation feel free to put in any questions or bring them up during the Q and A sessions if you are live at the IGF session as well.  We would love to answer them all throughout today's session. 

     So with that welcome to the session The Challenges of Online Harms:  Can AI Moderate Hate Speech?  As you all know, digital technologies have brought a myriad of benefits for society, transforming how people connect and communicate with each other.  I would like to first invite one of the organizers for today's session to give us a little insight before we start our main conversation. 

   >> ZEERAK WASEEM:  Hi.  Sorry, I'm just putting on these woolly socks because it is really cold in this apartment.  My name is Zeerak.  I work on the foundations and limitations of machine learning and AI through the perspective of content moderation technologies. 

     So to start off with I'd like you all to think about the reinforcement of cultural and linguistic hegemony and what influences it.  As a point here you can consider the impacts on minoritized people and their speech and communicative patterns, and how these would be impacted by regressing towards a hegemony.  And I ask this because, as I will argue, this is what machine learning does, and machine learning for content moderation in particular. 

     Machine learning on cultural content such as images, texts and speech can be thought of as a very advanced way to identify the average or mean of a concept.  The foundational question of machine learning is how we can find an approximation of our data that maps well on to an unknown space of data that is independently and identically distributed.  And this assumption of independently and identically distributed data raises the question of how you can create methods that map on to something that we have already optimized systems for.  So when we develop machine learning‑based content moderation technologies, what we are developing are machines that identify the mean, given some instruction. 

     What this then means is that what is marginal and marked and minoritized is deeply impacted by this regressive move towards the mean.  So the question is ‑‑ and the question is when we regress towards a cultural mean, which cultural mean are we regressing to.  Given that the data and emphasis is very much on the economic north, specifically the United States of America, and given the social media platforms use Section 230 as a foundational value, this mean is probably very much focused on the American experience of the Internet and the American values of speech. 

     These values are very transgressive towards minoritized and marginalized folks.  What we are seeing in these content moderation systems, because of their foundational methods as well as their foundational values from the social media companies who are implementing these systems, is that we end up having systems of marginalization, systems that inherently are developed to marginalize.  And all the while we are saying and pretending that these systems also have some sort of benefit in that they remove content. 

     Now if we actually stop and look at the content that's removed, well, then we see exactly this: that which is minoritized and marginalized is removed a lot of the time.  Whereas some of the more concerning and culturally harmful things, such as white supremacist speech and the violent ideology being pushed, are not removed, because they communicate in a manner that is deemed as polite or acceptable by the content moderation technologies, by the values of the companies, and by society at large ‑‑ and by society at large, I mean very specifically the American society or the society in the U.S. 

     And it is important here to note that machine learning removes dissensus.  It removes the notion of debate and contesting ‑‑ and contestation.  So what we end up with is that all of the responses to, or all the various firm responses that we've seen over the past few years to, the white supremacist insurgence in the U.S. are removed by the content moderation system, because these systems are regressing towards the most common and the most salient mean. 

    Cool.  That's what I have to say.  Thank you. 
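(To make the regression-towards-the-mean and i.i.d. points above concrete, here is a minimal illustrative sketch.  It is not code from the panel: the toy texts, labels and scikit-learn model choice are assumptions, and the point is only that a classifier fitted to one dominant language variety has no principled basis for judging text from outside that distribution.)

```python
# Illustrative sketch only: a toy "hate speech" classifier trained on data
# dominated by one language variety. The tiny dataset and labels are invented
# for demonstration and carry no claim about real speech.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: almost entirely "standard" US English -- the statistical mean
# the model will regress towards.
train_texts = [
    "I hate those people, they should leave",   # abusive (toy label)
    "Have a great day everyone",                # benign
    "They are all criminals and liars",         # abusive (toy label)
    "Looking forward to the game tonight",      # benign
    "Get rid of them all",                      # abusive (toy label)
    "Thanks for the lovely dinner",             # benign
]
train_labels = [1, 0, 1, 0, 1, 0]  # 1 = flag, 0 = keep

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Out-of-distribution input: dialectal or code-switched text that never
# appeared in training. The i.i.d. assumption is violated, so the model's
# output here is essentially a guess pulled towards the training mean.
ood_texts = ["dem pipo dey vex me no be small", "wagwan, mi soon come"]
print(model.predict(ood_texts))
```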

   >> SAFRA ANVER:  Thank you so much, Zeerak, for that.  I'm sure everyone has a little bit of insight as we begin our conversation as well. 

     So with that we go in to our first panel discussion, which is, of course, categorizing, understanding and regulating hate speech using AI.  So with this panel we have three members.  The first is Lucien Castex, a researcher at the Sorbonne.  Can I please have him do the first presentation for us?  Lucien?  Yeah, I think he is having technical difficulties.  So we will move on to Giovanni De Gregorio, from the Centre for Socio‑Legal Studies.  The floor is yours. 

   >> GIOVANNI DE GREGORIO:  Thank you so much.  Such a pleasure to be here with all of you and talking about this topic.  I mean, just as a general, let's say, introduction before we start with this panel discussion and the questions.  So the question of this panel is about whether AI is able to resolve or to solve the issue of hate speech.  I will not just say hate speech, because it is also a problem for disinformation and also a problem for other harmful content; it is not just hate speech out there when we talk about social media.  So it is very important to understand what are the challenges when AI tries to detect content.  But again the most important thing to stress at the very beginning is that this is not just a technological question.  It is also a social question. 

     It is also a social question because it involves also the actors implementing these technologies, and what incentives these actors have to behave better and do better in moderating hate speech or tackling hate speech.  It is not about how AI can solve the issue of hate speech, detecting or removing hate speech.  But it is about what the actors are doing when they want ‑‑ with their policies or with the technologies they implement ‑‑ to tackle hate speech. 

     So there are these two layers, and this is important to stress before starting any conversation about whether AI can solve the issue of online hate speech, first of all.  The second thing I would like to stress is that if we look at it as a technological problem, what we can see is that generally the problem of AI is a problem of detection in different contexts.  It is not just a matter of context.  It is also a matter of language.  Because language also belongs to the realm of context, as you can imagine. 

     So the question is also, as usual, about what AI can detect.  It is not just about whether AI can solve detection, if sometimes AI cannot even detect hate speech.  And just to provide a small example, talking also with computer scientists, this is also clear when there are some languages where there is no training data for AI, for example, in some areas of the world.  For example, with our research here in Oxford that looks particularly at Africa with the Confidence project, we are looking in particular at some countries in Eastern Africa, and even in Southern Africa, where the interest now is actually moderation of content.  There are small projects running that translate some content, for example some pieces of information or radio communication, in to data to train AI in a certain language.  So the most important thing is that we should not just think about whether AI could solve hate speech, when in the world there are areas without moderation. 

     So to the question of whether AI could solve the issue of hate speech, in Myanmar or wherever around the world, the answer is: it depends on what AI can detect.  There are areas of the world where there are no incentives, so it is not only technological.  There are no incentives for the actors to develop this technology, because probably the advertising market is not so developed.  It is also a question of the business model of these actors and the incentives they have to implement better, or to invest more resources in algorithmic technologies able to deal with different languages.  It is not just a problem of, for example, one country as opposed to different countries, because even in one country there are multiple languages. 

So, you know, this is not just a problem, for sure, of AI dealing with the questions about language.  Language is just one of the problems.  Then there is also the problem of understanding the context.  And, of course, the different degrees of protection of free speech in different countries, but we are really complicating the debate now.  This is a brief overview for the discussion, just to say that this is just ‑‑ this is the tip of the iceberg of the problem.  But the question should be framed in a different way.  It is kind of technological and social at the same time.  Thank you. 

   >> SAFRA ANVER:  Thank you so much for that.  We go in to Vincent Hoffmann.  He works on regulatory structures at the Leibniz Institute for Media Research and is also part of the AI and Society project at the Humboldt Institute for Internet and Society in Berlin.  

   >> VINCENT HOFFMANN:  Thank you.  After what Giovanni mentioned, the problem of the language, I would like to draw the attention to the legal perspective of online hate speech moderation.  And that's both the procedural and the fundamental rights part of content moderation.  So when you look at the companies' content moderation, you have to also consider that this is a decision that has a fundamental impact on the fundamental rights of the users.  And especially when it comes to political speech that is being moderated by the private companies, you always have an impact not only on the freedom of speech, but also a strong impact on the political debate and on the political freedoms that are granted when it comes to content moderation. 

     So the highest court in Germany handed down a decision this year that was on the removal of content.  And the first thing it said is that Facebook can, in their private rules, so in their terms of usage, actually moderate more content than is prohibited by law.  So they are allowed a wider space.  But what it also said is that because they have such a strong impact on these fundamental rights of users, and especially on political parties or members of Parliament, you have to have procedural rights granted in these decisions. 

And coming back to AI decisions, this makes it in my understanding necessary to both explain what has been decided, and give the possibility for the person confronted with a decision to legally challenge that decision.  Meaning that you have to have an explanation of this AI decision, making it sensible and understandable to the user confronted with it, so that he or she can go to court or just to another instance of the private company with a human involved and then raise a complaint about the decision that has been made. 

     Even if AI is used ‑‑ and I think there are people who might prove me wrong on this panel, but I think with the mass of online speech it is impossible to moderate without it, at least in the first stage of moderation. 

     You have to have a look at the second part, which means the procedure afterwards for the users in the system.  That is especially necessary in countries, as Giovanni mentioned, where there is a moderation team that does not speak the mother tongue and is not familiar with the local way of communication, because it's just a U.S.‑based moderation team that focuses on the English language or maybe other major languages in the world but does not get the finer notes in between. 

     So then even in the second stage this cultural language problem still exists.  So those were my two points.  The fundamental rights basis of online hate speech moderation, and the explanation and procedural rights of those confronted with those decisions.  Thank you. 

   >> SAFRA ANVER:  Thank you so much.  I want to check with Lucien if you are okay.  Perfect.  So as I said before, Lucien Castex is the Secretary‑General of the Internet Society France.  The floor is yours. 

   >> LUCIEN CASTEX:  Thank you for giving me the floor.  I'm in Katowice but I'm in my hotel.  And I had some WiFi quirks.  So I moved around and I found a better one. 

    So that was quite interesting.  And I agree with my colleagues, this is a key topic as a question of balance between fundamental rights, with a big impact on freedom of speech and privacy, when speaking about content moderation. 

     I am a member of the French National Human Rights Commission.  And at the French level we conducted extensive interviews on online hate speech last year.  The Human Rights Commission is an independent national institution for Human Rights.  It was established in 1947.  And basically in the French landscape at that time, and that's an interesting use case, a bill was introduced in March 2019 to try tackling online hate speech.  The bill was relying on private actors like digital platforms to carry on taking down online content.  But there are a number of drawbacks.  There was a risk of mass removal of content in the gray area, considering the threat of large fines combined with a very short window to evaluate the content, with online platform providers most likely over‑removing content. 

     And it is obviously harmful to freedom of speech.  The proposed legislation was to pull down content within 24 hours, which does not actually provide enough time to adequately evaluate content.  And as my colleagues were saying just a minute ago, well, there is a massive impact when you talk about the context of the content. 

     And indeed it is true in French ‑‑ well, in different languages.  In France we have French, but also other languages, and context plus language makes it very difficult to actually be effective when using automated tools. 

     Another interesting point: obviously we noted during the interviews that there was massive use of Artificial Intelligence and of nontransparent algorithms.  And then a risk of having a black box, and of not investigating why some content was flagged by the machine, which results, you know, in a society‑wide problem. 

     And as a result there was also a risk of reinforcing dominant positions to the detriment of small actors and of reinforcing bias in moderating content.  Hopefully, the Commission at the time issued two opinions.  And the French Constitutional Council did strike down most of the key provisions of the bill ‑‑ basically the provisions I was mentioning ‑‑ which resulted in enacting an observatory to study hate speech. 

     And obviously it is a problem that we're mentioning.  And another interesting point is there is a clear need to have Moderators understanding the context and the language when moderating, and not only, obviously, moderation coming from nonnative speakers. 

     There is a need also for a national action plan for digital education and citizenship. 

    And on another note, the Commission today is conducting extensive work on AI and Human Rights following the work on hate speech with publication of this expected in March 2022. 

    Thank you. 

   >> SAFRA ANVER:  Thank you so much.  So I'm going to add all of you on to the main panel so that we can have questions as well.  I hope Lucien the connection is okay for that.   

   >> LUCIEN CASTEX:  Sure. 

   >> SAFRA ANVER:  So if anyone from the audience has any questions at this point. 

   >> This is Babba.  My question to the Honorable last speaker is, when we talk about nontransparent algorithms, there is the concept of the man behind the machine.  How will we solve this problem when there is no uniform formula, no uniform principle?  Then who will be having hands on the wheel?  He will be deciding where to go.  And the gulf between Global South and Global North will remain there.  Thank you. 

   >> SAFRA ANVER:  Lucien. 

   >> LUCIEN CASTEX:  Yes.  It is an interesting question.  Indeed it is quite complicated to tackle online hate speech and to just have humans in the mix.  One of the ‑‑ one of the reflections we had when conducting, and still conducting, studies on the topic was basically to have a strong team of Moderators, human Moderators, from the country, from the area, speaking the language natively ‑‑ of course, not a perfect solution, because you will need a lot of them ‑‑ understanding the cultural context, and then to act, obviously, on users flagging the content.  And a process relying on the judicial system to, over time, evaluate the content, with a few exceptions obviously regarding, you know, terrorism or other high‑risk content. 

     Also another point is to make sure to put in place rules that are transparent, and obligations for digital platforms to comply with such transparency rules.  So to have expert Committees able to evaluate algorithms used in content moderation.  And basically to have a mix of measures that might enable us to understand the moderation. 

   >> My name is Naz.  I'm a parliamentarian from Pakistan.  And my question is regarding the hate speech.  What I wanted to ask you all is that what might be hate speech in my part of the world might not be considered hate speech in some other land.  So definitely the law of the land applies in every particular country.  My question is that many a times it happens that we feel that a basic individual right of a person has been breached or some hate speech has taken place in some other part of the world, but when we complain about these sorts of cases and issues to the social media providers, we most of the time get the reply that it is not considered as hate speech in that part of the world where the social media providers exist. 

     So the compliance that you are talking of, we have a lot of issues with that.  So what do you think is the best way to deal with this?  Because the earlier speaker ‑‑ I'm missing the name actually ‑‑ was discussing the language issues.  Many times people are using language that is not understandable to other countries or to the social media providers.  We face these issues on a day‑to‑day basis.  How do you think we can tackle these issues?  Thank you. 

   >> SAFRA ANVER:  I am going to add Zeerak here as well so he can give his opinion as well and then we are going to the other panelists. 

   >> ZEERAK WASEEM:  I would invite Giovanni to talk about this because he went more in to depth on the challenges there. 

   >> GIOVANNI DE GREGORIO:  Yeah.  I mean, thank you, thank you for that.  I mean I'm definitely not an expert on that.  It is part of our outcome on, even partial outcome that we got from the research that we are conducting.  It is not easy to answer all these questions because it depends a lot on the context in which we are focusing. 

     So sorry, just to repeat, the context is important.  The way moderation by AI is conducted, for example, in Europe or in the U.S. or wherever, is different from the way it is conducted in other areas of the world, for many reasons, and some of them have already been mentioned.  And the language is only ‑‑ I would like to just add this point ‑‑ the language is only one of the reasons why there is a kind of inequality in the way that content is moderated around the world.  These are not just countries that are traditionally marked as being in the Global South; it concerns the incentives the platforms have to moderate content in different areas of the world.  It is not easy to answer this question in a straight way. 

     But, of course, the problem of language is just one of them.  Then there is also the problem that has been mentioned of the role of humans in this field, and the role of human Moderators.  If you think about it, no one knows where these human Moderators are around the world.  And probably, no matter whether you live in a consolidated Democracy or not, it is important to know who is managing and taking decisions on this content ‑‑ whether it is just AI, whether it is a human Moderator in Japan, in the U.S., in Latin America or the Philippines.  So it is important to know that.  Because the decision on your content, on hate speech ‑‑ also given different degrees of culture and understanding of free speech ‑‑ is taken by different people around the world with very different backgrounds and expertise. 

The question is about who these human Moderators should be.  They can play a very important role in the field.  They mitigate the risk of AI, of just leaving AI doing its job.  And we have seen this during the pandemic, when some platforms decided just to rely on AI, leaving human Moderators at home because of COVID.  We had a problem of spreading misinformation and blocking of accounts, so human Moderators are important.  If you look around, most of the human Moderators are usually deployed where there is actually no AI able to moderate content. 

     It is not by chance.  If you look, there is research showing that human Moderators are usually in countries where there is no possibility to use AI, because AI cannot understand the content or language.  So you use humans.  And it is not by chance that these are usually countries where platforms can even outsource these services and pay a very little amount of money for this service, rather than investing so much money in AI.  So these are the questions around how AI could tackle hate speech.  It is more about the politics of content moderation. 

   >> SAFRA ANVER:  Does any of the others want to add on to that? 

   >> VINCENT HOFFMANN:  I might add a bit from a German perspective since, sorry, there was a law introduced in 2017, so slightly before France, Lucien, that was focusing on the removal of content from the platform.  So that was basically addressing the problem that the platforms were moderating according to their own rules, and the state law of the country that the content was published in was not enforced on the platforms.  So it made the platforms responsible for the content reported to them.  And it enforced removal within 7 days as the maximum, and, if it is obviously illegal ‑‑ which is also a term that's highly discussed ‑‑ within 24 hours, so quite fast that decision has to be made on the platform.  That was the German approach of making the local law applicable on the platforms by making them responsible for the removal. 

   >> SAFRA ANVER:  Thank you.  Any of the others? 

   >> Hi.  I had a question and then I will pass on the mic.  I wondered, and I'm potentially broadening the scope here, if we are looking at whether AI can moderate hate speech, whether we can avoid talking about data protection.  In order for the algorithms to be more effective, or the machine learning to be more effective, it requires data.  And given the nuances that are involved in making decisions on particular types of content, it requires perhaps more sensitive data in order to provide the appropriate context.  So I just wondered whether the panel had any reflections on the privacy side of providing data to help moderate this content, and if there are challenges that have come out of any of your research.  Thanks. 

   >> SAFRA ANVER:  Giovanni? 

   >> GIOVANNI DE GREGORIO:  Yeah, thank you so much.  I mean this is another huge question; the boundaries between privacy and data and content are very close to each other, you know.  So, of course, to train AI ‑‑ just generalizing a little bit ‑‑ it is important to have large amounts of data.  So, of course, there is a connection.  Because to train AI you need not only nonpersonal data but also personal data ‑‑ data, of course, that involve sensitive information, racial origins.  You can imagine, if you have an AI clustering and removing hate speech because it is racist, you need to train the AI to understand what racism is.  You need to use speech and data to train the AI to learn what is racism and what is gender or whatever. 

     So, of course, these are really personal data.  You cannot train AI to detect racism using other kinds of data.  It would be quite impossible.  So the problem of data is very well connected to the problem of content moderation.  Also another point: when human Moderators or AI process content for removal, of course, that content includes data inside, because the AI detects whether there is a pattern concerning the color of the skin, or whatever ‑‑ it could be a racial characteristic.  This is actually a kind of data that is processed to remove certain content. 

The question of data is really relevant, to be honest.  I'm trying to push more the research and the focus on the intersection between online content and data, personal data, because there is not so much research sometimes about the intersection between the two systems ‑‑ about how data, of course, are content at the same time, and also important to train AI.  This is really important to stress.  But the safeguards are really not there.  If you think about the legal safeguards, they are not really there.  When we look in Europe, let's think about the very famous GDPR: it doesn't say so much about protecting data in the field of content, if you think about it. 

   >> SAFRA ANVER:  Thank you.  Can we have Zeerak and then Lucien? 

   >> ZEERAK WASEEM:  Yes, being classically trained as a machine learner or a computer scientist working with machine learning, there is a fundamental conflict between data protection and machine learning, especially in the question of when ‑‑ especially when we are acquiring context.  If we think about machine learning about this massive machine that takes a lot of data and tries to figure out some realms within which people exist that's going to need the data of the people.  And even if we don't explicitly give the machines information about our race and gender and so on, they do pick up these things as latent variables. 

     So even when we don't provide them with this information, they pick up on it.  And I think this is a very important question, because we need to think more carefully about how we can address this conflict of privacy and fundamental rights while developing machines for content moderation that take in to account the cultural complexities and specificities of each geographic region and each cultural group for whom the content moderation technologies are acting. 
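(As a minimal sketch of the latent-variable point above: the data below is synthetic and constructed so that a proxy feature correlates with a protected attribute, which is an assumption for illustration only.  Even though the protected attribute is never included among the input features, a simple classifier can predict it from the correlated proxy.)

```python
# Synthetic demonstration: the protected attribute is never given to the model
# as an input feature, yet it can be predicted from a proxy feature that is
# (by construction here) correlated with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

protected = rng.integers(0, 2, size=n)            # e.g. a group membership
# Proxy features correlated with the protected attribute (think word choice,
# spelling conventions, posting time), plus an unrelated noise feature.
proxy = protected + rng.normal(0, 0.5, size=n)
noise = rng.normal(0, 1, size=n)
X = np.column_stack([proxy, noise])

clf = LogisticRegression().fit(X, protected)
print("recovery accuracy:", clf.score(X, protected))  # well above the 0.5 chance level
```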

   >> SAFRA ANVER:  Thank you.  Lucien, I know you have been very active on the chat.

   >> LUCIEN CASTEX:  Sure, sure.  Regulating such a topic is quite interesting to reflect on.  The ongoing dynamic in Europe in regulating content and including hate speech and disinformation is focusing on the balance between the freedom of expression and basically cybersecurity and what we sometimes call the resilience of the society. 

I wanted to reflect on my German colleague's example, as we had in France also a law fighting information disorder in 2018.  And what was quite interesting is that the law was enacted in December 2018, creating a new range of duties online: to cooperate with the regulator and to develop an accessible and visible reporting system.  So that's quite ‑‑ that's quite interesting. 

     And also to implement measures such as transparency of algorithms.  To do so, to enforce that kind of compliance obligation, the task was entrusted to the French audiovisual regulator (speaking in a non‑English language), which put together a project team, which is now becoming a directorate within the regulator, and also an expert Committee composed of 18 experts from different backgrounds, just to be able to reflect on the topic and help the regulator understand what an information disorder is ‑‑ which matters because, as you know, in France we have a Presidential election coming up in early 2022. 

     And Fake News and hate speech, basically information disorder online, are a key topic and growing.  So the interesting point is that there have been two years of extensive studies.  And basically the regulator is conducting each year an extensive questionnaire to cover reporting mechanisms, transparency of algorithms and information provided to end users of digital platforms. 

     And doing so is a first step, in my opinion, to be able to understand how moderation is operated and, obviously, to enforce such transparency obligations. 

   >> SAFRA ANVER:  Sorry.  Thank you, Lucien.  We have one question here in the chat.  How can AI handle hate speech when people are using special signs instead of actual letters and words?  Can AI read these graphic pictures and compare them to blacklisted words instead of actual words to navigate it better? 

   >> ZEERAK WASEEM:  Thanks for this question.  So modern language technology methods actually split words in to the longest recognized strings.  So if you have the word actual with a "@" sign instead of the "a", what it will recognize is that very first character, and then it will split that off from the rest of the word. 

     So it should recognize that this is probably a lot closer to the word actual than it is to a purely symbolic representation.  But the more replacements people use in their characters, the harder it gets.  And this also doesn't get to the issue of replacing words with some other words. 

    So you can have some benign words.  Like at one point Google was used to refer to black people.  So that kind of a replacement is going to be a lot harder to recognize. 
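(A minimal sketch of the subword splitting described above, assuming the open source HuggingFace transformers library and its bert-base-uncased tokenizer are available; this illustrates the general technique, not any platform's actual moderation pipeline.)

```python
# Illustrative only: how a subword tokenizer breaks an obfuscated word into
# recognizable pieces. The choice of tokenizer here is an assumption, not what
# any social media platform actually uses.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

for word in ["actual", "@ctual"]:
    # The obfuscated form is split into the longest recognized subwords, so the
    # remaining pieces still sit close to the original word in the vocabulary.
    print(word, "->", tokenizer.tokenize(word))
```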

   >> SAFRA ANVER:  Thank you.  Fredrica, you raised your hand.  Do you want to voice out your question or do you want me to read it? 

   >> Hello.  Yes, I wrote it.  But mainly thank you for the opportunity.  I'm still learning a lot about Artificial Intelligence.  But as I said in the chat, what scares me the most is like the variety and diversity, for example, of languages and slangs.  For example, me, I live in the Caribbean.  Latin America and the Caribbean, we are very big and Spanish is used.  A word can maybe have a different meaning than in Chile or in Argentina.  So it's hard I think when algorithms are developed, like to take ‑‑ to consider all these little things that are in the same context.  So I cannot imagine how hard it could be to do it worldwide. 

    So that's what I really wanted to ask, what do you think that could be done because the diversity itself in regions is very huge.  So me I'm not ‑‑ I'm still learning about the AI.  But this is just like the first question I have.  And I still cannot find like a good and efficient solution maybe.  Thank you. 

   >> SAFRA ANVER:  Do you want to ask that question to a specific person or you want to open it up? 

   >> It is like everyone can ‑‑ it doesn't matter.  Everyone can answer it.  It's okay. 

   >> SAFRA ANVER:  Vincent, do you want to go? 

   >> VINCENT HOFFMANN:  Yes, I'm having a few WiFi issues.  That's why I didn't get everything, but I hope you can hear me; I would like to speak to that.  Yes.  The problem of the nuances of the language that we mentioned at the beginning.  I think one solution that might be suitable is local Councils of the platforms.  There are many further questions that you have to ask before introducing these Councils, like how much power you want to give to the private companies, and whether private ruling is an adequate solution.  So you have a lot of questions to discuss before coming to that.  But I think that once you come to the point that you have regional or local Councils at the end of the decision chain, deciding on a removal or not that was predecided by AI or by another content moderation team, that might be one solution.  Of course, it comes with a lot of work.  And, of course, those Councils have to be paid.  So a lot of money has to be spent on that.  But that could be one solution to address those regional or local differences. 

   >> Thank you very much. 

   >> SAFRA ANVER:  Does anyone want to add on before we close off this panel? 

   >> I would like to ask like one last question.  My name is Danielle.  I run a mobile product development company based in Pakistan.  I would like the panel to elaborate a little bit on the data points that are either currently being used to train such AI, or that they would like this AI and machine learning to be trained on. 

So, for example, you know, sentiment mining through social media, like Twitter, Facebook, Instagram and stuff, image recognition through newspapers, right?  Or even input regarding declared terrorist organizations.  So currently ‑‑ just to shorten it ‑‑ currently what data points or datasets are being used?  And secondly, where does the onus lie?  Will Governments be involved to hand over these data points, for example?  And if so, when we talk about Governments, you have Democratic Governments and then you have authoritarian regimes.  I would like the panel to elaborate a little bit on that. 

   >> SAFRA ANVER:  Thank you for a great question.  Lucien, you want to answer that?  You are on mute just in case. 

   >> LUCIEN CASTEX:  Yeah.  I was struggling with the mute button.  Sorry.  Can you hear me correctly? 

   >> SAFRA ANVER:  Yes, we can. 

   >> LUCIEN CASTEX:  Excellent.  Yeah.  That's a problem indeed.  Just saying when you have a Democratic regime, as the speaker said, it is a totally different question.  And when you ‑‑

   >> SAFRA ANVER:  Lucien?  I think he got disconnected.  So what we will do, in the interest of time, because we have a separate panel after this, is try to answer this question at the very end of the session, including everyone who is part of that panel as well. 

    So first off, thank you, Vincent, Lucien and Giovanni and Zeerak.  I know I put you on the spot.  Thank you for accepting as well.  We go in to our next panel, where there are three more individuals who are working on this on the ground.  And we'll definitely learn a lot from what they have to say. 

     So the theme for this part of the session is called Categorizing, Understanding and Regulating Hate Speech Using AI, Tackling Conflicts and Ethical Challenges in the Global South and Middle East.  This will give us a very interesting perspective.  To start off I will now invite Neema, the founder and director of Pollicy, based in Kampala, Uganda. 

   >> NEEMA IYER:  Hi everyone.  Can you hear me well? 

   >> SAFRA ANVER:  Yes. 

   >> NEEMA IYER:  I work for Pollicy, which is a civic tech feminist collective based in Uganda, but we have staff members all across Africa.  We do work that impacts society in the African context, because we tend to be a traditionally ignored part of the world, especially when it comes to tech development.  I'm very interested in topics like feminist data futures, looking at things like online harms against women.  Today I was invited to talk about a study that we did earlier this year.  It is called Amplified Abuse.  And it is a study that was looking at hate speech and online violence against women politicians in this year's Ugandan general elections that were held in January 2021.  We wanted to understand a couple of questions.  How do women in politics particularly use social media ‑‑ not only those who are participating in politics but also people who influence politics. 

     And we wanted to look at how it differs from men to women.  And we wanted to see how hate speech and online violence manifest online.  We scraped Twitter and Facebook.  We identified 200 accounts, 100 men and 100 women, and we scraped Twitter and Facebook.  And we did sentiment analysis on all the content.  But we decided to focus on two languages, which are English and Luganda.  And it is important to know that we have over 50 languages in Uganda.  If you take in to account the smaller dialects, that goes upwards of 100 languages. 

    Then what we did once we did sentiment analysis we classified the hate speech in to six different categories.  And basically what we saw is that men and women use Twitter and Facebook quite differently.  The abuse that women get is quite gendered and sexualized. 

     And, of course, this has been found in many different contexts.  Women get targeted based on their personal lives.  So you are not married, so you are not capable.  You don't have children, you can't be a good leader.  Versus men are targeted more about the politics, like I don't agree with that policy that you are trying to bring about.  And there were many differences based on age, based on party, frequency of use.  But I want to get in to the lexicon and the AI.  And it is interesting how this panel is divided in to Global South and the rest. 

When we had to build a lexicon we had almost nothing to go on for Uganda.  We brought Civil Society actors together.  For the English part we were able to use some databases that already existed, such as Hatebase.  And we were able to combine them together, but it ended up being a very resource intensive process. 

On the one hand it was shocking to see that Ugandan politicians were not using social media, even when we had elections where people couldn't campaign in person.  But still there was very little social media usage.  And so, putting all this information together to identify the types of hate speech, we had to hire a ton of people and they had to manually look at this information, because, as somebody asked, these databases don't exist.  They don't exist for the biggest language in Uganda.  And they definitely don't exist for any of the other languages that might be spoken by smaller populations. 
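(As a rough sketch of what lexicon-based flagging of this kind can look like in code: the terms and categories below are neutral placeholders rather than Pollicy's actual lexicon, and a real system would also need spelling variants and human review.)

```python
# Minimal lexicon-based flagger. The lexicon entries are placeholders; a real
# lexicon would be built with local civil society input, cover oral-to-written
# spelling variants, and still miss context that only human reviewers can judge.
import re

lexicon = {
    "placeholder_slur_1": "sexist",
    "placeholder_slur_2": "ethnic",
    # ... variants and common misspellings would be listed here as well
}

def flag_post(text: str):
    """Return a list of (matched_term, category) pairs for a single post."""
    hits = []
    for term, category in lexicon.items():
        # Word-boundary match, case-insensitive; real systems also need to
        # handle diacritics, spacing tricks and character substitutions.
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            hits.append((term, category))
    return hits

print(flag_post("An example post containing placeholder_slur_1."))
```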

     And one of the things that I like to say is that, you know, if you think about languages like Luganda or Swahili, they are spoken by more people than Dutch.  But it is a business case.  It doesn't make business sense for social media companies to invest in some of these languages ‑‑ it doesn't reach their bottom line. 

     There was a case where women were being attacked in Kenya, but instead of text ‑‑ because the system was detecting the text ‑‑ they were sent pictures of machetes.  And that was getting through the AI, because whoever was moderating just saw, oh, it is a picture of a machete.  The context is not understood by content Moderators.  There is not enough funding being given to content Moderators.  And many companies wouldn't even tell you how many there are across these different countries. 

     And then there are other things: some African languages, because of colonial histories, tend to be more oral in nature.  Words can have different spellings when you go from oral to written.  And then there is also the bias ‑‑ this is something that we come across often, and yes, we talk about bias all the time. 

     But thinking about the way that this bias sits in these algorithmic systems, and the way that it impacts women, Africans, People of Color, it is completely different because this bias is baked in.  We spoke to a bunch of women who said they tend to be shadow banned if they talk about queerness or racism.  You can't talk about these issues because you get flagged as hate speech.  You can't even talk about political issues that impact you. 

     And then lastly I wanted to add that ‑‑ okay, there are two more things I wanted to add.  We actually met with local politicians.  And we asked the women, you know, why aren't you using social media.  And many of them said ‑‑ we spoke to MPs and local councillors ‑‑ that they felt very unsafe and that platforms were not protecting them in any way, in closed platforms. 

So I think it is okay when we come here and talk about Twitter and Facebook, but we cannot at all regulate what happens in closed groups like WhatsApp or Telegram.  I wanted to add that it really scares me when we speak a lot about regulation of AI in the African context, because due to some issues that happened in January around content moderation, Facebook has been banned in Uganda.  And it is a long form of communication, and people felt ‑‑ they felt safer on Facebook compared to Twitter. 

So I think when we talk about this kind of content moderation, when we talk about how platforms deal with Governments or disrespect Governments, however you want to frame it ‑‑ the biggest social media platform that Ugandans use is now banned.  It is sort of like, what now?  And I think that we have to tread carefully with these questions.  And I would like to hear from other people on how you deal with big platforms meddling in the politics of geopolitically weaker countries.  When Germany passes a law, these companies have to say yes, we are going to follow your laws.  When geopolitically weaker countries do things, they are very likely to be ignored.  I will close with that.  And I would love to hear from the rest of our speakers. 

   >> SAFRA ANVER:  Thank you so much.  It was definitely a very interesting perspective and one that ‑‑ I'm sure there will be a lot of questions from the audience based on what you just said. 

     So with that we go to Rotem Medzini.  He is a research fellow in the cyber law program at the Israel Democracy Institute.  Just going to bring him up on to the screen so that he can share his presentation.  Over to you. 

   >> ROTEM MEDZINI:  Hi.  Can you hear me? 

   >> SAFRA ANVER:  Yes. 

   >> ROTEM MEDZINI:  I'm showing a slide, because one thing I am going to try to do is present work we did here at the Israel Democracy Institute with the Adven.  It is work that started with anti‑semitism, but we broadened it to hate speech more generally.  It is work by my colleague and myself.  It tries to, first of all, guide companies and online social platforms and also kind of visualize for them a way to think about how to deal with hate speech while at the same time balancing freedom of expression.  What we try to do is have a model that is separated in to two parts.  First of all, a kind of common criteria ‑‑ some way for us to make the balance and also think about it.  And at the same time it helps us have a procedural method of dealing, within the company, with the issue of hate speech: how we get notifications, how we make a decision on specific issues, how we respond to it, and finally how we make it more transparent and accountable. 

     So I'm going to show the model at a broad scale.  What we did is we tried to create those common criteria ‑‑ these questions about how we can define hate speech ‑‑ and scale each of them, with more lenient options on one side and more conservative and critical options on the other.  If you look at our slides, on the one side you will find criteria that maximize freedom of expression.  On the other hand, if we make decisions on the other side of the scale, we will minimize freedom of expression.  And if I am, for example, a manager of a company, or a director of a company, and have to make a decision about which policies to create for the company, I can visualize that decision using this kind of model. 

So what we did is we took a model offered by Andrew Sellars from the Berkman Klein Center and we made it in to five basic questions.  Those questions address how we can define hate speech.  So first of all, we make a decision about which kinds of groups we want to protect.  The speech needs to target a group or individuals who are members of the group.  Which groups do we protect?  Only racial, ethnic or religious groups, or do we broaden the scope of the groups that we protect to include political, professional and other groups? 

Which kind of expression do we cover?  Do we stick to a closed list of definitions ‑‑ there was an example of slang, or slurs ‑‑ and keep only to those terms?  Or do we try to broaden the scope of the terminology that we adopt?  Here, with a mixed approach, we can adopt AI to learn more, using NLP or supervised learning mechanisms: label the data and then learn, using AI, new terminologies that might not be available in closed lists. 

     Then we ask which kind of speech we want to address.  Do we stick only to posts that call for physical violence?  Or do we broaden the scope to also cover direct mental harm, or nonphysical or indirect mental harm, knowing that if we broaden the scope of the calls that we want to take down or address, we basically minimize the content on platforms.  Then we ask whether there is an intent involved in the statement. 

     And here, do we look for only explicit intent ‑‑ if it is only me calling for something ‑‑ or do we also look for things like implicit intent?  And finally, which type of violence do we include in the debate?  So if we do this, one thing we saw, for example, is that we took Twitter's policies and we noticed how they placed their policies across the criteria.  In some cases we saw that Twitter has a very clear statement around, for example, only explicit intent, and again only around violence. 

But in other cases we found that they were less specific about issues.  So on some issues they stick to the closed list definitions and on other issues they were broadening the scope to a context‑based approach.  One thing the model helps us do is illustrate this.  But we can also balance and think through where we want to put our policies and whether we want to move them to one side.  If we want to compare between companies, we can also use the model to do that. 
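(One way to picture the five-question scale described above is as an explicit policy configuration that a platform could write down and compare across companies.  The field names and example values below are a paraphrase for illustration, not the Institute's published schema.)

```python
# A sketch of the five criteria as a declarative policy object. Values towards
# the "broad" end restrict more speech; values towards the "narrow" end
# maximize freedom of expression. The labels are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class HateSpeechPolicy:
    protected_groups: str     # "race_ethnicity_religion_only" .. "incl_political_professional"
    covered_expression: str   # "closed_list_of_slurs" .. "context_based_ai_expanded"
    covered_harm: str         # "calls_for_physical_violence" .. "indirect_mental_harm"
    required_intent: str      # "explicit_only" .. "implicit_included"
    violence_type: str        # "physical" .. "any"

# Example: a policy that is narrow on intent and groups but broad on harm.
example_policy = HateSpeechPolicy(
    protected_groups="race_ethnicity_religion_only",
    covered_expression="closed_list_of_slurs",
    covered_harm="indirect_mental_harm",
    required_intent="explicit_only",
    violence_type="physical",
)
print(example_policy)
```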

     And then the other side of the model is meant to help the company develop a policy within themselves.  So first of all, create the corporate policy to reflect decisions regarding the scales we just saw, but then also guide them in how they should make a decision about how to handle content.  So, first of all, recognizing the publication characteristics of the platform or post, because platforms vary between having fully public statements, like Twitter, and closed groups, as we have sometimes on Facebook, and then also cases of private messaging, like WhatsApp.  So the platform has to decide what fits in their decisions.  The other thing that we said is that algorithms can flag content, but they shouldn't be allowed to make the decision about censoring content. 

   >> SAFRA ANVER:  Rotem?  You got disconnected for a second.  If you could repeat the last few sentences that you just spoke.

   >> ROTEM MEDZINI:  Yeah, the two things we said were about recognizing the publication characteristics of the relevant platform ‑‑ understanding whether you accept all public statements, like Twitter, or whether you concentrate on closed groups like Facebook does, or whether the post appears in private messaging, where you might not make the same decisions as you would for a public statement.  And the other thing is that algorithms can be used to flag content for reviewers but not to actually make decisions about censoring content. 

     And so that was a very important issue.  The other thing that we understand is that there has to be a notification process, and an understanding of who the actor is that actually notifies about the problematic content.  So you might have different schemes for national contact points, trusted reporters and users.  With national actors you might say, well, I understand what you are saying, but I will more likely use geo blocking; I will make a decision concentrated on your country and not broaden it up to the bigger world.  But if I get the same notice from a trusted reporter, Civil Society or an NGO, then I might consider broadening the scope and the scale of where I'm taking down the content. 

And when it comes to users, I might do another thing.  For example, we will want to make sure that the other side has the ability to respond, because sometimes people make reports that are kind of personal and not actually based on objective criteria.  On the other end, if someone flags too many false positives, they might be sanctioned in a way, to make sure that they won't do that anymore. 

     So you can have different schemes for different actors.  When it comes to the actual decision, we said that the company doesn't have to automatically impose a permanent decision.  It can consider, for example, in the beginning just to limit the virality of the post before it makes a formal decision, and then scale it up and allow the user to delete the post before saying we are going to sanction you. 

Then we can allow the platform to take down the post and decide on whether to suspend the user or not.  But in a way, we said, there is a scale; we don't have to automatically go to permanent suspension and automatically delete the post.  We can, first of all, let the user understand what they did wrong and amend the action. 

     And the most basic thing they can do is limit the virality of the problematic post until they make a final decision.  Lastly there was the issue of transparency and oversight: being transparent about how the platform addresses the problem, and oversight by managers and directors ‑‑ again going back to the scale and thinking through whether they decided on the right policy for the company.  And if something doesn't fit, managers and directors can rebalance, go back to the scale, see what went wrong and change the policy once again.  This is a way of guiding and visualizing for managers and directors how they can address and balance between hate speech and freedom of expression.  That's it.  Thank you. 
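(The graduated responses described above ‑‑ limit virality, let the user amend, take down, and only then suspend ‑‑ can be read as an escalation ladder.  The sketch below is an illustrative paraphrase of that idea, not the Institute's actual procedure.)

```python
# Illustrative escalation ladder: an algorithm may flag content, but the
# harsher, more permanent sanctions sit at the end of the ladder and require
# a human-confirmed decision before escalating.
from enum import IntEnum
from typing import Optional

class Action(IntEnum):
    LIMIT_VIRALITY = 1     # interim measure while the case is reviewed
    ASK_USER_TO_AMEND = 2  # let the user delete or edit the post themselves
    TAKE_DOWN_POST = 3
    SUSPEND_ACCOUNT = 4    # last resort, never decided by the algorithm alone

def next_action(current: Optional[Action], human_confirmed_violation: bool) -> Action:
    """Step one rung up the ladder, starting with the least restrictive measure."""
    if current is None:
        return Action.LIMIT_VIRALITY
    if not human_confirmed_violation:
        return current  # no escalation without a human decision
    return Action(min(current + 1, Action.SUSPEND_ACCOUNT))

print(next_action(None, human_confirmed_violation=False))
print(next_action(Action.LIMIT_VIRALITY, human_confirmed_violation=True))
```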

   >> SAFRA ANVER:  Thank you.  That was definitely a lot to take in.  And I hope all of you took notes, because it is definitely a structure that we can try implementing in our countries and adapting to suit our local narratives.  So with that we go to Raashi.  She is the global project coordinator for HateBase at The Sentinel Project.  Let me spotlight her. 

   >> RAASHI SAXENA:  Can everyone hear me? 

   >> SAFRA ANVER:  Yes. 

   >> RAASHI SAXENA:  Hi everyone.  I hope you are doing well.  I have been trying to listen to all these fascinating discussions and perspectives.  And I'm going to talk about what we at The Sentinel Project do.  We are an organization based out of Canada that uses technology.  We work in a lot of conflict zones.  We do a lot of work in South Sudan, the DRC and some other parts of Africa.  And we do work in Asia, in Myanmar and Sri Lanka.  And we work around misinformation management and prevention of election violence in Kenya.  But yeah, for the sake of this presentation today, we will be talking about how we address online hate speech. 

     Let's move on to the next slide, please.  Pardon me, I do have a slight cold.  Just to give you a little bit of a disclaimer that there might be hate speech terms that come up while we are having this conversation, which might be profane based on the context, but we are going to try to make sure that we don't use too many.  Next slide, please. 

     So I wanted to talk a little bit and stress what we are doing at HateBase.  We are essentially a platform to monitor online hate speech across the world.  We built it for a better understanding of how the online world influences the offline world, to have a better understanding of dynamics on the ground.  In some terms, a lot of the violence you see offline is a consequence of hate speech; it is not necessarily something that happens in a day, but a lot of coordinated activities that over a period of time actually lead to offline violence.  So we need a more nuanced understanding of both worlds, which is not so great at the moment. 

    We also use a sort of rudimentary analysis to be able to monitor hate speech, and we categorize terms for our platform and for our human moderators.  We base the categories on ethnicity, nationality, religion and class.  And we keep the offensiveness aspect separate: each term gets an offensiveness rating so that we can understand the different social dynamics that are at play here. 
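
    To make that categorization concrete, here is a minimal sketch of what one vocabulary entry might look like as a data record.  The field names, the 0‑100 offensiveness scale and the example values are illustrative assumptions, not HateBase's actual schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class HateTermRecord:
    """Illustrative vocabulary entry; not HateBase's real schema."""
    term: str                                          # the word or phrase itself
    language: str                                      # ISO 639-1 code, e.g. "si" for Sinhala
    targets: List[str] = field(default_factory=list)   # e.g. ["ethnicity"], ["religion", "class"]
    offensiveness: int = 0                             # assumed 0-100 rating, kept separate from the categories
    unambiguous: bool = False                          # False if the term is hateful only in some contexts

# Example entry (the slur itself is deliberately redacted here).
entry = HateTermRecord(term="<redacted>", language="en", targets=["nationality"], offensiveness=70)
print(entry)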

    Yeah.  Maybe we can move on to the next one.  Yeah.  I wanted to go back to the more historical aspects of how hate speech proliferated, and perhaps ask all of us to remember that this isn't something that has been created by the advent of tech.  We have had hate speech and incidences of genocide in the past.  Some of the audience members have mentioned that we don't have a universally accepted definition of hate speech.  But the way information flowed earlier, if you go back to the Armenian genocide, required a lot of institutional infrastructure to come together to push an agenda and propaganda.  Now the rate of dissemination is much faster, and it is difficult to identify the source. 

    You can have a cheap mobile device with good Internet, which can reach your target audience a lot faster.  And earlier genocides required a lot of financial resources.  That's a little bit about the historical aspects. 

    Next slide, please.  Yeah.  My other co-panelists have mentioned some of the ethical, social and technical challenges.  We moved towards automation because, as a lot of people mentioned, human moderators are really poorly paid.  And, looking at it from a social aspect, it takes a massive toll on their mental health and well-being.  You can imagine human moderators looking at 10 to 12 hours of content online while being poorly paid.  So there has to be a better way for that career or industry.  And there is also a massive issue of limited linguistic coverage. 

    As people mentioned, we have different dialects.  I'm from India and we have 20 different dialects.  It is nuanced in the sense that the Hindi spoken in my particular state might be different from the Hindi spoken in another, and it depends on who you are talking to, whether a particular minority group or a particular majority group.  The English and other slang spoken in the UK is very different from that in Canada.  So those nuances are very, very hard to capture using automation.  We realize that automation has not reached the point where it is fully accurate or reflective of context, so we still have human moderation alongside a basic, rudimentary sentiment analysis.  But automation has some advantages.  One is scale: it is really hard for humans to cover the volume of content that we have seen, especially with the pandemic, when more people came online.  And, pardon me, in some cases automation can also be a little more accurate than human moderation. 

    So yes, those are some of the aspects that I would kind of agree with when it comes to why we use automation. 
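
    As a toy illustration of why rudimentary, term-based automation misses that nuance, the sketch below flags any post containing a listed term, regardless of dialect, speaker or context.  Everything in it, the placeholder terms and the example posts, is invented purely for illustration.

# Toy keyword matcher: flags any post containing a listed term, with no sense of
# dialect, reclamation or context. Terms and posts here are invented placeholders.
WATCHLIST = {"slur_a", "slur_b"}

def naive_flag(post: str) -> bool:
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & WATCHLIST)

posts = [
    "They called me slur_a again today and it hurt.",   # a victim reporting abuse: flagged anyway
    "slur_a go back to where you came from!",            # an actual attack: flagged
]
print([naive_flag(p) for p in posts])   # [True, True]: the matcher cannot tell these apart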

    And next slide.  Yes.  Coming from the Global South, we have also seen a lot of restrictive responses from a lot of countries. 

    Many of them have been incredibly broad.  Personally, I am opposed to hate speech, but as I mentioned earlier, we are specifically a monitoring tool.  We also look at working and collaborating with people across the world to be able to understand this.  We don't support the censorship of speech; it depends on the scenario, but we do think that in many cases censorship also reinforces the beliefs of a lot of the groups that are out there, and it removes the term and not the hate. 

    Yeah, this is just a little bit about HateBase, which, like I mentioned, is the monitoring tool we have.  These are the different categorizations, and you will see a little bit on the map about some of the terms that have been presented.  Maybe we can move on to the next slide and talk a little bit about the languages. 

So yeah, we currently have 98 languages across 178 countries.  We work with different Universities, and our API is also freely available and open sourced.  As one of the earlier speakers mentioned, she has used the HateBase API for her research.  We basically advocate for hate as a service.  We also use datasets from different organizations, although, as Neema said, some of them are too broad.  So we accept terms, and we also accept associated terms around hate speech that are commonly used, for categorization.  The categories are pretty broad in approach. 
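
    For anyone who wants to experiment, here is a rough sketch of how a client might pull vocabulary from an API like HateBase's.  The base path, endpoint names, parameter names and response shape below are assumptions for illustration; check the current API documentation before relying on any of them.

import requests  # third-party HTTP client

# Assumed base path and endpoints; verify against the official documentation.
BASE = "https://api.hatebase.org/4-4"

def fetch_vocabulary(api_key: str, language: str = "en") -> list:
    # Step 1: exchange the API key for a short-lived session token
    # (endpoint, parameter and response field names are assumptions).
    auth = requests.post(f"{BASE}/authenticate", data={"api_key": api_key}).json()
    token = auth["result"]["token"]
    # Step 2: request one page of vocabulary for the chosen language.
    resp = requests.post(f"{BASE}/get_vocabulary",
                         data={"token": token, "language": language}).json()
    return resp.get("result", [])

# Usage with a hypothetical key:
# terms = fetch_vocabulary("YOUR_API_KEY", language="hi")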

    And we are always looking at expanding with new terms.  We started something within HateBase called a citizen language lab so that citizens can contribute.  You don't have to be a professional linguist; anyone across the world can contribute to our platform.  We are slowly moving open source, so you don't have to register any more.  Earlier you had to do that, and we are trying to remove those barriers.  Anyone who has a nuanced and comprehensive understanding of their local landscape can submit a term and help us cover as many languages as possible.  So this is the current status at the moment.  Yeah. 

So this is just a little bit of a playbook of what we do.  We advocate on the side of transparency, more awareness and monitoring, and we also lean towards education.  Not a lot of countries have reliable information, so we do a lot of independent research and analysis.  And, of course, we work with Governments to create informed government and non-government policies that lead to triaging resources to highly impacted populations. 

    And I think I don't have any more to add.  I know we are running out of time.  I'm happy to take any more questions and thank you so much. 

   >> SAFRA ANVER:  Thank you so much.  Let me just bring everyone back on the panel.  Since we are short of time, does anyone have any questions?  Nothing at all.  Okay.  So one thing that I would like to ask all of you is, who do you think should be responsible for the development and enforcement of these policies to restrict hate speech and incitement to violence online?  And how should these be applied?  Maybe we can start with Neema and then go to Rotem and then Raashi. 

   >> NEEMA IYER:  That's a great question.  And I think it needs many, many different stakeholders.  So, of course, it starts with private companies, because they oftentimes have the most money to fund different things.  But I think they need to work closely with Civil Society, everything from grassroots movements to women's rights movements, a very broad spectrum of people working within Civil Society, and they should be compensated for their work.  So I don't think companies that make tons of profits should rely on the volunteer work of, you know, Civil Society (cutting out). 

    Doing the research behind linguistics and understanding hate speech and AI.  And then lastly, I would put Governments, because of what that looks like in the African context.  But I think for all of this to work out we really need transparency from the platforms themselves, and for them to share their data.  For a study we did, we had to build some scripts and do it in a very, very manual way, which wasn't fair.  So I think there needs to be more cooperation and accountability and sharing back that feedback.  And if we wanted to do a study on content moderation, to be honest, I think that many people are stuck behind NDAs and can't talk about their experiences.  So I would love to see private companies open up their platforms and open up their systems, especially for (cutting out).  That's my bit.  And thank you so much for having me. 

   >> ROTEM MEDZINI:  Go ahead. 

   >> RAASHI SAXENA:  I agree with a lot of what Neema said.  We have an NDA ourselves; we have worked with a lot of social media companies, and I do believe there needs to be transparency and accountability.  They have such a wide reach and would be able to help us solve this issue.  But it comes down to enforcement, and it comes down to service providers, social media companies and, of course, Governments.  So I think we need more development of policy, which would ideally be done by a variety of stakeholders, moving towards some sort of standard practice.  And it would be nice to rely on Civil Society, which has a much deeper insight into the ground realities. 

   >> SAFRA ANVER:  Rotem.

   >> ROTEM MEDZINI:  Yes.  So basically I agree with what Raashi and Neema said.  It is a co-regulatory method of working together.  But what we thought is that the online social platform provider is the one that leads the initiative, because in the end it owns the platform and controls it.  That is the point I would add: it needs to lead.  And the other thing, as I said earlier, is that we can have different schemes of collaboration between different actors.  We don't have to say that law enforcement and state actors need to get the same treatment or the same response as, let's say, Civil Society actors.  As I said before, if state actors notify me, I can consider whether I want to broaden the takedown to the entire world or concentrate it on the specific country.  On the other hand, for Civil Society I can decide on different rules.  I can work with them in another way; I can train and collaborate with them on different aspects. 

    And lastly, with users I can adopt a different approach.  I don't have to always have only one form of notification and only one response.  If I'm the online social provider and the manager, I can think through the different actors and the responses that I apply to each and every one of them. 
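
    A minimal sketch of the kind of actor-aware routing described above might look like the snippet below.  The actor labels, the geo-scoping rule and the review queues are all illustrative assumptions, not part of the panelist's framework.

from typing import Optional

def handle_notice(notifier: str, post_id: str, country: Optional[str] = None) -> str:
    """Route a hate-speech notice differently depending on who sent it (illustrative only)."""
    if notifier == "state_actor":
        # A legal order might justify a geo-scoped restriction rather than a global takedown.
        return f"restrict {post_id} in {country}" if country else f"review global takedown of {post_id}"
    if notifier == "civil_society":
        # Trusted flaggers could get a priority queue plus feedback and training loops.
        return f"priority review of {post_id}"
    # Ordinary user reports enter the standard graduated-response ladder.
    return f"standard review of {post_id}"

print(handle_notice("state_actor", "post123", country="LK"))   # restrict post123 in LK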

   >> SAFRA ANVER:  Thank you so much.  I know that we have a lot of questions, but I have been told that we have a hard stop at 3:30 my time.  So thank you so much to Neema, Rotem and Raashi, as well as our previous panelists, Giovanni, Vincent, Zeerak and Lucien, for being here.  I know that all of you have a lot of questions.  Please reach out to any of our panelists; they do go through them and would love to answer any of the questions that you have, and you can read up on them on their profiles on the IGF platform.  I hope you have a great day.  And feel free to keep asking questions about online hate as well as AI. 

   >> RAASHI SAXENA:  Thanks, everyone.  This has been wonderful. 

   >> ROTEM MEDZINI:  Bye.  Thank you very much for the panel.