IGF 2019 – Day 2 – Estrel Saal C – NRIs Collaborative Session On Harmful Content

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Good afternoon, ladies and gentlemen.  Our session is the NRI collaborative session on harmful content.

Just a minute, please. 


The NRI collaborative session on harmful content on the Internet.  We have a number of policy questions that we will be following, and my suggestion is that we follow them one by one, with the panelists commenting on each.  The first policy question is: how can contact and content risks be addressed successfully by legal and regulatory approaches as well as by technical instruments, and how can digital civility be increased?

I hope we understand the question.  So can we start with Japan?  Yes?

>> Japan:  Hi, my name is Toshiaki from Japan.  I'm the Vice-Chair of the Japan Internet Society association.  About six or seven years ago, we first installed a blocking system, and only for child pornography.

      In Japan, under the Constitution, interfering with the secrecy of communication is strictly prohibited.  So blocking is not easy; we are still carrying the legal risks now.  But child pornography is a very ‑‑ how can I say?  A very serious human rights issue.  So we have to do it for the future, for the children.

      So from this point of view, it is not easy to block content such as pornography or other illegal or harmful information, and we don't have any other measures in Japan except for strictly prohibiting child pornography.

      Lastly, we have a big discussion about piracy of manga, Japanese manga.  There is a fight over making a law to protect that property.  That's all for now.



>> MODERATOR: Thank you, representative from Japan.  He mentioned that Japan is taking action on the issue of Child Online Protection, along with a number of regulations.  Can we have Armenia?

>> Armenia:  Thank you, Diana Negal speaking on behalf of the Armenia IGF.  I would like to thank all the colleagues for this opportunity to create a collaborative session.  We started these collaborative sessions two years back within the IGF scope, and the first year turned out to be very successful.

      Last year, we found that some of the sessions were not that well attended or well organized.  But we thought there would still be value in continuing, bringing the efforts of all the NRIs together around the topics we have in common, to share the experiences we have in our own countries.

      So this year we have several topics, and on harmful content I would like to thank the participants from all the countries that have come together throughout the whole year: we coordinated several calls, discussed the policy issues and all the content we have here, and shared practices.

      From the perspective of the Armenian experience with harmful content: as the registry manager, we do not do blocking, but we have a registry policy that domain names must not carry content that is illegal or harmful to the country, for instance content inciting ethnic unrest or fighting.  So the policy states that such things are not allowed in the content behind a domain name.

      This gives us the right to revoke the domain registration.  We can also revoke a registration in case the data provided by the registrants is incorrect or missing.  So that also gives us the right to revoke the registration of domain names.

      There is a group within the registry which is itself multistakeholder.  We include in this group representatives of the police and national security.  So the issues that run against national security, for instance registrations coming from certain countries or bringing in racism, which concerns national security, instantly go to the police department and the national security department.

      When alarms and notifications come in and are discussed within this group, we give the registrant a notification that there is something which is against the policy, and we can stop the delegation of the domain name.  We give them time, and if they respond and take action, the domain name stays registered.  But if ‑‑

      But if not, and there are cases where they don't respond, we give additional days while trying to find the registrants.  In some cases there are fake registrants, et cetera.  So we stop the delegation of the domain name in this way as well.

      Child pornography or child abuse, when it happens, automatically goes to the police without discussion, and the domain name or registration is blocked.  There are also cases where we had adult content.  Drawing on practice from across Europe, there are websites we need to pay attention to.  But what we did in this group is not simply block or close the domain name or revoke the delegation; we reached out to the registrants and told them to put up a barrier on the first page that comes up: "We have adult content.  Confirm you are above 18 years old," et cetera.  It is not closing the website but putting up a barrier, so that those who are adults and want to go to these websites still have that opportunity.

      So we do not block in that sense.

      Over the years, I would say this model of working with this group has been really helpful, and it keeps harmful content from coming into existence without creating complications.  So ... thank you.

>> MODERATOR: Thank you, Armenia.  The representative gave a brief account of Armenia's experience with harmful content and the process by which they are addressing the issue.  The next one is Bolivia.

>> Bolivia:  Yes, well, actually, apart from the traditional laws we have in our country, we don't have any particular regulation regarding harmful content or how to address it.  We have been discussing it for, I would say, a couple of years now.  What we believe is that the main effort should be empowering and building capacity in vulnerable groups, particularly children, who sometimes receive this kind of content without anyone noticing.

      What we did, not at the national level but at a local one: I was involved in writing the draft of a bill in La Paz, which is our main city in Bolivia.  It was aimed at regulating harmful content in the Internet cafes and Internet rooms we have all over the city.  The idea is that it forced the owners of those places to provide some sort of filters to prevent children from accessing this harmful content, not only pornography but violent content and things like that.  In some cases where they wanted to allow content for adults, the regulation would force them to have separate spaces, in order to guarantee differentiated access and protect the children in that way.

      So again, we believe one of the big steps we need to take is to build this capacity, not only in the children but in the families and the whole community.  But the other side is a little trickier.  There is always going to be controversy, and that is something I would like to hear later from Armenia and the other experiences: the controversy over who has the right, who has the power, and what the procedures are for blocking any kind of content, whether enforced by DNS blocking, IP blocking, or other technical means.  We are worried about the procedure, because what we don't want is that, once such a procedure exists, we end up in the future with other kinds of blocking.  Political controversies, say, bring up different content that could also be blocked using the same procedure.

      That is something we need to discuss in order to face it all together.  Thank you.

>> MODERATOR: Thank you, Bolivia, for sharing your experience.  The next one is Nepal.  Do I have a representative from Nepal?  No?  Okay.  The next one is Cameroon.  Any representative from the Cameroon IGF?  Then the next one is France.  Thank you.

>> France:  Thank you, it is a pleasure to participate in this session.  This is a crucial debate for democratic systems, which are continually trying to find a balance between civil and fundamental rights.  The fight against hateful online content is a key priority of the French and European digital agenda.

      I would like to make three key points in this session.  First, a critical look at the French context: at the national level, we have a very proactive public policy and a succession of [?], alongside a desire to strengthen international collaboration in this field.  That is the focus on regulation.

      Beyond content, we need to be able to regulate systems and algorithms to guarantee platform liability. 

      And my third point is how we can promote a civic approach among individual Internet users.  First, about the French context.  In the last two years, two laws aiming to fight harmful content have been presented.  In 2018, a law on the manipulation of information during elections was enacted; it aims to protect democracy against the different ways in which disinformation is spread intentionally.

      And to tackle hateful online content, a law is being discussed in the French parliament.  This text asks platforms to strengthen their moderation policies and increase transparency.  Civil society has criticized it as a potential violation of freedom of expression on three particular points.

      First, the broad and vague definition of the content included in these laws. 

      Second, the short window of time to remove this content.

      And third, the limited role of the judge in the removal decision.

      At the international level, France wants a harmonization of content regulation.  The French digital sector has signed on, at the [?] event, to European and global initiatives, including the [?].

      These texts call for more information from platforms and better cooperation between platforms.

      About my second point, which is the business model of the sector: the business model is not optimized for democracy.  We have to think about the mechanisms behind the spread of content, in particular the ranking of content, the principle of user-generated content, and the processes regarding ads.  Moreover, the uses of platforms are not fixed.

      They evolve quickly.  Public authorities, civil society, and researchers must be able to follow these developments.  For this, access to data and algorithms is a key point.

      My third point looks beyond regulation, to the need to empower society and individuals to bring a civic approach to the digital environment.

      In France, [?], my think tank, created the platform [?] with an NGO.  This is a tool and a method to empower activists to face hateful online content.  [?] encourages them to be active and to participate positively online.  By providing content and advice, the platform invites defenders and NGOs to intervene [?] without inflaming the debate.

      We need to work like this for citizens to promote civic values online.  Thank you.

>> MODERATOR: Thank you, France, for sharing your experience with us.  I think from now on we need to speed up in order to keep to the time, because we are already about 19 minutes into our time.

      The next one is Lebanon. 

>> Lebanon:  Hello, my name is Zinna.  I am representing the Lebanese IGF.  I also work for the Lebanese incumbent telecom operator.  I want to briefly describe what is happening in Lebanon and what actions the different stakeholders are taking to remedy harmful online content.

      A study conducted by a major university in Lebanon confirmed the increase in the number of children who have access to online platforms, among them gaming and social platforms and others.  The study showed the following figures: 66% of the students interviewed admitted having strangers on their social media platforms.

      23% admitted having met those strangers live.

      54% admitted having been exposed to violent material.  19% of the students received messages from extremist groups.  These numbers clearly highlight the risks that children encounter on the Internet without the proper tools and ways to protect themselves.

      The figures clearly confirm the dangers our children face online, as navigating the space in an unsafe way can expose them to threats such as extremism, bullying, extortion, and others.  Hence the importance of protecting them and educating them to act smartly online.  Many entities from different stakeholder groups have created initiatives in Lebanon.

      I will tell you a little about these initiatives.  First, the Higher Council for Childhood of the Ministry of Social Affairs, in collaboration with ISPs, the security agency, and ourselves as operator and Internet provider, is working on developing a tool for teachers in schools.  It consists of material to be used by teachers to spread awareness among students about online security.

      The same stakeholders engaged in a common awareness campaign at schools in different areas of Lebanon.  An awareness campaign was conducted in municipalities, including in rural areas, to strengthen students' awareness of the risks that children may encounter while surfing.  The incumbent operator in Lebanon and main Internet provider introduced a parental control service within its offerings: when activated, it can prevent access to sites that are not suitable for children, such as violence, drugs, and pornography.  We are also working on an awareness campaign with different Lebanese schools, and on activities at our premises for children between 7 and 12 years old, to raise awareness of the risks they might face online.

      One additional initiative I would like to share with you is called (speaking non-English language), which in English is "an evening for the family."  It was initiated three years ago, and it happens every year.

      It stresses the importance of family gatherings, and of discussions and activities that can be carried out away from digital tools.  It is like a digital detox: we support the schools in Lebanon by having them assign, as homework for this one evening, only family activities, as a contribution to this moment of closeness between the family members.  The family will meet, talk, and exchange ideas.  Parents can discuss the problems of the children, and children will be encouraged to talk about their fears; maybe they will feel safer telling their parents if they are facing something online that they don't feel good about.

      The last thing I would like to share is that there is an NGO in Lebanon called Himmia.  It is developing a tool to be distributed to children, for free, during the awareness activities and events, and also at the points of sale of the operators in Lebanon.  It is an adaptation of a French tool, to raise awareness of all the online threats during the summer vacation.  Thank you.

>> MODERATOR: Thank you, Lebanon, for sharing your experience with us.  The next one is U.S.A.  Do we have ‑‑ yes?  Okay.  Thank you. 

>> U.S.A.:  Good afternoon, my name is Melinda Clem, I'm the co-chair of IGF-USA.  I work for the domain registry Afilias, so I can talk about what we do to remove harmful content as well.  At IGF-USA we have been talking about harmful content from a number of perspectives over the last couple of years, in numerous sessions.

      Before I dive into that, I want to establish that there are two relevant laws we operate under in the United States.  The first is the First Amendment to the U.S. Constitution, which provides that Congress cannot enact any laws that in any way abridge free speech.  There are noted exceptions: less or no protection is provided to harmful content, things that incite imminent harm, child pornography, the types of harmful content we're talking about here.

      So there are definitely some limitations there.  The second law, relevant for the technology sector, is Section 230 of the Communications Decency Act.  What this does is provide intermediary liability protection.  Let me break down what those words mean; I will put it in the context of my company, which might make it a little easier.

      As a domain registry or registrar, or a data center or hosting company, we aren't actually creating content, right?  We're acting as some type of intermediary, a facilitator, not actually generating content and in some cases not even hosting it.  So what Section 230 does is give us the flexibility to make a judgment call and remove harmful content if we find it, or if we have been alerted to it by third parties and have evaluated it, and it doesn't allow any legal action to be taken against us for that.  It also doesn't require us to act.

      So with that context, we talk about this at IGF-USA, and the general consensus over the last few years is that we do not want any new or additional regulation; we would like to take care of this within industry, through best practices, enhancing the things that are done today.  One of the things that can be enhanced and made a lot better is transparency.  Especially when you get out of the infrastructure layer I have been talking about and into platform providers and social media companies, you need more transparency into the processes and the decision-making: why they are or aren't acting, and at what sort of volume.  That is what we mean by transparency.  We would like, in general, more good actors.

      For example, Afilias, like several other technical members of IGF-USA, works with a number of parties that we trust, like the IWF.  They can bring content to us; we know they have done the work and are a credible, valid source, and we can act immediately to remove that domain name so all of the content goes away.  We see that with a number of U.S.-based registries, registrars, and hosting companies, but certainly not all.  Across the globe, more people working with these sorts of trusted third-party organizations would be a great way to improve, facilitate, and expedite the removal of harmful content.

>> MODERATOR: Thank you, U.S. for sharing your experience.  The next one is Nigeria. 

>> Nigeria:  Thank you very much, moderator.  My name is Wi-Zetia, and I'm sitting next to my Chair, Mary Uduma.  I think the term "harmful content" covers a broad pool of negative content, including pornography, child abuse, hate speech, disinformation, and so forth.  So an effective way to deal with it is to really separate each into its own docket, so that you don't create confusion, or create a situation in which freedom of speech and so forth can be abridged.

      In Nigeria, we have the Cybercrime Act, which was signed in 2015.  It provides explicit provisions on child pornography and other related crimes, with specific punishments, including jail time of up to 15 years and fines ranging upward from about $500.

      However, it has not been very functional, perhaps because the coordinating body has not actually been constituted.  I don't think anybody has been prosecuted in that context.  It has provisions on racism, hate speech, and so forth; again, nobody has been tried on those.  Of course, a number of journalists have been arrested and detained on that basis.  That is why it is important to make the separation.

      Now, over the last three years, there has been an attempt at the National Assembly to introduce separate legislation to tackle hate speech.  This has been generally opposed, not because people are not concerned about the harmful effects and consequences of hate speech,

      but because people are suspicious about the reasons for such legislation.  There is also the fear that it is one way to meet the demand by Government to control social media; again, there is separate legislation being processed in the National Assembly to deal specifically with social media.  Now, one of the problems people point out about this proposed legislation is that we don't even have a nationally accepted definition of what hate speech is, of what constitutes hate speech.  Without defining the offense, it becomes problematic to legislate against what you haven't really defined, and it can therefore be subject to arbitrary prosecution.

      People also point out that in many countries where you have legislation against hate speech, it is often the victims of hate speech who tend to be prosecuted, rather than the perpetrators.  These suspicions have mobilized public opinion and defeated the National Assembly's attempts to legislate on the matter.

      As I said, right now there are two bills going through legislative processes in the country: one to deal with hate speech, and one specifically about social media.

      A number of the partners in the Nigeria IGF have also been doing a lot of work on public awareness, particularly about child protection online, and our Madam Chair has been addressing quite a number of those platforms.  This is to raise awareness, because often the problem is not just that people are exposed; people are not clear about, not even aware of, how they fall into danger from other people.

      There are also a number of other NGOs that run a gender observatory, which monitors gender violence online.  I think this is an especially important component of harmful content.  In the context of Nigeria, we know a number of youngsters who have not just been exposed to harassment and negative content, but have lost their lives through such content.

      It is very important to pay attention to such negative content.  There are partners that monitor online and respond, through reporting and through the police and security agencies.

      Raising public awareness can bring out those perpetrators, expose them, and make them liable to prosecution, and so forth.  There are also other partners who engage in monitoring and countering hate speech.

      I think the point is that it is often better to bring moral sanction on people who perpetrate hate speech and so forth than to subject them to legislative or legal instruments, which might take many years; often, the consequences will already have occurred and be impossible to remedy, because if hate speech results in violence ‑‑

>> MODERATOR: Sorry. 

>> Nigeria:  Just a final point.  I think that, generally, the direction we are moving in our countries is about increasing public awareness of this, and also working with the platform providers like Twitter and Facebook to provide opportunities for people to report and to demand that harmful content be removed online.  Thank you.

>> MODERATOR: Thank you for presenting the Nigerian side of the harmful content issue.

      The next one is EuroDIG. 

>> EuroDIG:  Thank you, good afternoon to everybody.  EuroDIG is the regional IGF for Europe, but not only for the European Union: it is for geographic Europe, so we have 50-plus countries participating.  That's why I can't really talk about legislation so much, because legislation is very much a national affair.  But I can give you an idea of the messages EuroDIG issued in June after discussions and deliberations.

      Of course, in Europe, a lot of the talk about harmful content, which is sort of [?], focuses on interference by certain Governments in European elections and referenda.  It was also pointed out at EuroDIG that there is an interplay between polarization and technology, which means that extremist and hateful online content keeps growing.

      These are some of the problems we try to tackle in Europe.  At EuroDIG, we had a couple of workshops that led to the plenary on harmful content.  One was about journalism at the forefront of the fight against harmful content.  It was noted that harmful content and disinformation need to be tackled quickly; we need quick methods and quick means to get to grips with them.  There are rapid alert systems to flag disinformation.

      For instance, the European Union has, as part of its external affairs service, a unit that keeps track of various disinformation efforts in Europe and helps journalists and others to tackle them.

      One general observation was that disinformation cannot be tackled by one actor alone; it requires a multistakeholder approach.  This means fact-checking activities by journalists and others, and more collaboration between media outlets and online platforms.

      And if you have seen the news during the last year, the platforms are actually coming closer to understanding the problems of the traditional media, and in various ways we are, I hope, approaching a situation where those better relations will lead to less disinformation on the online platforms.

      Going on, the second workshop was about media literacy, which is really the necessary counterpart to any effort to tackle harmful content and disinformation.  What we did was play a "bad news" game, in which participants were put in the role of creators of fake news and disinformation, and they learned how to create effective disinformation.  It was a good exercise, because by playing that role you actually learn how to recognize that sort of content.

      Finally, at the plenary session, we talked about the regulatory approaches in various European countries and identified that regulation is a minefield, because, as everybody knows, you have to walk a very thin line to avoid infringing freedom of expression.

      At the same time, you have to try to bring the makers of disinformation into order.

      It was also noted that legislative efforts are going on in many European countries, but it is not only about new laws: you also have to consider the existing regulation and human rights frameworks, and whether they could be used or implemented better, because we are really in a new situation.  I think I will stop here.

>> MODERATOR: Thank you so much EuroDIG for sharing your experience. 

      The next one is the Italian IGF.

>> Italy:  Okay.  Thank you.  How much time do we have? 

>> MODERATOR: We have only 12 minutes left, and we are still on one policy question.  So maybe I will give each panelist one more minute.

>> Italy:  I will try to keep it to three or four minutes if possible.  Four?  Okay.  Then five.

      So I'm here representing the Italian IGF, as part of the organizing committee.  It was held at the end of October.

      On harmful content, there were several sessions.  I want to report to you five of the points that were raised during those sessions.  It was also quite indicative to see how much more prominent the topic became across the different Italian IGFs over the years, with more participants and more calls to action on these issues.  The first thing brought up was the cyberbullying law enacted in 2017, particularly targeting youth harassment and cyberbullying.  It was codified in law with a provision requiring platforms to block content within 48 hours of it being reported through enforcement officials.

      This has been one of the most impactful measures presented.  Another couple of measures were taken by the regulatory agency for communications, the independent regulator of media and communication.  In 2014 it launched an observatory on hate speech, to monitor hate speech online.  This had two important elements: it asked the platforms to report quarterly on their activities to counter hate speech online, and it invited all the Internet actors to provide evidence of the campaigns and literacy initiatives being carried out to counter hate speech.  This is regulatory action from the agency.

      Probably the most interesting item presented at the Italian IGF was the process of reaching a law that codifies revenge porn as a crime.  That came from a grassroots initiative started by young Internet users.  They started coalescing through Facebook groups, then launched a change.org campaign which reached one hundred thousand signatures.  They made so much noise through the media, thanks also, unfortunately, to some high-profile cases in which victims of revenge porn were harassed because private videos were shared on WhatsApp and Facebook.  There were two high-profile cases where the victims committed suicide, and that raised the stakes and the attention a lot.

      This law was passed in May 2018, with 491 supporting votes.  It has very harsh provisions: charges of up to six years in jail, and fines that increase every time the content is shared.  Even if you only receive the content, you can be liable.  So the law has been quite aggressive in addressing this issue.

      The last initiative, also a grassroots initiative, is important to flag here.  It is called "hostile words"; in Italian it is (speaking non-English language), basically a play on "hostile" and "styles" that doesn't translate effectively.  It is a platform that has involved over 300 journalists, social media staff and managers, politicians, and professors.  They run communication campaigns where they flag hate speech cases, using Twitter and a hashtag, and they target anything, from politicians to journalists' articles, to say: look, there is a fine line; you may criticize, but it can turn into harmful content, into hate speech.

      This is something that is working.  They got a lot of high-profile attention in the Italian media.  Even myself, as an Internet user, I come across it: when you read the headlines of a newspaper or some news shared on Facebook, oftentimes they are flagged or reposted on the platforms to point out what is wrong with them and to have a discussion about it.  So these are the best practices brought up.

      The last two, revenge porn and hostile words, were driven basically by Internet users under 30.  Very young.  Some of them, intervening at the IGF, couldn't travel; they're 16, they have to go to school, so they joined remotely.  It was incredible to see how committed they were, how aware they were.  They didn't wait for a law.  They created the platform, and when they saw that a law was actually needed, they campaigned, reached the signatures, and went all the way through to having a law enacted.  That's the report on the Italian IGF.

>> MODERATOR: Thank you, Italy.  Unfortunately, the time allocated for this is small; we didn't know it would take longer than this.  We have a lot of question areas, so I will merge the question areas and give the panelists half a minute to a minute to comment on each of them.  The areas are diverse. 

      The question is what role should Internet platforms play in defining the standards for acceptable content in light of freedom of speech? 

      Then how can globally accepted standards be developed?  That is one of the questions.

      Then the next one, what kind of collaboration could be created among Internet platforms and media outlets to fight disinformation and fake news? 

      And the last question is where is the middle ground between increasing demands for proactive content policing by digital platforms and the necessary neutrality and legal certainty for the platforms? 

      So I'm joining the four questions together.  Let's start with Japan.  Would you kindly comment?  One minute or less, please.

>> I want to suggest that if one of you comments on one question, the others don't have to comment on the same one, so the comments don't repeat.  Just take one question and comment on it, please. 

>> Japan:  In Japan, we also have arguments about something like (?).  I understand that education or something like that is very important, but it takes too much time, so we have to think about another aspect.  We already have a crisis in front of us today, such as child pornography, where the victims are located in some areas. 

      So I think we need international cooperation to find out who made it and where it is located. 

      If the content is located in Japan, we can easily delete it, erase it.  In other countries, it is not easy.  It is not easy to identify who made it or who hosts it. 

      At least if we can find out the name, address, phone number, something, we can arrest the person and, sorry, delete the harmful or illegal content.  So hosting providers, content providers, or data center operators should cooperate on things like that.  And before that, we need a system of some kind, a kind of international data exchange.

>> MODERATOR: Thank you, Japan.  For the model we will follow now, we don't need to go sequentially.  Anyone who has a comment on any policy question can comment.  I have already mentioned the question areas, which include what role Internet platforms should play, how we can develop globally accepted standards, what kind of collaboration could be created among Internet platforms and media outlets, and where the middle ground is between increasing demands for content policing and the necessary neutrality and legal certainty for the platforms.  Anyone who feels you have more to comment, go ahead; we don't need to go sequentially, because we don't have time.  Okay. 

>> France:  On constant international collaboration: we have to give concrete work to Civil Society and to exit from a [?] conversation between governments and platforms.  The lesson from the experience of [?] the G7 charter was that Civil Society arrived only at the final discussion.

>> MODERATOR: Thank you, France.  Anybody?  Okay.  Nigeria.

>> Nigeria:  I want to respond to the issue of platform providers.  I think that they can develop technical solutions and also contribute through their own standards, because I know a couple of them have already set up standards.  But I think it is important to emphasize the point that we should not surrender the issue of defining what is acceptable and so forth to them.  In particular, in countries like Nigeria, where you have a multicultural, multilingual setting, it is often very difficult for someone who doesn't understand the culture and doesn't understand some of the communication practices that are in place.

      Therefore, I think the important thing is to think about a much more inclusive process of addressing the issue, rather than a sectoral strategy.  Thank you. 

>> MODERATOR: Thank you, Nigeria.  Anybody?  Yes, go ahead, yes. 

>> IGF‑USA:  This is Melinda from IGF‑USA.  I would like to go to the final point.  Coming from the United States, where all the large platform companies are actually headquartered, I think it is safe for me to say that we do not want to take the role of the ultimate arbiter of defining what is acceptable content.  That is not a responsibility that we want or believe that we should have. 

>> MODERATOR: Thank you, U.S.

Okay.  Go ahead, EuroDIG. 

>> EuroDIG:  Thank you.  What kind of collaboration?  Many kinds, but I would say that we have a very good example from this IGF, actually: the WWW contract that was proposed.  If you look at the website, there are already probably hundreds of actors endorsing it.  Of course, this happened in the presence of Governments at a very high level. 

      So I think that sort of really luminous initiative is necessary to tackle not only harmful content, but actually many other ills that we have in the present system.  Thank you.

>> MODERATOR: Thank you, Italy, go ahead.  One minute or less.

>> Italy:  Okay.  Less than that.  The discussion already flags an interesting landscape: there is an asymmetry here.  On one side, you have users across the globe who are all at the same level as users.  And then, depending on where a user is connecting and using the services, there is a different level of protection.  We speak a lot about the digital divide; this asymmetry is actually a digital divide in itself.  And this divide cuts across the lines of rich and poor countries, because you have users that are very much exposed to these issues in rich countries just as much as users in poor countries.

      So it is clear that if you are lucky enough to be born in Italy now, and you're 13 and your boyfriend shares pictures of you online, you have more protection than if your boyfriend or girlfriend does that and you were born in any other country, maybe the U.S.

      And I think this asymmetry is something that points to the need for the discussion here at the forum.  And sometimes you don't need a law to help you.  The case the Italian IGF raised of grassroots initiatives, where they basically set up a manifesto and identified what could be hate speech, was very effective. 

      But sometimes, to go across this asymmetry, you need to take the players at the global level and say: you should probably do something to respect those standards, regardless of where you're operating.  Otherwise, there is a divide there. 

>> MODERATOR: Thank you, Italy.  Please, do we have any comment or any input from the audience for the panelists?  In the absence of any question to the panelists, do we have any further comment? 

>> AUDIENCE: I don't know if there are any comments or questions from online participants? 

>> MODERATOR: Okay.  Thank you so much to the panelists, and thank you to the participants for giving me the maximum ‑‑ okay. 

>> AUDIENCE: Just one last comment.  I wanted to comment about the experience we are going through in my country, Bolivia.  Like many other countries, we are coming out of an election.  That is the perfect place where misinformation and fake news come up.  I think as a community we learned a lot from this process, and we are ready to face it again, maybe with a different view, in terms of actually convening everybody to have some sort of platform or strategy to combat and tackle all this fake news that we received a lot of.  I think our local IGF is going to be the best place for us to continue the discussion and to come up with different new ideas to face this. 

>> MODERATOR: Thank you very much.  Okay. 

>> AUDIENCE: Thank you very much.  I want to thank everybody who commented.  I think we have to take three messages away.  From my own notes, what I recorded is: first, the grassroots work; second, the definitions; third, the awareness.  And then the collaboration: everybody must join hands to see that we tackle hate speech online.  Thank you.

>> MODERATOR: Thank you so much.  The panel is closed down.  Thank you. 

>> AUDIENCE: As closing remarks, I would like to thank Anna for all her work.  She lost her voice and cannot step in to talk.  Thank you for the hard work you are doing.  Thank you very much.

>> Thank you to the makeshift moderator.  You have done so well.  Thank you very much. 





[Concluded]