IGF 2019 – Day 2 – Convention Hall II – Addressing Terrorist And Violent Extremist Content Online

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> JORDAN CARTER:  Good morning, ladies and gentlemen, if you'd please take your seats in the room.  Welcome to the panelists.  Welcome to you all in the room.  Welcome to this session on addressing terrorist and violent extremist content online.  My name is Jordan Carter.  I'm from Internet New Zealand, and it is my job today to moderate this esteemed panel, and we are going to do our best to make sure there's plenty of time for audience engagement and interaction.

And we have our final panelist just joining us, which is fantastic.

I'll run you very briefly through the format of today's session to start.  We'll have a very brief intro where each of the panelists will introduce themselves.  We'll then see a video from the New Zealand Prime Minister contextualizing this session, and then two main blocks of content.  The first is on responsibilities and responses, and it addresses two of the policy questions for today.

The questions are:  What responsibilities do the various stakeholders have to each other?  And what responses are they making to the problem of terrorist and violent extremist content?  The second block is about rights and risks.  We'll start with the human rights implications of this type of content and what risks there are from action to tackle it.  Then we have a block that we think will be about 20 to 25 minutes for audience interventions, and we'll close by asking each of the panelists to give us input on the two closing policy questions:  What are your policy recommendations for dealing with this content going forward?  And what role will the IGF ecosystem play in making progress on these issues?

Of course, the topic is terrorist and violent extremist content.  The Internet, as most of us in this room and at this event are probably convinced, is an amazing force for good.  And one of the things that it allows is rapid communication.

And there's been a huge proliferation of social networking and other forms of platform that allow very quick dispersal of lots of user‑generated content to large audiences at high speed.  And we have seen incidents of problematic content, and the focus here is on terrorist and violent extremist content.  I speak as a New Zealander after the terrorist attacks in Christchurch, New Zealand.  In a country of 5 million people, that was proportionally the same death rate as the attacks in New York in September 2001.  So it had quite a profound impact on us, and it saw the Internet being used in a new way, to disperse problematic content much more quickly and at a much higher scale than had been done before.

So that's a piece of context that I have and carry with me and that's why I was interested in moderating this session today.  So if I could just run down the panel briefly and ask you to spend 10 or 20 seconds just introducing yourself to the audience and then we'll do the video and then come back to the substantive interventions.  So if you'd like to grab a microphone, and just say your name and organization, that would be great.

>> EDISON LANZA:  Okay.  Thank you.  My name is Edison Lanza, Special Rapporteur for Freedom of Expression of the Inter‑American Commission on Human Rights.

>> SHARRI CLARK:  I'm Sharri Clark, from the U.S. Department of State, Bureau of Counterterrorism, and my topic is counter‑terrorism‑related cyber issues, particularly countering the use of the Internet for terrorist purposes.

>> BRIAN FISHMAN:  Hi, I'm Brian Fishman.  I lead Facebook's efforts against what we call dangerous organizations, which are terrorist organizations, hate groups, large‑scale criminal organizations, et cetera.  My background is as an academic studying terrorism, and the people that became ISIS in particular.

>> YUDHANJAYA WIJERATNE:  Hi, I'm Yudhanjaya Wijeratne, a data scientist.  I'm the Founder of Watchdog, a fact checker launched in the wake of the recent attacks.

>> PAUL ASH:  I'm Paul Ash.  I'm from New Zealand.  I work for the Prime Minister you're about to hear from, on national security policy with a particular focus on cybersecurity and digital national security issues.  Thank you.

>> COURTNEY GREGOIRE:  Good afternoon.  My name is Courtney Gregoire, I serve as the Chief Digital Safety Officer for Microsoft, setting the policy to address illegal and harmful content across our products and services.

>> EUNPIL CHOI:  My name is Eunpil Choi, from Kakao in South Korea.  Today I will talk about Korea's and our company's policies, and how we cooperate on today's agenda.  Thank you.

>> GERT BILLEN:  Good morning.  My name is Gert Billen.  I'm State Secretary in the Ministry of Justice and Consumer Protection.  One of my duties is to monitor social media activities and to try to prevent terrorism, but also to combat it, and I think it's an important discussion that you've organized today.

>> KYUNG SIN PARK:  I'm K.S. Park, Law Professor at Korea University, studying freedom of speech, and also a Director at Open Net Korea.  I was one of the main drafters of the Manila Principles on Intermediary Liability, which I think will be relevant for this discussion.

>> JORDAN CARTER:  Thank you to the panelists.  It's my job to make sure we stick to time, so if we get caught up in the rush I might come and loom behind you.  I may put my hand gently on your shoulder to remind you to stop speaking, and then I'll make a cutting noise and your microphone will be cut off.  It's an escalation process:  clear terms of service, community centered, and so on.

So we will open with New Zealand Prime Minister Jacinda Ardern.  We invited her to come to this panel in person.  She couldn't make the travel schedule work, but she did record a video to contextualize the panel, and if we could play that now, that would be great.

>> JACINDA ARDERN:  The Internet is a powerful force for good.  It has an extraordinary power to connect us all:  Communities and individuals, economically, culturally, and socially.  Part of what I understand this panel will discuss is this power for good, the ability to share ideas and values on a scale we've never seen before.  The Internet and information and communications technologies empower all of us.

But this speed and this reach is not always unqualified good.  We in New Zealand saw this vividly illustrated earlier this year when the horror and fear and trauma of the Christchurch terrorist attacks was magnified by the livestreaming of the attack.  The Internet and social media was weaponized as part of this attack with deliberate efforts to make the video go viral and to subvert efforts to find and to remove it.

Our response was founded on a shared view of the harm that this content can cause.  In partnership with France, we in New Zealand established something called the Christchurch Call to Action, to eliminate terrorist and violent extremist content online.  This is a voluntary and collaborative effort to address the harms this material can cause while preserving a free, open and secure Internet and protecting human rights and fundamental freedoms.  I'm really proud of the progress that we have all made under the Call.  We have over 40 countries supporting the Call, along with some of the world's biggest tech companies, all acknowledging this is not a problem we can solve alone.

We also have a Christchurch Call Advisory Group with Civil Society groups, non‑government organizations, researchers and others able to provide guidance on these tricky issues and these are shared complex issues.

New Zealand's and others' work on the Christchurch Call is situated in the context of a range of really challenging issues related to online content.  And while we saw the worst of terrorist and violent extremist content after Christchurch, this sits against a backdrop of increasingly difficult issues related to the spread of problematic content generally online, from hate speech to misinformation.

Now, New Zealand is a small country.  Like any other state we can regulate within our borders, but we cannot solve global problems on our own.  There are some things that should shape how we approach these issues together, including in forums like the Internet Governance Forum.  For me, these include respect for international law, including international human rights law, but also other frameworks such as international counter‑terrorism law.  Secondly, maintaining a free, open, interoperable global Internet so we can retain the benefits of connectivity.

Thirdly, collaboration and consultation, including the need for Governments, Civil Society and tech companies to work together, something I think we need more of.

The Christchurch Call to Action has been just that, a call to action.  We've established new relationships and made commitments to better prevent and respond to terrorist and violent extremist content online.  Changes such as the evolution of the Global Internet Forum to counter‑terrorism into a stand-alone entity will also make it easier for us to work together but there's still more work to be done both here in New Zealand and overseas.  Thank you to all of you who are engaging in these debates and these discussions.  It is a brave new world that we live in.  Together though I do believe that we can face head‑on the challenges of our time.

>> JORDAN CARTER:  Thank you.

[ Applause ]

So the Prime Minister was able to highlight the work that New Zealand has done on the Christchurch Call.  You can find it online, and I'm sure that Mr. Ash will be addressing some of that content in his remarks shortly.  So the first part of our discussion is on rights and ‑‑ sorry, on responsibilities and responses.  We're going to open with Dr. Park speaking on the topic of the different responsibilities of the different stakeholders, so the floor is yours, Sir.

>> KYUNG SIN PARK:  Because I was abundantly warned about the time limit, I will read from a statement.  "Violent extremist content" is a misnomer.  There is a lot of extremely violent content online and offline.  Look at the games that kids play.  Remember the movie A Clockwork Orange.  The U.S. Supreme Court has already said that animal cruelty videos (in the Stevens decision) and violent video games (in the Brown decision) cannot be banned for their content.  Content can be banned only for external harms that it is necessary to prevent in order to maintain a democratic society.

In the context of the examples the Prime Minister talked about, I believe that what we really mean by violent and extremist content is hate speech.  I believe we should couch this discourse comfortably within the discourse of hate speech regulation, about which we have thought and written so eloquently and abundantly.  Having said that, do we even understand hate speech regulation so well?

Questions remain.  United Nations norms define hate speech as advocacy of discrimination and hostility across national, religious, and racial lines, but what does hostility mean?  Is causing a hostile mindset a sufficient basis?  Why not count the chilling effect on vulnerable hearers of speech as a harm justifying regulation?

Should counterspeech by minorities also count as hate speech, when their speech does not have the effect of reinforcing the preexisting oppressive social system permissive of discrimination and hostility, and therefore is not likely to cause the discrimination and hostility that justify regulation?  And how about joining an organization that advocates discrimination and hostility, instead of advocating for such things oneself?

And what are the responsibilities of platforms?  I think I have thrown out more questions than answers, so I think that's a good start.

>> JORDAN CARTER:  Thank you for a provocative and interesting beginning to the discussion.  On this stage, the interventions to come come from the various different stakeholder groups.  We have some people from companies, some from Governments, and some from Civil Society here on the stage, and what I'd encourage you to try and tease out in your discussion, if you can, is the things that you see your sector or area of this debate being responsible for.

And if you've got any views, controversial or interesting, about what other stakeholders should do as well, hopefully we can provide lots of food for thought and debate with the audience.

So we're going to start with some Government representatives and Paul Ash from New Zealand, would you like to say a few words?

>> PAUL ASH:  Thanks, Jordan, and thanks for the opportunity to speak today.  You've obviously already heard from the Prime Minister a key perspective, which was that New Zealand in responding to what happened in Christchurch did not believe that it could act alone.  We could quite readily have regulated and legislated.  We could have done that at speed but after what happened in Christchurch we felt the need to take stock and to think rather carefully about the options we had for dealing with an event that was unprecedented in its scale.

And that required us to work through some of the perils of hasty regulation and eschew it in favor of a collaborative approach that would leave us, by the time we needed to think about regulation, better informed about what was possible through voluntary collaboration, and better informed about the remaining areas where we might need to regulate.

Taking that approach, we were very much of the view that we were working in difficult, uncharted territory, where work like legislation can have significant unintended consequences unless it's rooted in a collaboration of parties.  The ultimate result was the Christchurch Call, a targeted and focused piece of work that homed in on the proximate issue we saw:  that of someone murdering 51 people who were peacefully at worship, and livestreaming it across the world.

In doing so, we saw it was essential to work with the companies whose platforms had been abused to transmit that.  We saw it as essential to work with close partners amongst other Governments who had a similar set of concerns around Public Safety and the impact of that event.

And we saw it as really important to reach out to understand what we knew was a very diverse set of views in Civil Society, from the Freedom of Expression perspective at one end, and at the other, the groups that were looking to represent the rights of the victims in this instance; and we knew we would need to find a way to bring that together into one construct.

That led to the Christchurch Call, and it led to the content of that Call, which sets out some actions for Governments that include the traditional roles and responsibilities of Government:  regulation, Public Safety, upholding the social contract, and trying to ensure that the kinds of liberal democracies New Zealand sought to work with on this keep respect for international law, respect for human rights law, and respect for counter‑terrorism law at the heart of what we do.  It also required working with companies on the things they should be delivering, and how they should be thinking first of all about implementing their own community standards, and about how they might use tools to prevent the spread of that type of content.

Finally the key thing out of the Christchurch Call was the need to work together on things we did jointly, and in a sense that's where the real rub point on this has been, and where I think we have spent a lot of time learning and I think companies have, as well, and Civil Society engagement has produced lessons for us, too.  There is no Babel Fish that translates between those parties at the moment.  There is just a need for some really hard graft trying to understand the different vocabularies we speak, and that's what we're in the midst of right now including on this panel.

>> JORDAN CARTER:  Thank you, Paul.  Our next intervenor is State Secretary Billen from Germany.  Please take the floor.

>> GERT BILLEN:  What is the aim of that kind of extremism?  What are the goals?  What you find out is that there are well‑financed Islamist but also right‑wing extremist groups who try to destroy our democracy, who try to destroy our pluralistic approach to the people who are living here.  And the way they are trying to destroy society is by attacking people, not everybody, but journalists, especially female journalists, local Mayors, people who are engaged in Civil Society, but also people who represent different religions.

They want to silence those kinds of people.  They want people to stop using their right of free speech, and that was the start of our thinking about what we can do against that kind of attack on our society.  We have had, and continue to have, talks with social media platforms, with Civil Society, and with other partners.  When we started a couple of years ago, we asked the social media platforms:  what is your task?  What are you doing?  At that time, the social media platforms saw themselves as tech companies working on economic issues.  They are companies, but we found out, and I think it is a shared idea now, that they are not only businesses.  They have direct effects on our democracy and on the way that we live together, because some kinds of misuse, or use, and which it is we could discuss, of those platforms affect us in our daily lives.

And then we asked them:  What are you doing?  And they told us they would delete hate crime.  In Germany we talk about hate crime, not hate speech.  It's not forbidden to hate somebody.  It's not forbidden for me to tell you I hate you.  But if I violate criminal law, that's different.  In Germany, it's forbidden to deny the Holocaust.  You can say the Holocaust never happened, but then we will bring you to court, and you will go to prison if you don't stop.

So we asked:  what are you doing?  They told us two things.  First, they have community standards, very good community standards, and that's their basis for deleting content.  And second, they will delete criminal content within 24 hours.

And then we ran a kind of mystery complaint test.  We gave money to an organization that has a reputation as a trusted flagger, who knows what a really substantial complaint is, and they tested it.  When they sent complaints to Facebook, Twitter and Google as a trusted flagger, the response was quite good.  They deleted the content.

But if I did the same as a normal citizen, there was nearly no reaction, and that showed us that what they had promised, they did not fulfill.

And then we started creating a kind of network enforcement law, and the law obliges the social media platforms to inform us and to build up transparency.  How many complaints do they receive?  How much content do they delete?  What are the reasons why they delete content, and the reasons why they don't?  We asked for figures and information.

Do they have an appropriate complaint management system?  Because at the beginning, for example, my impression was that the complaint management system of Twitter was nearly zero.  They didn't invest enough money, people, advice and knowledge in cleaning up their platform.  And we took a clear decision:  the basis for deleting content in Germany has to be our criminal law.  Community standards are something no Parliament has ever decided on, whether appropriate or not.  They are a kind of private law.

But in our democracy, nobody can set private rules, private law.  That's up to the Parliament.  So the network enforcement law is about increasing transparency on what platforms are doing, but also about ensuring that in cases where content violates criminal law, it has to be deleted.

What we see now is that the network enforcement law can contribute to a better understanding, but also to better monitoring of these activities.  We are now, together with the French Government, developing ideas that we want to deliver to the European Union, because we think the European Union is the adequate level at which to develop those kinds of rules and activities, and the European Union has promised to set up a Digital Services Act, which I think will have a different understanding than the e‑Commerce Directive.  The e‑Commerce Directive that we have in Europe is about notice and takedown, but responsibility is more than notice and takedown.

A second part:  we have to pay more attention to the victims, give more support to the victims, because many victims who get threatened, not only on the net but also in reality, find when they go to the police that the police don't understand what's going on.  They say:  it's just on the Internet, that's not a problem.

But we have now had some very bad experiences in Germany.  In the early summer of this year, a civil servant in Germany was shot, was murdered, by somebody who got his information through the Internet and who had, let's say, gone through a kind of radicalization through the net.  So with victims, it's not only about deleting content.  It's also about supporting people.  Some journalists, for example, have had to leave their homes because they found a picture of the front door of their home on the Internet.

The second is to improve our knowledge about the perpetrators on the net.  Who are the people, what are the organizations, and how are they working?  You're looking at your watch?  I'll finish soon.  Because what we see in Germany is that there are not millions of Germans using or misusing Facebook, Google or Twitter in this way.  There are fewer.  But we don't have enough knowledge of how they work.

We think there are perhaps only 2,000, 3,000, or 4,000 people in Germany active as part of the kind of network that tries to destroy our society.  That's the short version, and I'm really interested to hear about the experience of networks, but also of Civil Society organizations, because there is still an ongoing discussion in Germany about the relationship between enforcement and maintaining freedom of speech, not only on the Internet, but also on the streets.  Thank you.

>> JORDAN CARTER:  Thank you very much.

Our third Government speaker is Dr. Sharri Clark from the United States.  Please go ahead.

>> SHARRI CLARK:  Thank you.  And I have to apologize in advance.  I've developed a cold so my voice is not very good.  I hope you can understand me.

I wanted to just say a few words on behalf of the U.S. Government to explain a bit our approach to this issue.  Thank you.

The U.S. approach is a comprehensive, whole‑of‑society approach.  We've tried hard to focus not only on short‑term removals of content but also on building long‑term resilience to terrorist messages.

Our policy remains consistent with longstanding guiding principles.  First, the U.S. Constitution and our strong commitment to Freedom of Expression, implemented through the First Amendment, inform any conversation or approach we have to this issue, as do our international commitments and obligations on human rights.

We continue to be proactive in our efforts to counter terrorist content online and that includes activities that facilitate terrorism, while also respecting human rights such as Freedom of Expression.

However, U.S. law enforcement does not compel the removal of content online unless it clearly violates U.S. law, and that includes things such as child sexual exploitation.  Content that promotes an ideology or belief alone, and that includes a lot of extremist or even violent extremist content, does not necessarily violate U.S. law.

Those things are protected under the U.S. Constitution.

The second guiding principle I want to mention follows on from the first:  we rely on strong voluntary collaboration with technology companies for other types of content.  Instead of, at this time at least, new regulations or specific removal guidelines, we encourage technology companies to enforce their own terms of service, which typically do prohibit the use of their platforms for terrorist content or terrorist activities.  We also strive to improve our own information‑sharing with the companies, on things like U.S.-designated terrorists and general terrorist trends and tactics, to help them enforce their terms of service better.  And while much remains to be done, we do think we are making progress.  We would point not only to specific activities, such as the expansion of terms of service and their implementation by specific companies, but also to the industry‑led Global Internet Forum to Counter Terrorism and its partnership with the UN‑affiliated Tech Against Terrorism:  not only their joint work on research and technology to prevent the exploitation of their platforms for terrorist purposes, but also their assistance to smaller companies, which sometimes don't even have terms of service or understand that terrorists are exploiting their platforms.

Third and finally, I'll just mention that we continue to maintain that the most effective means to counter terrorist and other objectionable content online is not censorship or repression, but more speech that promotes tolerance.  We therefore emphasize the importance of credible alternative narratives as the primary means by which we can undermine and defeat terrorist messaging.

And in the United States, we think it's very important to build long‑term resilience, as I've said before, to terrorist messages by cultivating critical thinking skills and online public safety awareness, doing that through education and through community engagement, and that includes, very importantly, Civil Society, companies, communities, and all the other stakeholders on this issue.

We also recognize that banning offensive speech can be counterproductive to our efforts.  It can raise its profile.  It can also drive it into darker places and, in fact, undermine our counter‑terrorism efforts.  We have also seen counter‑terrorism being used by some Governments as a pretext to crush political dissent or other activities that they deem objectionable.  And I would note here that we've been speaking in the broader context of terrorist or violent extremist content, including right‑wing terrorism, or as we have been calling it, racially or ethnically motivated terrorism.  We're working on how to define that, and some former members of those groups have told us specifically that Government censorship is one of their best recruiting tools, because it reinforces their narrative of Government persecution and oppression.

Finally, of course, international collaboration such as this Forum is very helpful in addressing this problem.  Before I close, I will mention, on the issue of violent extremism, the many definitional issues which we face, which are a huge part of the problem.  We do separate hate speech from violent extremism in the United States.  The point for us is focusing on violence, on calls for violence.  We as the U.S. Government do not want to be in the position of policing thoughts or speech, unless it actually crosses the line into calling for violence.  Thank you.

>> JORDAN CARTER:  Great, and thank you.  And thank you to those three Government speakers for exhibiting three quite different approaches to this area, both in the sorts of responsibilities that you see, in what your Constitutions and legal frameworks provide for, and in where you locate responsibility.  We'll go now to three private sector perspectives, and we'll start with Eunpil Choi from Kakao.  Please try to keep your interventions brief.

>> EUNPIL CHOI:  To discuss this agenda, we should understand each company's, Government's and country's cultural and social context, so let me briefly introduce our company.  We are a platform company that provides social media platforms, a mobile messenger, and a portal service.  Kakao and the Korean Internet companies have a responsibility to self‑regulate as service providers and a duty to keep our content environment clean.  So we try our best to protect users from harmful content at the account and content‑creation stages, and wherever appropriate at each further stage.

Although legal regulation and Government oversight are very strong in Korea, the Korean Internet companies establish our policies through what we call KISO, the Korea Internet Self‑governance Organization, and measures to protect users were voluntarily initiated through KISO.

I think that to prevent the spread of this content, users' voluntary participation is also important, so Kakao provides a user reporting function in all of our services, so users can immediately report such content.  We encourage users to participate in these actions:  reporting harmful content, education about digital literacy and users' rights, and campaigns to create a better environment.

We also provide a portal service, similar to Yahoo, especially its news sections.  For exchanging opinions and expression between users, we provide a comment service, but serious social problems were caused by some users, such as insulting and degrading comments, and we could not simply overlook the social risk and impact derived from the comment service.  So we officially announced the closure of the comment service on entertainment news first, and as a second step, the reform of related services.

We also try to respond rapidly to social changes, to reflect users' diverse voices while securing users' rights, and to find ways to establish a better and safer Internet environment.  In this way we are fulfilling our social responsibilities.  Thank you.

>> JORDAN CARTER:  Thank you very much.  Our next speaker is Brian Fishman, who I think is wearing two hats today, GIFCT and Facebook.

>> BRIAN FISHMAN:  Thank you, Jordan, and thanks to everyone for having me here today.  What I'll do very quickly is describe Facebook's approach to these issues, and then, as the current Chair of the Global Internet Forum to Counter Terrorism, talk about the GIFCT.  I'll focus there because this is one of the major initiatives that many of the stakeholders in the room are focused on at the moment, and we're in a moment of transition, so there's a lot to discuss in terms of where the organization is headed.

First, on Facebook's approach:  we have a set of policies around what we call dangerous organizations, which are terrorist organizations, hate organizations, and large‑scale criminal groups.  And our work in relation to these groups happens along five vectors.

The first is the enforcement of our community standards, which are our terms of service.  The second is how we engage law enforcement, both in response to requests for information and, when we see credible threats of violence, in how we manage that.

The third is support for counterspeech; we have a number of programs to support Civil Society efforts to push back on these hateful and extremist messages.  The fourth is really looking after the Facebook people that are engaged in this, both in the real world and by developing programs and systems for people that are dealing with hateful, and sometimes very violent and ugly, content on a regular basis, to make that process safer and easier for them.

And then the last one is how do we engage the rest of industry and support the rest of industry.  Hopefully both by providing some tools, but also this is an area that is beyond competition now, for many in industry, so we want other platforms to learn from both our successes and our failures.

My team sits at the center of Facebook's efforts to do these five things, but there are lots of teams that work on this.  There are our engineering teams and our operational teams, there are folks that focus on just writing policy all day.  My team is a little bit unique in that we've brought in a series of folks from outside of Facebook that have some sort of expertise in these areas, and our job is to coordinate those different efforts across the company, and be real inside voices as subject matter experts to try to coordinate and drive those efforts.  Overall, though, Facebook has more than 350 people globally whose primary job is dealing with terrorism or hate organizations.

That's separate from the more than 15,000 people at Facebook that review content on a regular basis.  The scale is extraordinary and I want to point to that because it is a really deep challenge that I think is underestimated by almost everybody that's not focused on this inside one of these companies.  I didn't understand this.  I was one of the very first people that was really studying the people that became Isis in depth.  Much of that research happened online in 2005, 2006, 2007.  I've been studying these things for a long time.  I did not appreciate the scale at which social media companies operate and the challenges operationally that surface when you're trying to manage this type of content at that scale and I want to impress on everyone in the room to think about it.

This year, in the six months ending at the end of October, we at Facebook removed more than 10 million pieces of content for violating our policies around terrorism specifically.  The scale is just massive.  Now, our policy is very blunt.  We don't allow the praise, support, or representation of groups we consider dangerous organizations.

But we do allow, in some circumstances, propaganda produced even by a group like Isis, if it is shared by a media organization or something like that.  Scaling decisions globally, when you're trying to take context into account, when you're working in different languages and different cultural contexts, is extremely difficult.

It's hard for human beings.  It's even harder for machines.  So we use some pretty sophisticated machine learning and AI.  It isn't perfect.  We were talking about some of the limitations in certain languages, so I think this is something I really want to impress on everyone, and it's one of the reasons why our policies operate globally.

The reason why this is important in the context of this conversation is that scaling an enforcement infrastructure that is responsive to national‑level legal structures -- for how many countries are there in the world today?  A lot.  200‑ish -- is going to be extraordinarily difficult to do well, and I think this is a real challenge that we all face.  As we have this conversation, I want to keep that in the forefront.

Turning to the GIFCT, that's the last part of the effort.  Just to level set, because some folks probably understand what it is and others probably don't:  The Global Internet Forum to Counter Terrorism was formally instituted a little over two years ago, but it built on the back of a lot of informal conversations among technology platforms around terrorism that went back several years before that.  We instituted the GIFCT -- this was Facebook, Microsoft, Google, and Twitter -- initially to share best practices and worst practices, to get together and talk about these issues.  Over time we began sharing something called hashes of known terrorist content.  That's like a digital fingerprint.  This crowd probably understands that better than most I talk to.
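To make the "digital fingerprint" idea concrete, here is a minimal Python sketch of how a shared hash list can be checked at upload time.  (This is illustrative only: the real GIFCT database uses perceptual hashes, which survive re-encoding and cropping, whereas a plain cryptographic digest like SHA-256 only matches byte-identical files.  The byte strings below are placeholders, not real content.)

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that identifies this exact sequence of bytes."""
    return hashlib.sha256(data).hexdigest()

# A small in-memory stand-in for the hash list shared between platforms.
known_hashes = {fingerprint(b"placeholder bytes of a known video")}

def is_known_content(upload: bytes) -> bool:
    """Check an upload against the shared hash list before it goes live."""
    return fingerprint(upload) in known_hashes

print(is_known_content(b"placeholder bytes of a known video"))  # True
print(is_known_content(b"an unrelated holiday video"))          # False
```

The value of the scheme is that platforms can share the fingerprints without sharing the underlying content itself.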

What we learned in large measure, and what was reinforced I think after the Christchurch attack in particular and in the efforts around the Christchurch Call, was that it was time to step up some of these efforts, that we needed to be able to cooperate more effectively in a crisis moment when an attack in the real world went viral online.

We responded and we coordinated to try to contain the terrible virality of the video of the Christchurch attack, and eventually we got our arms around it.  But this was an illustration of where better coordination -- not even coordination, but just understanding when we needed to speak to each other and what the right ways to do that were, setting some protocols for having those conversations -- was really valuable.

And so what we've decided to do is establish the GIFCT as an independent NGO.  This is a real pivot point for the effort in the organization.  We are in the midst of this process as we speak.  Literally the lawyers are finalizing the documents that we will sign to establish an independent organization.  We will then go into the process of hiring an Executive Director who will lead the effort to hire staff.

The core industry supporters of this will provide the funding for this organization.  And I want to talk about this in really two ways.  A ‑‑ go fast?  Okay.

I want to talk about this in two ways very quickly:  Governance and then operations.  On governance, this is still going to be an industry‑led, primarily‑for‑industry entity, but when we're talking about national security issues like this, we want to make sure we have formal input from Governments and from the people who represent those institutions, and so we've structured an advisory structure to incorporate some Governments into that effort.

We want to make sure those are Governments that respect human rights, and so we've said that they need to be members of the Freedom Online Coalition.  At the same time, and this is really key, we want to make sure we aren't having a bunch of conversations just with Governments behind closed doors, and so we want to build structures so that Civil Society can be in those meetings, having those conversations with us, serving as a balance, because these conversations need to happen.  Some of these conversations need to happen in small rooms so we can make decisions, but they need to happen where we have people who can serve as constructive checks on one another.

That's the governance idea.  We're really trying to institute that.  This is way easier said than done, and we've been having a lot of tough conversations over how actually to do this, but that's the idea we're trying to get at.

The last thing is:  What do we want to do with this organization operationally?  We want to continue some of the things that are working well.  We've run good, effective training programs for small platforms, helping them think through what terms of service are and how they operate, what the different kinds of machine learning and AI are, how you train reviewers and build those systems, and what sorts of things you can do to protect the mental health of the people doing that kind of work.  That's one.

We want to continue the technical cooperation by sharing hashes, facilitating smaller companies to understand when a crisis is ongoing.  They don't have the teams that are able to monitor the globe in the way that Facebook or Microsoft or others do.

And the last one is we want to sponsor research, because we sure don't have all the answers, and neither do the Governments, so we want to sponsor good research by good academics to get some of these things done.  I'll end it there because we don't have a lot of time, but the last thing I want to remind everybody of is that, as someone who has studied terrorism and counter‑terrorism and taught in a variety of universities, we have to remember that terrorism is a strategy of the weak.  The strategy terrorists aim to pursue is to provoke us into making mistakes, to provoke us into either overreacting or reacting in ways that aren't conducive to our long‑term interests.

And the core thing we all have to do as societies, in order to prevent that, is not talk about what we're against, but understand very clearly what we are for, and as we think about regulation, as we think about this conversation generally, what is it that we all stand for?  And ground our policies around terrorist actors in that idea.  Thank you.

>> JORDAN CARTER:  Thank you, Brian, with the two hats.

The third company speaker is Courtney Gregoire from Microsoft.

>> COURTNEY GREGOIRE:  Hi there.  I want to provide a little perspective, because we want to be able to have a conversation today.  The topic we're talking about here is responsibilities and responses in the context of terrorism and violent extremism.  Wearing the Microsoft hat:  As you're probably aware, Microsoft is a company that offers a wide range of products and services, and that gives us a lens into the need for clarity when we're trying to articulate the problem statement and the risk in this issue area, and then how we define responsibilities appropriately across that.

I'll explain a little more what I mean, but I'll step back and say, as the Chief Digital Safety Officer, what we think about in this area is advocacy, internal policy, and then of course the tech and tools to enforce our explicit terms of service, and the partnerships to strengthen the broader ecosystem.  I want to talk about responsibilities in the context of the Christchurch Call, and the reason is that Microsoft was a strong supporter of its multistakeholder approach.  As Sharri articulated, we're talking about something that's a whole‑of‑society challenge, or problem.

If we're going to meaningfully address that then we need to understand how we're going to collaborate across the multiple Sectors who have a perspective, intelligence, expertise and different roles and responsibilities.  The Christchurch Call is unique I believe in articulating the defined roles and responsibilities of Governments, of industry and then what we should collaboratively do together.

I'll just articulate a few.  In the Government space, there is a clear commitment to counter the drivers of terrorism and violent extremism, from lack of economic opportunity onward, and the Call articulates that.  On industry, it's about being explicit about what our terms of service are, how we do our enforcement and how we're transparent, and, collaboratively, how we share knowledge to tackle some of the more challenging areas.

I want to state what I think becomes problematic.  If we look at the framework we have right now -- I'll call it the patchwork quilt of laws that is emerging globally in this area -- we need to start having a conversation about what the true risk is and what the appropriate responses are.  We need to be thinking about what is different about a social media platform, where content might be shared from one to many.  By the way, I'm not talking about Facebook here; I'm talking about our own social media platform, LinkedIn.  That's different from sharing one to a few, or one to one.  And it's wholly different from a space where we're creating a suite of office products so that the next retailer or manufacturer has access to technology and tools that provide economic opportunity.

We don't have control over content in that area.  And then you talk about the cloud infrastructure.  I'll tell you the challenge in this legal and regulatory framework is when some people approach us and say, let's just sweep in everything called a technology solution or an electronic service provider and do not have a conversation about what is the risk we're trying to address.

We want to address the risk, and the way to do that is to make sure we're clear on those responsibilities, so I hope we can have a little conversation about what happens when that gets hazy.  That challenge probably bridges right into the next conversation, which is that our fundamental goal, when we take these actions, is to remain committed to upholding human rights, and we need to be transparent about doing that.  That's a further conversation we're having today.

>> JORDAN CARTER:  That will be a nice segue into the rights one.  One more speaker, Civil Society, Yudhanjaya Wijeratne.

>> YUDHANJAYA WIJERATNE:  Hi, my name is Yudhanjaya Wijeratne.  I'm from Sri Lanka.  I wear two hats.  I've been told I speak a little bit too fast, so I'm going to slow myself down.  One hat is as a data scientist at a think tank called LIRNEasia.  We've been operating in the Global South for the last 10 years.  The other is as the co‑founder of a fact checker called Watchdog, founded in the wake of the recent terror attacks in Sri Lanka.  So can I ask for a raise of hands:  How many of you have seen a bomb go off?

There you are.  Because, well, there.  So not a lot in the room and I think those in the room will understand what I'm talking about when I say it's hell.  It's just numbness that happens.  It's not even panic, it's just:  What?  What?  It's this utter sort of blankness that hits you.

So when the bombs went off -- now, since March 2018 I had been working, in some instances slightly antagonistically, with Facebook, and in some instances afterwards with Facebook crisis response, on the analysis of large‑scale language datasets to better understand hate speech.  When the bombs went off, a friend of mine who was right there -- he had been going to a hotel to meet his business partner, and he had been two or three minutes late, and the man lives on Instagram, so he'd been Instagramming live, bodies flying -- called me and said, we need to do something.

And we met, and we realized that these conversations need to happen, and that we need to think rigorously -- with academic rigor, I would say -- particularly when we come to the moderation of content in an automated manner.

But by the time these conversations happen, it's often too late.  People are dying, and this is not something we can tiptoe around at the end of the day.  There are various mechanisms in Government; for example, we discussed whether it would be feasible to wait until the ICC.56 had kicked in, with reports of hate speech coming in and being processed by a court of law, but often that is far too late.  So what we saw was a need for an almost first‑responder style of civic tech that could fight fire with fire in that particular instance.

So in 36 hours, we created what we call Watchdog, which verifies information with journalists on the ground in the areas where fake news is reported.  We launched with 45,000 users.  Colombo is a small city, so that was a lot of people.  We've been running since, and at some point we realized we had built the largest misinformation dataset in Sri Lanka.  I had a panic moment where I realized we have 100 people working for us, and we're really running this weirdly tiny thing that grew to be a monster.

And this feeds into what I had been working on as a data scientist at LIRNEasia, where we try to work with academic rigor and bring that perspective to policy.  I actually consider Watchdog a failure.  When I say we had 45,000 users at launch, people go, wow, that's amazing.  I consider it a failure because of the scale and velocity of hate speech being produced in Sri Lanka right now.  You mentioned that in Germany a minority of users indulge in hate speech; I'd say in Sri Lanka it's a majority.  At the scale and velocity at which content is being produced online, humans aren't up to the task anymore.  Unless Facebook hires 50% of its user base to police the other 50%, this is just not practically possible.

Then we had started engaging with Facebook and trying to understand why -- Facebook in particular, because it has a level of policy that we were not seeing from other platforms -- these policies are not being enforced in countries like Sri Lanka.  They may be enforced in countries like Germany, and in instances in the U.S., but in countries like ours -- we're part of a network of Global South think tanks -- the example is horrible.  I come to bury Caesar, not to praise him.

And we realized the technical challenges of implementing something like the Christchurch Call:  A lot of languages in the Global South are what we call resource‑poor languages in linguistics.  The fundamental corpora aren't there; the lemmatizers aren't there.  What this boils down to, from a non‑tech perspective, is that a tech person who reads a piece of policy can implement it in about 10 lines of Python for English, or for German, or Dutch, or Afrikaans -- the West Germanic tree, which is well documented and researched -- but not for our languages.

And then we also realized there's a design problem in some of these algorithms.  To give a little more context:  The majority of hate speech happening right now in Sri Lanka is anti‑Muslim and anti‑Tamil.  We spoke with quite a few Facebook engineers, actually, and they pointed out that specificity of a threat is something they look for in trying to implement these policies.

So, hypothetically, something like "I will kill this neighbor of mine at 8:00 a.m. tomorrow" is a specific threat that gets flagged on multiple levels, and it's flagged as a high‑priority threat.
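To see how deeply such detection can lean on English grammar, here is a deliberately naive sketch:  A toy pattern for "specific threats" built from a first‑person pronoun, a future marker, a violent verb, and a target.  (Real systems use trained classifiers, not a regex; this pattern, its word list, and the examples are all hypothetical, invented for illustration.)

```python
import re

# Toy English-only heuristic: "I" + future marker + violent verb + target.
# The future markers "will" / "am going to" are the load-bearing part --
# exactly the grammatical feature the speaker notes Sinhala lacks.
THREAT = re.compile(
    r"\bI\s+(will|am going to)\s+(kill|attack|bomb)\s+\S+",
    re.IGNORECASE,
)

print(bool(THREAT.search("I will kill this neighbor at 8 a.m. tomorrow")))  # True
print(bool(THREAT.search("someone was killed yesterday")))                  # False
```

A language with no separate future‑tense marker gives a heuristic like this nothing to key on, which is the design flaw being described.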

Now, Sinhala, my native language, doesn't have a future tense.  Right?  So this doesn't work.  So there needs to be, in my opinion, rigorous research done into these languages, into the shape and dynamics of how hate speech manifests.  It's not enough anymore to analyze a few comments, to sit back and write wonderful policy; there needs to be better collaboration with the tech community.  Because Facebook is potentially the largest repository of text content online today, that data needs to be brought to Civil Society in the countries where these problems appear, and to academics in those countries, to linguists, to people who understand the etymology of the terms that appear.  I'll give you one example:  The word tambi means younger brother.  Between two Tamil speakers this is a term of endearment.  The Sinhala nationalist racist lobby has appropriated this as an insult, so from a Sinhala man it's an insult on par with the N word.  That is Tamil content manifesting in a piece of Sinhala text, and for an algorithm to understand that this is hate speech, it must understand the ethnicities of the two people involved in the conversation, which is a huge problem -- and itself a violation.

So there are certain classes of problems that, because of language structures and because of the nature of hate speech itself, cannot actually be trusted to undergo automated analysis without violating human rights.  Then we have to go one level deeper and understand that this word tambi, used by a Sinhala racist, would potentially be written in English script representing Sinhala, so now we're talking about three layers of language nested in one.
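The mixed‑script point can be made concrete with a few lines of Python.  Detecting which writing systems appear in a string is easy; deciding which language, and whose voice, a romanised word carries is the hard part.  (This is a rough heuristic based on Unicode character names; a production system would use ICU script properties.  The sample string is illustrative, not real hate speech.)

```python
import unicodedata

def scripts_used(text: str) -> set:
    """Collect the Unicode scripts of the letters in a string,
    using the first word of each character's Unicode name
    (e.g. 'SINHALA LETTER MAYANNA' -> 'SINHALA')."""
    scripts = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            if name:
                scripts.add(name.split()[0])
    return scripts

# A romanised word embedded in otherwise Sinhala-script text.
mixed = "මේක tambi කිව්වා"
print(scripts_used(mixed))  # a set containing 'SINHALA' and 'LATIN'
```

The script mix is detectable, but nothing in the character data says whether "tambi" here is Tamil endearment or a Sinhala slur; that judgment needs the local linguistic and social context the speaker is calling for.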

So it's not that simple to imagine that any company can press a big red button and wipe this stuff offline.  That's one point.  But what I'd also like to see is those companies engaging with people on the ground, with academics on the ground, because the research capacity is there.  The expertise is there.  But the datasets are elsewhere.  The designers are elsewhere.

I really do not fancy someone in Menlo Park who is familiar with English, and the subject‑verb‑object word order therein, designing something that is meant to work in other languages where the word order is completely different.  It's a design flaw.  So there needs to be, in my opinion, much tighter collaboration, and this also comes down to mind share.  This also comes down to funding, because the Christchurch Call is an absolutely fantastic piece of policy, but it took New Zealand stepping up to get the world's attention on it.

Bombs go off in a lot of other countries.  People die in a lot of other countries.  They're often not paid attention to; they're often not discussed.  So I'm very, very thankful to New Zealand for stepping up and elevating these problems into global discourse, to a level where we can have this conversation.  In my opinion, before we get to wonderful discussions on policy that potentially change how Governments work, we actually need to address the tech problems, and lawmakers need to make the effort to engage with the machine learning communities, with data scientists, to understand what is practically possible, and what needs to be done to make certain things that are impossible today practically possible.  Otherwise, all of this is just nice pieces of paper.

>> JORDAN CARTER:  Thank you very much, Yudhanjaya.  That was a fascinating perspective.  The whole part about the science behind a lot of these challenges shows both the breadth of the challenge that we face in dealing with this content and the fact that none of us has the tools to tackle it by ourselves and we need this cross cutting dialogue to turn into cross‑cutting action which is what the Christchurch Call called for.

That was our responsibilities and responses section.  Before we get to the audience's ideas, questions and comments, we'll go straight into the rights and risks section, some of which I think you've already seen floating up in the dialogue we've had.  We'll start that off with Edison Lanza to set the scene, if you like, and then I'll be inviting the panelists to briefly respond.  You don't all have to respond; you all can respond.  The more of you who respond, the quicker your responses will need to be, and then we'll get straight into audience contributions.  So, all yours.

>> EDISON LANZA:  Hello, everyone, and finally, last but not least, Freedom of Expression.  I want to say that hate speech that incites violence and terrorism is one of the most controversial areas of content moderation and Freedom of Expression around the world, given the difficulty of defining the category and its importance.  I will try to address very quickly two phenomena, or two impacts, that this problem has on the ecosystem of Freedom of Expression:  On one side, the responses of states and the risks linked with those responses, and on the other side, the tech companies.

First of all, in my capacity as Rapporteur on Freedom of Expression, I have explored the attacks on democracy and Freedom of Expression by states and non‑state actors in many countries, in many different contexts, including international and non‑international conflicts, terrorist attacks, and the spread of organized crime -- in particular in my Region, Latin America, where terrorism is perhaps not the huge problem, but incitement to violence by criminal organizations across Latin America is.

We have also drawn attention to the need to address the serious problems that arise in the context of Digital Technologies, including misinformation, disinformation, propaganda, and interference with the use of encryption technologies in the arena of journalism and human rights defenders.

It's important then to note that states often respond to these situations in a reactive manner, with disproportionate restrictions on Freedom of Expression such as censorship and the overbroad criminalization of expression.

This is not an effective response to extremism.  An open and critical debate is an important part of any strategy to address systematic attacks on freedom of expression.  Without debate and counterspeech, the underlying causes go underground and foster violence.  In particular, states should refrain from applying restrictions relating to terrorism in a broad manner.  We here are all democracies with the rule of law, but many countries in my Region, like Venezuela, Cuba and Nicaragua, and other democracies now under pressure, like Bolivia and Chile, show that the use of concepts like glorifying, justifying or encouraging terrorism can become a push toward criminalizing speech that is legal and protected under international law.

On the other side, blocking of Internet content using kill switches or takeovers is also a problematic measure which can never be justified under international law.  On the other hand, we have the right of journalists to protect the identity and confidentiality of their sources of information against direct and indirect exposure, including surveillance and the judiciary.

The other actors are the companies and platforms, which have a huge role in the moderation of content and the flow of information.  The Special Rapporteurs on Freedom of Expression around the world have in recent years made concrete recommendations to the companies, seeking the implementation of human rights standards in their content moderation policies.

And I want to refer, finally, to a set of recommendations to the companies.  First of all, we think that, backed up by state regulation and the oversight of Civil Society, the companies must mitigate human rights harms and conduct human rights assessments of their policies.  What kind of measures do we recommend?  First, conduct periodic human rights due diligence assessments and reviews of their policies and their effects on human rights.

Second, align hate speech policies to meet the requirements of legality, necessity and proportionality -- the test that international law now recommends in different areas.

Third, improve the processes for remedies in cases where people's rights are infringed, especially by creating transparent and accessible mechanisms for appealing platform decisions.  I was very pleased to hear that Facebook is moving toward building this kind of independent oversight.

Fourth, taking into account the need for measures to combat incitement to violence, the companies should consider more graduated responses according to the context of each country.  I think the case of Sri Lanka shows that even if Facebook or Twitter or Google or Microsoft have a global approach, they also need to engage more with the particularities of the local context in each country; Mexico is not a similar case to Argentina, Chile or Canada.  And in different cases we know that the response of Facebook, Twitter and so on is very slow, because they don't have a country‑level policy or somebody to alert in cases of incitement to violence.

And finally, we note that it's necessary to adopt international standards in content moderation policies, which I think can give the companies a framework for a better approach and also a better response to the states.  Thank you.

>> JORDAN CARTER:  Thank you, Edison.  That's a nice set of ideas to nudge this discussion around rights and risks on.

I'd invite panelists, if you've got a few thoughts on that topic, building on the discussion you shared previously in your openers:  What are the rights that are at risk, if you like, or the risks to human rights posed by approaches to regulating terrorist and violent extremist content?  And how do those risks get addressed in whatever it is we need to do?  Sharri, go ahead.

>> SHARRI CLARK:  So we understand that other Governments, based on legitimate concerns, have taken different approaches to Internet Governance, such as regulation, and the argument is of course that having more restrictive laws in place would help us address the problem better.  However, in our view, we're not sure that more restrictive laws would do anything but constrain innovation and commerce, and make the Internet actually less open.

We think that Governments regulating the removal of more, or all, vaguely defined terrorist content -- and that can be many different things to many Governments -- faster, including within a specific time frame, can actually focus too heavily on the technological tools rather than on the bad actors who are abusing them.  In addition, passing regulations that put the onus for content moderation on companies alone is likely to cause over‑removal of content and the building of things like upload filters, which may in fact constrain human rights such as Freedom of Expression.

Our experience is, and we continue to contend, that voluntary collaboration with the technology companies and all stakeholders on this issue is a better approach.  We think the companies know the content on their platforms better, how to identify and remove content more quickly and keep it from propagating.  And we continue to argue that some of the regulations we're seeing -- and potentially conflicting regulations, somebody mentioned this but it's very important, as many Governments pursue this -- can in fact be an inspiration for more repressive regimes to fine or imprison company executives, for example for not removing, quote, extremist content that may actually be political dissent.  In fact, I would point to a recent Foreign Policy article which contends that the German network enforcement act, for example, may be being used by dictators around the world as a sort of model for censorship.  Finally, I just want to stress that we want to be sure our responses to this threat do not endanger the Guiding Principles we all hold:  An open, interoperable, secure, and safe Internet.  Our responses should not put at risk the positives, as someone mentioned already, the things that we are trying to uphold.

>> JORDAN CARTER:  Thanks.  Dr. Park.

>> KYUNG SIN PARK:  There is another law, from Australia, that imposes liability on platforms for not removing, quote, abhorrent violent content, which is vaguely defined.  But the real problem is that it obligates the platforms to engage in general monitoring, which is banned, or supposed to be banned, under the European Union's e‑Commerce Directive, and it incentivizes platforms to install upload filters or other forms of prior censorship.  So the Australian law actually goes deeper than the network enforcement act, but I want to say something about that enforcement act as well.

Now, safe harbor is also part of international human rights standards, according to the UN Special Rapporteur's report on Freedom of Expression.  The idea is that if you hold platforms liable for content that they don't know about, then they'll engage in general monitoring and prior censorship.  That's the theory.

Now, the network enforcement act sanctions platforms only over content they are notified about, so on the surface it does not seem to erode the international standard on intermediary liability, or even the Manila Principles on Intermediary Liability.  But what ends up happening in fact is that the platforms end up taking chances:  When they're pressured to remove notified illegal content in a very short amount of time, and it takes time to really decipher a message and decide whether it's illegal or not, they err on the side of deleting instead of maintaining the content.  We know that because Korea has such a law.  We have a mandatory take‑down law for illegal content, and in fact, a lot of lawful content is taken down.

When there was a big consumer disaster with humidifier disinfectant -- somebody had this crazy idea that vaporizing the humidifier disinfectant itself would clean up the air, and a lot of babies and pregnant mothers who depended on humidifiers ended up dying -- some people who had the symptoms tried to put warnings online, but they were all taken down by the platforms, because the platforms were given notices by the humidifier disinfectant companies that the content was defamatory and illegal.  So they erred on the side of deleting.

That's one example of how a restrictive law can really shrink the civic space that protects our safety.  I'll say something more about counterspeech.  Counterspeech sometimes, or often, uses the same language as the violent speech.

We at Open Net Korea are representing a feminist platform that uses a mirroring strategy:  basically using the same violent language against males, just to let them feel for themselves how women are treated in the language space.  Because they're using the same language, it runs into legal trouble, but that should apply only to the postings, not the operator, right?  The operator takes down all the content she has received notices about, but still the Government is going after the platform operator, which is suppressing counterspeech against a male‑dominated language space.

>> JORDAN CARTER:  Thank you for those two interesting examples.  I've just got time for three or four so you're going to have to be really quick.  One, two, three, four.

>> GERT BILLEN:  I think there is a global consensus that we have to delete child pornography.  It holds across all cultures.  In other respects there is no global commitment:  I mentioned that it's forbidden in Germany to deny the Holocaust, while in a country like Denmark it's allowed.  So we have national rules made by national Parliaments, and let's say that's our line.  As I said, we don't speak about hate speech; we speak about hate crime.  If Freedom of Expression conflicts with our criminal law, the companies are now forced to delete the content.  That's the current regulation in Europe:  notice and take down, and they are still obliged to analyze what is legal and what isn't.

The experience with the network enforcement law shows that, for example, a company like Twitter or Google gets about 500,000 complaints a year from users concerned that something has to do with hate speech or hate crime, and the deletion rate is about 20 to 25%.  That means the companies don't delete everything that users send to them.  They are trying to do a good job of finding out what is illegal and has to be removed, and what has to be tolerated as part of freedom of speech.

So, as far as we can see now, there is no sign that too much is being removed.

One effect of the public pressure in Germany, and it was not only pressure from the Government, it was pressure from Civil Society and pressure from victims, is that companies like Facebook have increased the number of people working in Germany on this kind of job, and I think that's a good result.

And we see another effect.  Deleting posts, but also deleting the accounts of very extremist groups and people, means that these groups have to leave Facebook, Twitter, and Google.  They are now looking for other platforms, but they cannot convince their followers to leave Facebook, Twitter, and Google, so it has a kind of positive effect:  less propaganda and fewer violations of German law.

>> JORDAN CARTER:  Thank you.  Courtney?

>> COURTNEY GREGOIRE:  Just to throw in a couple more complications.  I think we've had a good conversation on the challenges around the right of Freedom of Expression.  I want to articulate why I said the principles of the Christchurch Call were so important:  they start from a narrowly defined scope of terrorist and violent extremist content.  We're now having a robust conversation that moves into hate speech, which is a different context from how that was structured.  We've talked about the right of Freedom of Expression, but I don't think we should forget the right to access information, and I don't think we should forget the right to privacy when we talk about overapplication of some of these regulations outside what I'll call public fora.  Those are clearly tensions, but a couple of points.

We were talking about the conflict of laws and its challenges, and I think there are a couple of trends that are concerning right now.  Even in the notice and takedown structure, we're starting to see conflicts of laws, with Government orders coming in with extraterritorial application.  Yes, you may have defined that as illegal content within your geopolitical borders; you don't get to issue a global order for that content.  I think we're going to see legal challenges continue and multiply in that context.

Another approach Governments are looking at is what they talk about as mirror content:  we think it's the same, it's got the same language.  But as you've articulated very well today, context matters.  I'm sorry, I come from a Government background, and I don't know how you would narrowly define a mirror content regulation right now, because we have to acknowledge that context matters.  So we're seeing a trend, and a challenge, as this plays out from a conflict of laws perspective.  We really do need a more robust conversation about how we are going to address narrowly defined harms that we acknowledge, in a way that upholds the global framework of human rights.  I wanted to flag those two narrow areas as places to be monitoring.

>> JORDAN CARTER:  Thank you, Courtney.  We're going to Paul and Yud, and then we're done.  We'll go to the audience.

>> PAUL ASH:  Thank you.  I want to start with this:  this is hard.  Okay?  It's actually really hard.  But it's not so hard that we can't act, and not acting, I think, is one of the major risks we face.  We've articulated risks to Freedom of Expression.  We've articulated a range of other risks.  I go back to the Prime Minister's words after what happened in Christchurch:  no one has the right to livestream themselves murdering 51 people, and a really large number of rights are violated when that happens.

I have spoken with people, in fact I've spoken with my own daughter, whose colleague was one of those 51 people, and I've watched what happens when that video flashes up on her live screen.  We have to put the victims at the center of this and think really carefully about how we solve it, but we should not be hasty.  We should not overreach.  That's why I think robust conversation and a holistic approach sit at the core of what we do.  Unless we have that robust conversation and work this problem through carefully, in as measured a way as possible, the bigger risk, certainly from where we sit in New Zealand at the moment, looking at the impact this has had on New Zealanders' perceptions of the Internet and the way they think about its benefits, is to the Internet as a whole:  the risk that toxicity might actually drive down use of the Internet for the good things we need to use it for.

>> JORDAN CARTER:  Thanks, Paul.  Yud?

>> YUDHANJAYA WIJERATNE:  One concern I have from the rights perspective is that there seems to be a framing of bad actors in particular:  the implicit idea that hate speech is innately spread by bad actors and bad actors alone, and misinformation is spread by bad actors and bad actors alone.  From the Watchdog dataset, let me tell you:  they're not.  Hate speech and misinformation are often so intertwined that you can't tell the difference, and the majority come from panicked and terrified moms in groups, saying:  I heard this, are you safe?  Or from office groups saying:  why hasn't this guy come to the office today?  They far outnumber the actual bad actors in play.  So when we create policy it is important to acknowledge that a certain level of surveillance is being brought into play, and we cannot put those other users at risk.  In the hunt for bad actors, in this almost military drive to take these things down, we need to acknowledge that perfectly normal people are at risk here.

And the second is on bias.  I hope I've hit home on the importance of automated content monitoring, but it is almost mathematically impossible to engineer a system without bias.  If you take two groups, you can optimize for parity between classes and various other metrics, but you'll always have false positives and false negatives.  It happens in a machine system, in a human, or in a human plus machine system, whether it's animal, vegetable, or mineral.  Outliers always happen, and that's really where a lot of the focus needs to go.  We can have systems that effectively deal with 80% of scenarios, but, as you spoke of mirroring, how do you deal with the rest?

This is where we bring humans into the loop, and it's very important to think about that before we launch these systems.  Which brings me to my third point:  I hope that when we do get around to doing these things, the fundamental datasets and the protocols observed in the design of these systems can be made open for critique by Civil Society.  When I look around at the IGF, this is a multistakeholder Forum, and I'm 100% sure that among these people alone there will be someone who can analyze the dataset and say this particular race is overrepresented or underrepresented; probably someone who can come at it from a rights perspective and say these classes of data should not be included in the dataset; and someone who can come at it from a legal perspective and say this does not comply with policy X or Y that is nationally acknowledged.  That is critically important when these solutions are launched.  I don't trust anyone who comes and says:  we have AI to solve this.

I generally tend to call bull on those.  It's critically important that we have this multidisciplinary interrogation of solutions before they launch, particularly if they are public facing.

>> JORDAN CARTER:  Thank you.  Thank you for that exchange, panelists.  We're now going to open it up to the audience to see what we get.

And I'll just do a kind of sweep.  Then we're going to close with very brief interventions on the two closing questions, which means that if we run out of time we'll just stop where we are, so I'm going to remind you of those:  What are your policy recommendations going forward in this area?  And what role will the IGF ecosystem play?  In the meantime, if you'd like to speak, come up to the mics.  We'll start at the front table, so I'm going to go there.

Really brief, please.

>> Thank you so much for the wonderful topic.  This is Mohammad Hanif Gharanai from Afghanistan.  I'm working with the President's Office.  From my point of view I would call this session ICT for peace:  Information and Communication Technology for peace.  As you know, Afghanistan is under pressure from terrorist attacks, so as a student of technology I ask:  what is the contribution of the policy that Facebook has made, in the context of Afghanistan?  Thank you so much.

>> BRIAN FISHMAN:  Sure.  Well, our policies are global, and they certainly apply in the Afghan context.  Additionally, the Taliban are on the UN's consolidated sanctions list, which means that content produced by the Taliban can be shared in the GIFCT's hash‑sharing database.  We certainly remove content produced by the Taliban or in support of the Taliban.  But let me pose a hard question.

How do we deal with content when the Taliban are engaged in a Peace Process, in negotiations with the Afghan Government and with other Governments around the world?  That is a very difficult thing to manage for a platform that is trying to set objective standards around an organization and around specific types of content.  It's easy to say, well, perhaps the platform should work with the Governments involved to figure out what's right there, but that opens the door to all sorts of other abuses.

So we certainly are attuned to this, but there are other points we could go to.  With some of the smaller languages in Afghanistan, we run into some of the same problems described in Sri Lanka, where some of our classifiers don't work as well because we don't have as much training data.  Every country in the world has a lot of these complexities in a lot of different ways, and Afghanistan is certainly one of them.

>> JORDAN CARTER:  I'm going to take the next question from the microphone.  So if you're not sitting at the front table and want to speak, get behind a microphone so I can see you.  Don't put your hand up in the audience.

>> Thank you very much.  I'm Rajesh Charia from India, speaking in my personal capacity.  Terrorism is affecting everyone.  Everyone sitting on the dais has been affected by terrorism in some way, and we are very much concerned about that.  Why are international policies not being made on terrorism, so that if any country is affected, realtime removal of data, or realtime disclosure of the source to the law enforcement agency, will help?  Otherwise, what happens?  The organizations or companies take shelter behind their respective countries' laws and don't respond to the countries from which the request about terrorism is coming.

And this delay causes serious damage to the respective country.  I'm from a country that has been badly hurt by terrorism, and I'm very much concerned about that.

>> YUDHANJAYA WIJERATNE:  Let me counter with a case study from Sri Lanka.  When the bombs went off, there was radio silence from the Government for six hours.  Then one Minister came online and said:  oh, yes, my father hangs out with intelligence types; he said something was happening, and that's why I didn't go to church.  Then the President came out and said:  I knew nothing about this, I just saw it on social media.  Then the terrorism investigation department came in and said:  hang on, we've been telling you about these guys for the last three years.

There are instances where official Government sources are not always either competent or acting in the best interests of the people, and there are always instances where due process and due diligence need to be followed rather than realtime protocols or reaction.

There are plenty of instances where knee‑jerk reactions will actually do more harm and encourage a surveillance state, as opposed to a free and Democratic society where these things can actually be critiqued.

>> There is a technological challenge in underdeveloped or developing countries, but we cannot simply set aside Freedom of Expression or privacy.  Still, where an issue of National Security arises, we have to compromise on Freedom of Expression and privacy.

>> YUDHANJAYA WIJERATNE:  I suspect this will be a very long philosophical argument so maybe we should move on to other questions.

>> JORDAN CARTER:  We'll have to move on.  It's a dissatisfying format.  The gentleman sitting at the table, then this one at the microphone.

>> Thank you, good morning.  My name is Khalid Fatel, Chairman of the Emirates group.  Some of what I share may be uniquely relevant to the topic because we bring certain expertise in this space.  MLI stands for Multilingual Internet; you've heard of that term, and a handful of people have championed it since the '90s.  For the last ten years we've been focused on mitigating cyberterrorism and the motivation behind some of these groups.  And Brian, who is a lecturer in terrorism, or anti‑terrorism, might find this even more compelling:  we created a label to identify these threats back in 2012; we call them polycyber.  Why this is relevant is that in the context of today's topic at the IGF, online terrorist content is only a portion of the bigger threat landscape.

And I really wasn't going to speak at this panel, but I felt compelled, because there's no such thing as a global policy that will serve to make the Internet a platform for good, and that's not enough.  So permit me to make some of you constructively uncomfortable.  If I were to ask you, instead of sitting on these chairs and holding microphones for your panel speeches, to sit on the floor with no microphones and scream for everybody to hear you, you'd feel uncomfortable.  Unfortunately, doing what is necessary is going to require companies to do what is uncomfortable for their business models.

Two days after the Christchurch attacks, I put out a call for social media platforms to stop, to block, livestreaming.  Yes, that's not going to be beneficial to everybody who uses it for good purposes, but we saw this as a future trend, and I think the gentleman from New Zealand, Paul, highlighted this exceptionally well.  I think this is part of the challenge we face.

So the Christchurch Call is superb.  I believe your Prime Minister has been exceptionally sincere.  But unfortunately, we always find ourselves reacting to events, not acting proactively.  So this space at the IGF is only a talk show.  This is conversation.

What may not necessarily end up happening is what is necessary, and there's no way we can create a global policy while what is accepted as legal in the United States, and I speak as a citizen, is the format that guides these principles.  Because let's be clear:  Christchurch had one tragedy, and they took action and legislated.  I was in Los Angeles at the time when a youngster went into a high school and shot people.

Next, the gentleman from Germany:  you are right, the terrorists do have an agenda, but I think the biggest challenge we face is accurately understanding the motivation, why these terrorists become what they are.  Figuring out how to deal with that will help us not only take down some of the content but mitigate the threats.

Last but not least, let's remind ourselves that Cambridge Analytica played a far more detrimental role in hijacking democracy and putting it under unprecedented threat than terrorists who think they can change the way we do things in the West.  So this is food for thought:  what we are trying to do may not be achievable as a global model, but may end up being compelled into local legislation that networks have to obey.

And let's be clear, as well:  when Dow Chemical was found responsible for the Bhopal chemical disaster ‑‑

>> JORDAN CARTER:  That's getting off topic.

>> It's not off‑topic, and I'll tell you why.  If a social media platform continues to treat itself as a technical platform, not a publisher, it avoids responsibility for the content on it, and therefore the legal responsibility.  The laws so far don't look like they're going to change, and unless we start figuring out how to actually compel organizations to do what is necessary to serve the global public good, I think we're going to have a bigger challenge than the one we're facing.  My two cents.

>> JORDAN CARTER:  Thanks for your five cents.  I'm not going to ask anyone to respond to that one.  We'll go to the next question at the mic.

>> Hi.  I think there's a bias in terrorism research, and it's existed for decades.  We haven't really learned the lessons of Latin America.  We haven't really learned the lessons of the global war on terror that started with 9/11.  Now, yeah, it's quite a platitude, but firstly, the history of terrorist attacks shows that terrorist victims are primarily the result of State terror, and there has been almost no discussion on this panel about State terror.  And secondly, you call someone a terrorist, and then, the point made before me was very relevant, one day you're going to have to negotiate with the people you call terrorists:  for example, the Taliban, and for example, the governing party of my country, the African National Congress.  They used to be classified as a terrorist organization, and we had negotiations and a transition.

So, yeah, there's a huge bias in the debate, and if we don't learn from the lessons of the history of counter‑terrorism, we're bound to repeat them.  And secondly, for those of us who would like to understand and research terrorism:  if the content is taken offline, where is the historical record for researchers and academics, other than with Facebook, Twitter, and the major platforms?  Thank you.

>> JORDAN CARTER:  Thank you.  Brief response?

>> BRIAN FISHMAN:  So, just briefly, I won't respond to all of that, but I wanted to respond to the last point because I think it's an important one that the platforms are thinking about.  As a former academic, I thanked the Internet Archive in the notes to my book because of the amount of research material I was able to pull from there.

And so I think there is real meaning in preserving raw terrorist material, so the point is well taken, and this is a place where there is an opportunity for real collaboration; the IGF is a great Forum for that.  When platforms like ours are removing content that violates our terms of service, and I think we're right to do so, how do we preserve that material in a way that makes it accessible to researchers, to human rights investigators, and to folks like that?  That is a conversation that needs to include activists, companies, and Governments, to make sure that we have space to preserve that material given existing privacy laws, et cetera.  So I just want to thank you for pointing that out.  It's a place where there's likely a lot of agreement but a need for real cooperation across different sectors.

>> JORDAN CARTER:  Thanks.  I've only got time for two more interventions.  The gentleman with his hand up at the table has indicated a number of times, so you go first.  Then the man standing there, and then we're done, I'm afraid.

>> Thank you, I'll keep it brief.  Neal Kushwaha, representing the private sector in Canada.  I have a question for the entire panel.  I'm very impressed with this group of scholars, lawyers, and Government and corporate representatives.  Do you believe nations are in a position to bring due diligence claims under international law against other nation states for their lack of action toward companies headquartered there that do not, or are unable to, manage, contain, or limit TVEC and hate speech or hate crime, thereby holding nations accountable?  Thank you.

>> JORDAN CARTER:  Who would like to have a go at answering that question?

Do any of you want to...  That's a solid no in answering that question, which I think probably means it was one that's given people food for thought.  Thank you.  The person standing, yes, thank you.

>> My name is Makoum.  I know this is the Internet Governance Forum, so there is a risk of thinking that all problems, even those the Internet merely amplifies, can be solved here, as if data and protocols would solve all of these problems.  We do know that traditional news organizations, sometimes based in the countries where these terrorist activities take place, might have a better understanding of the context, so I'd be curious to know how you see traditional news organizations playing their role in solving this.

Because the scale of the Internet is a disadvantage, a weakness in this context, not a strength.  The distributed nature of news organizations and the fact that they're locally based gives them an advantage, so I'd be curious to know how the news organizations can help solve this.

>> JORDAN CARTER:  Sharri, do you want to briefly?

>> SHARRI CLARK:  I'm so glad you raised that question, it's a very good point.  One of the problems we face with this very complex and difficult issue, as Paul said, is that because the Internet is such a popular topic, it does tend to suck all the air out of the room when it comes to other media.  And we do know that a lot of radicalization to violence, a lot of these activities, continues to take place person to person, without any technology involved, as well as through traditional broadcast media.

There are of course places where the Internet has not penetrated, sometimes mobile phone apps reach there, sometimes not, but even where the Internet is available, it's really very important to remember that a lot of the sort of content we're talking about is also being produced and distributed through traditional media.  On the positive side, I would point out that this is also an opportunity for us, and it's something that we in the U.S. Government have tried to use:  the popularity of broadcast media as a channel for counterspeech, as the companies call it.  For us it's countering violent extremism or countering violent messages.  It's an important part of the equation and a reminder to all of us that this is a much bigger issue than what is online.  What is offline, as well as other media, is critically important.

>> YUDHANJAYA WIJERATNE:  I can see where that question is coming from.  Quite recently, after the Sri Lanka bombings, the State blocked Facebook, and there was an Article in "The New York Times" saying it was good that Sri Lanka had blocked Facebook, because it would end hate speech, because this violence had gone too far.  My first reaction was:  hang on.  We've been putting all manner of remorseless pieces of metal into people's bodies for the last 30 years of Civil War, and we did that without Facebook and without the Internet.  There are fundamental social problems that need to be solved, reconciliation that needs to be done in a lot of our societies, offline, but this is slightly out of the scope of what we can tackle one piece at a time.

We can look at it in the form of the Internet and say, okay, let's narrow that down to a platform, to a specific type of content, and try to tackle that, while the broader social issues need legislation and policy and thought elsewhere.

And I would say that, yes, a lot of our conversations are influenced by certain implicit biases that usually come from the West.  The gentleman from Africa spoke about Government and State sponsored terror and the fact that we don't acknowledge it.  These are serious problems.  There are certain implicit biases in fora like these, where we assume that good policy in the hands of good lawmakers will solve all problems.

But that's not necessarily true if you look at the history of our Governments and our societies.  We have to take a bit of a bigger view and assume that not everyone is acting in everyone's best interests.

>> JORDAN CARTER:  Edison, make it a quick one, and then I'll go to the closing.

>> EDISON LANZA:  The role of journalists and the press is very important.  It's also very important to protect the press's investigations into terrorism and the possibility of speaking out in the media about it.

There was a case in Canada in which a broadcasting network conducted an investigation, with sources, about ISIS, and a judge and the court ruled that they should disclose their sources.  This is a way to end up with a lack of information and debate about the phenomenon of terrorism.

The second point, very briefly, is that public authorities also use the narrative of terrorism to attack legitimate discourse in many countries.  Public authorities have Freedom of Expression, but they also have a duty to conduct their speech in a very rigorous manner.  For example, in our Region, Latin America, many presidents or Secretaries of State in different countries say, about protests:  this protest is a terrorist act, this speech is a terrorist act.  No, it's protected speech.  And finally, many on the panel and in the public referred to the legal framework of the country as a pattern or scope to which the platforms should adapt.

And I say, well, legality is one of the conditions of the test, but international law has two other conditions, relating to the necessity and proportionality of the measure, and not all laws comply with this kind of test.  Many laws about hate and terrorism use definitions so broad that they don't allow free speech and a flow of information within those countries.

>> JORDAN CARTER:  Thank you.  Thank you for the questions.  I'm sorry we didn't have time for more.  We're all very time compressed now so you have a 30 second slot to make a closing comment or remark.  Please do not turn that into 2 minutes each.  We have to clear the room so just very briefly we'll do the panel in order this way.  What would you like to close with?

>> KYUNG SIN PARK:  So, on the policy proposal:  I think the CDA 230 type of safe harbor goes too far.  I think it gives a bad name to intermediary safe harbors because it protects platforms from liability even for things they know to be illegal.  Something like DMCA 512, the copyright notice and takedown system, incentivizes platforms to take down content, but voluntarily, so from a constitutional point of view it is okay, and they also respond to reinstatement requests made in good faith by authors to bring back content that is lawful.

Now, another pitfall ‑‑

>> MODERATOR:  I don't think we have time for another, sorry.

>> KYUNG SIN PARK:  Another pitfall of mandatory takedown regulation is that it entrenches the currently dominant platforms in continuing dominance because, as Facebook just said, you have 15,000 reviewers.  A platform that wants to compete with Facebook would have to have the financial resources to hire that many people and to be part of something like GIFCT, so that's another policy reason against mandatory takedown regulation.

>> JORDAN CARTER:  Thank you.  Along the panel briefly, please.

>> GERT BILLEN:  We should not only talk about online; we have to talk about real life.  And what I think is very important is to exchange best practice, to exchange research.  What are the reasons that people become extremists?  What could we do to avoid it?  And what kind of repression is necessary at which stage?  Therefore it should not only be a discussion about social networks.  They have an important role, and they have to understand what their digital responsibility means, but we have to widen the discussion and learn from each other.

>> JORDAN CARTER:  Thank you.

>> EUNPIL CHOI:  This is a complicated issue.  On the corporate side, we have to keep our duty and responsibility in this very extreme, changing environment, and we have to react agilely to it; corporations should be given both responsibility and immunity.  And the most important thing, I think, is digital and media literacy.  It is very important for keeping us safe, and something for all of us to keep in mind.

>> COURTNEY GREGOIRE:  I do have to cite a valuable resource now.  I think this panel is emblematic of what the President of Microsoft articulated:  there are tools and weapons in the digital age.  The tools generated to create economic opportunity and the free flow of information can be weaponized.  On the role the IGF can play as a multistakeholder Forum:  it's why the Christchurch Call remains important.  If we're talking about a whole-of-society problem, we need to articulate what we're trying to solve and be very clear about the responsibilities of each sector:  Government, Civil Society, and industry.  Acknowledging that technology has put tools out there that can be weaponized, we recognize our responsibility, but if we don't all play a role, we're going to see all of the harms we've talked about:  the actual suppression of human rights; the fragmentation of a global structure, undermining the promise of what technology can bring in economic opportunity and information opportunity; and the fundamental public harms that we've discussed.

So this conversation, as it continues, I will say needs to be free and transparent about what those conflict‑of‑law implications are from the broader perspective.

>> JORDAN CARTER:  Thank you.  Next?

>> PAUL ASH:  Just to reiterate:  This is hard.  And it's not just about the Internet.  It's about a holistic approach to dealing with harm and victims’ rights.  It's easy for us to come to conferences and spend time talking about what we can't do.  The Christchurch Call was intended to set out things we can do, and that's the right thing to do.  The easy thing to do would have been just legislation, or slagging off the companies.  We didn't.  We want to do the right thing.  Let's do it together.

>> YUDHANJAYA WIJERATNE:  I'm going to use the standard researcher's cop‑out answer:  More research is required.  But honestly, more research into languages, into the communities affected, into why these things actually manifest online, how they actually appear, and the historical conditions that led them there.  And the second is for Governments to encourage first‑responder civic tech efforts: not just people who can write misinformation or fact‑checker applications, but people who can actually discuss counter‑narratives and get those things out there.  Because there's always going to be this delay between what we're doing here and people being affected, and that gap needs to be filled.

>> JORDAN CARTER:  Thanks.

>> BRIAN FISHMAN:  Quickly, two things.  One is that sometimes ambiguity in law and in regulation leads to conservatism on the part of companies.  This is particularly true, from my perspective, in terms of data sharing with researchers and academics, and potentially with other Governments.

There are times when we would like to share information with researchers so that they can give us insights, and we do not feel like we can, or we risk severe penalties because of regulations like GDPR.  A lot of that may be not because GDPR prohibits it, but because no one is sure yet what GDPR actually means.  The last point I'd make is about GIFCT more generally.  As we come together in that effort, I think it's important ‑‑ and it's why we've come here today to describe GIFCT ‑‑ because we think the IGF is an important place, and there are a lot of stakeholders here who are important to that effort and whom we want to reach out to more.  We haven't done that over the past two years, but as we go forward with the new organization, we hope very much you'll be seeing more of us.  Thanks.

>> SHARRI CLARK:  In addition to the challenges of defining terrorist content ‑‑ and we could talk for days about that alone ‑‑ I think one of the issues we need to address is an increasingly pressing question:  What does success look like in this space?  We've already discussed a little about roles and responsibilities in addressing the problem, but I think the questions are:  Are we really looking at an Internet in which there is no objectionable or harmful content, as deemed by various Governments?

Are we expecting users to be educated enough to identify and address terrorist propaganda or other content online?  Are companies responsible for removing anything that is harmful or objectionable?

I think that removing all such content from the Internet, and many of these objectives, are actually not reconcilable with our belief in human rights and fundamental freedoms.  So as a policy recommendation, we really need to seek a very difficult but important balance between strong security ‑‑ I emphasize strong security ‑‑ and respect for human rights, without even a chilling effect on Freedom of Expression.

And I think it's important that we remember that the decisions that we, as Governments especially, are making right now, either collectively or individually, can have and probably will have a huge and significant impact on the future of the Internet as we know it.  On the IGF's role, we are very supportive of the productive role that the IGF plays in bringing together stakeholders and discussing these issues, and we would be very supportive of having the IGF in Poland next year continue this conversation, perhaps even broadening it, if possible ‑‑ that would be many sessions, I'm afraid ‑‑ to other sorts of online content issues, if appropriate.

>> JORDAN CARTER:  Thank you.

>> EDISON LANZA:  I think a race toward censorship or criminalization is not the better response.  We must be strict in defining what kind of expression is protected under international law and what kind of expression that incites violence or terrorism is not protected, and it is this narrow definition we need when taking decisions about taking down or banning this kind of information.

And I also agree that we should not move to transfer all the responsibility to the platforms to take down content or otherwise solve the problem.  I think the different actors need to build a multistakeholder approach to this issue, and a counter‑narrative in fact, and also to denounce the use of this narrative by many Governments to persecute dissidents or legitimate protests.

>> JORDAN CARTER:  Thank you.  Thank you all.  What I think the discussion shows is that there are subtle but important differences in perspective among a whole range of actors.

But we can have a civilized discussion about them, so a Forum like the IGF that brings these stakeholder groups together is a really helpful checking‑in point and a Forum for discussion.

And it may be that we can't come to global agreement about how to deal with these things, but we can be better informed about the approaches that we are taking, and through that, come to a better understanding and keep the dialogue going.

So I don't think we fixed the issue, but I don't think that was our job.  Thank you to all of the panelists for taking the time out to be here today.  Thank you to Susan, Silvia, and Jutta who put this together on the MAG side of things, and thank you all for participating and let's go get some lunch.  Thank you.