IGF 2017 - Day 2 - Room XXV - OF80 Tackling Violent Extremism Online: New Human Rights Challenges for States and Businesses

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Hello, everybody.  Thank you for joining.  It's a busy, busy time for everybody here.  I'm learning from my panelists that it's particularly busy for them, as we didn't necessarily have everything in order, but I think now we have a general lay of the land for the session.  We're going to try to be as efficient as possible because we have an enormous amount of expertise on the panel brought into this conversation, and we want to give everyone as much of a chance to contribute as possible.  I think that, as with the session we're following in this room, this is one of the topics of greatest interest within the IGF this year and within the world community that's looking at these issues generally. 

As you know, we're looking at tackling violent extremism online.  My name is Peggy Hicks.  I work at the Office of the UN High Commissioner for Human Rights.  We, of course, are very interested in looking at this interface between human rights and the digital space.  I just returned from Silicon Valley in the past couple of days.  And one of the things that we're really looking at is how we can make sure that some of the expertise, ideas and frameworks that have been developed in the human rights area are usable and transmittable within the digital space.  You know, that we don't create a whole new ethical approach that doesn't draw on and build from human rights frameworks and the great work that's been done by my colleague I see sitting there, Michael, on hate speech and the Rabat Plan of Action, for example. 

But we won't have time to go into all of that this afternoon.  What we'll try to do is allow each of our panelists here to speak and give their inputs for about four or five minutes.  I'll try to keep to time on that.  And then we'll open it up for questions from all of you and then return to the panel at the end for some concluding remarks. 

So I'd like to first give the floor to the gentleman on my left who probably doesn't need much introduction in these circles.  He's done so much incredible work in reporting on these issues of human rights and the protection of the right to freedom of opinion and expression.  But in particular, of course, he's done incredible work focusing on the role of tech companies in the digital space as well.  So we're very glad to have David Kaye, the U.N. Special Rapporteur, with us.  I just want you to give us a sense of what you see as the challenges in addressing violent extremism online and what are the trends or the possibilities that you see in that space?  Thanks, David. 

>> DAVID KAYE: Thank you for organizing this panel.  So I just want to make a few general points.  This is a really amazing panel.  And there's so many of you here.  So I feel like in other panels, we haven't always left enough time for discussion.  So I'll just make a couple of points, trying to answer this very small question that Peggy has asked: what are the challenges?  So I want to say a couple of things.  The first thing is a generic problem about the term "extremism."   And I think this is important -- this extends beyond the online space, right?  It's really a question about rule of law.  And I think a core problem for dealing with how we restrict, if that's what we want to do, extremism, putting aside the violence part of the question, is what is extremism, right?  How do we define extremism?  And there aren't that many definitions, if any, of extremism, either in national laws or in international human rights law around the world. 

And so that's a problem for public governance, right?  It's a problem for individuals who need to know what the lines are, whether there are any lines at all, between what is legitimate expression and what may be criminalized.  And it's a problem in particular for online platforms, because online platforms are often being told, you need to restrict extremist expression.  But it's hard for them to do because they don't have definitions of what the term might mean. 

Related to that is a very real problem, again, in public law, which is the redefinition of everything as extremism, right?  So ‑‑ and I won't even ‑‑ I'm not going to give any examples because you probably ‑‑ everybody here may have three or four examples in their heads, right, about journalism being redefined as terrorism.  Or minority expression, particularly minority religious expression, being defined as extremism.  I mean, there are just too many examples to go through here. 

[ Lost audio ]

It's a problem of public policy and it's a problem of platforms.  Okay.  So that's the one sort of generic point that I wanted to make.  And then the second -- and then I'll end with the second point -- and if you want sort of the footnote to all this, my colleagues in the OSCE, the Inter-American system and the African human rights system and I all issued a joint declaration on freedom of expression in May of 2016.  So if you just Google joint declaration -- and I guess I just gave a commercial for Google, but it's a small "g" -- if you just look up joint declaration and extremism, then you'll find it.  And we go through these different principles. 

I think one of the major problems right now for the online space is essentially governments on the one hand kind of hijacking terms of service and making demands for takedown of content that they'll call extremist, where they find a kind of wedge in the terms of service that works for them to make the takedown call.  But they're doing so when they themselves could not restrict that expression as a matter of human rights law or as a matter of domestic law. 

And related to this problem is that we don't have much information about how many terms of service takedowns are being requested.  So -- and we don't have a real good sense, I think, of how much is being requested by governments, how much is being requested by individuals, because, of course, a lot of the platform regulation is decentralized right now.  It's, you know, sort of flagging that individuals can adopt in order to restrict or at least get expression that they disagree with investigated, maybe to get people's accounts suspended on different platforms.  So I think these things are obviously connected.  And if we think about them only as problems of the platforms, we're going to miss a whole bigger problem, which is that these are problems of public policy.  There are major, major problems of rule of law around the world that are global, because they're problems in western democratic countries, and they're problems in repressive countries around the world.  And these are not problems that are only solvable -- in the sense that they're solvable -- by the platforms themselves.  So I'll stop there. 

[ No audio ]

>> PANELIST: Thank you, Peggy.  Thank you to your office for putting together this great panel.  It's a privilege to be part of it and to share a panel with you again, of course.  I wish I could show you the slide -- I'm sorry, David (?).  There's a hard case in India that I'd like to talk about in the context of extremism.  There are two sides.  One is the redefining of content that is obviously not extremist content as extremism.  And then the other sort of hard case problem is when there is extremism that takes place but it's wrapped up with important political expression.  And so if you think of any movement, the Irish struggle, any movement that (?) but that the residents of a particular region might call a legitimate political demand, you see that the speech connected to that movement ranges from political arguments, which are protected speech, to what you might call incitement of violence.  And so we've got (?) which is a hotbed for this kind of expression.  And a major social media platform got into a lot of trouble recently because there was an elderly gentleman that was killed in Kashmir, and there was a lot of content that went online related to the killing but also related to the man himself. 

So the slide that I was going to show you is one is a picture.  It's the cover of Kashmir with his face on it.  I'm sure if you search, you'll see it on a search.  It's literally his face and there's the word "Kashmir" underneath.  That was taken off the platform.  And then there was another one of him looking basically like a teenage boy with his mobile phone (?).  And that was taken off as well. 

Academics who said that this is something that Kashmiri people care about had their accounts blocked.  It was one of those cases in which a mass block resulted, covering content that had a relationship to extremism but was also valuable political speech.  It's something I like to describe to people so that you can understand the kinds of decisions that platforms have to make regularly in the context of extremism.  I think I'm going to stop here.  I have a second more controversial example, but I'm not sure it relates to this discussion. 

>> MODERATOR: There we go.  Thanks very much.  And sorry that we're not able to get the technology working at a technology forum.  But I think that is a really good example of taking the general point that David made to the concrete example of how these things play out in practice, in ways that can be very, very troubling in terms of their execution and where we go from here. 

So I'll turn next to Brett Solomon, who is also probably known to many of you as the executive director of Access Now.  And we're looking forward to hearing his comments.  Brett, we wanted to just get your general thoughts on the issues that have been raised on violent extremism so far. 

>> BRETT SOLOMON: Hi.  Thank you very much for allowing me to be on this panel.  I think the reality is that (?) you know, it's a problem that we've been dealing with, actually, for a long time, before the advent of new technology.  I think that, you know, violence, countering violent extremism, et cetera -- you can't hear me?  I've got a mic here.  So I was just saying that there are historical antecedents to this discussion.  The other thing to think about is that we have a whole future ahead of us where we're going to need to grapple with this as well.  So we kind of need to look backwards and forwards to understand the process, the kind of different competing priorities, et cetera, which have been identified by David. 

But I also want to understand, now that we're in this context (?) in the digital age, what is the benefit that we're actually getting from these programs?

And I want to say, like -- and I know platforms will be collecting all of this violent extremist content, but how much actually is there?  And what is the impact of that content in enabling violent extremism in the real world?  And what is the commensurate impact of taking that away?  I'd like to understand these issues, and I think we need to understand these initiatives in the context of necessity and proportionality, because, let's be clear on this, we're essentially enabling, to name it, a massive monitoring and surveillance regime, and allied to that a censorship architecture.  And if we think about the history and the future of this, then what is that architecture going to do when we're not just talking about content?  Because at the moment, in order to remove content, you need to monitor the network, you need to surveil the network.  And in order to remove content, you need censorship.  So these are both rights-infringing activities. 

And largely, as David suggested, they have been outside of the context of the rule of law, outside of the context of judicial oversight or regulation, without properly agreed-upon definitions, in real time.  And often also subject to company terms of service that have no consistency across the sector.  So I'm not denying that there is extremist content online or (?) online, et cetera.  But with the lack of definition and the lack of process, and even with good intent, we're essentially creating very serious infringements upon the right to privacy and the right to freedom of expression.  And then, looking into the future, what about all the other rights?  The right to education, healthcare, water, political participation, et cetera. 

But what happens if you actually add (?) to biometric databases and registration and artificial intelligence?  You actually have a system where we're not just criminalizing content.  We're also criminalizing identity and our capacity, in the future as now, to participate in (?).  So I want to just emphasize that this infrastructure that's been created both by governments and within companies has long-term consequences that we can only begin to imagine.  And I sort of want people to just close their eyes for a second.  Not that I'm really legitimately asking you to do that, but think five years into the future and the cascading of new technology and how that will interface with identity, biometrics, access to essential services, you name it.  They're all going to be linked.  And so it's not just going to be a question of content regulation or content moderation, but of access as people and as individuals and as communities. 

But there are also some questions that I think need to be asked today, and I think they've kind of been touched on here.  What happens when a company is so (?) about keeping up content that they take it down just in case?  And we're seeing that already.  What are the incentives under the German hate speech law, for example, for keeping content up online?  What about remedies?  Like, what are we actually doing when content is removed wrongfully?  And what are the consequences of the future criminalization of identity?

What happens when the state requires a CVE program for a company to actually enter into a jurisdiction?  What happens to the publishing of minority or opposition voices (?)?  What happens when disagreeable news that's called fake news and disagreeable content is equally defined as extremist content?  What happens when hashes of images are shared across all the tech platforms and you actually have no capacity at all, in the monopolized environment, to share any content identified by one company as violent extremist content?  What happens when people try to circumvent this content moderation and get arrested for that, as we're seeing with the increased criminalization of digital security tools?

What happens when companies can't afford a CVE program?  Do we end up with even greater concentration?  So these are just some of the questions that I think we need to think about.  And I just want to end on the issue of gender, because I touched on identity, and on the LGBT community, because I think that in this space it's always important to understand things from a gender perspective and to think about how in many countries LGBT content is seen as extremist content, when for many of us it's a totally lawful expression of identity. 

>> MODERATOR: Thanks very much, Brett.  I think that was a great summary of some of the crucial issues.  I think your questions could lead all of us to stay awake at night and never close our eyes if we tried to answer all of them.  They're very, very thought provoking. 

I'm going to turn now to Fiona Asonga, who is the chief executive officer of the nongovernmental, not-for-profit Technology Service Providers Association of Kenya, which operates the Kenya Internet Exchange Point and a computer incident response team.  And she's worked there as well as a liaison for the private sector with government.  So it's great to have you as part of the panel and to be able to help us bring in the perspective both as you see it from Kenya but also, you know, your direct links with business and sort of the perspectives and the realities of these issues as they're emerging within the context that you're working in.  Thank you. 

>> FIONA ASONGA: Thank you very much.  There are a lot of questions you've asked, and I think some of them we've dealt with in Kenya, because that is one area (?) on what has been.  We have the same challenges that you've raised in terms of having to strike a balance between bringing down content (?) and the right to privacy and the freedom of expression.  However, we have to be honest with ourselves.  With rights, there are responsibilities.  And I think it's very easy for us as (?) So you have a right to freedom of expression.  You have a right to privacy, but those rights are valid insofar as they are not infringing on the rights of others who are also using the same platforms and the same services. 

So in Kenya, we have a National Cohesion and Integration Act, which addresses issues of hate speech and issues of conflict and possible (?) and all this.  And so during the August elections that took place this year, there were incidents (?) We actually have guidelines on how social media platforms should be used.  We've got guidelines on how content should be handled and what kind of content should be accessible (?) channels.  There are very clear rules and guidelines that need to be followed by the different service providers and the end users. 

(?) Extreme content, and the reason we've done this is because of our proximity to Somalia.  We have seen Al Shabaab attacks increase; we used to have them once in six months.  We now have them every week.  Every week there's a bomb somewhere.  Every week there's a kidnapping.  Every week there is a ransom demand and all that kind of stuff.  So our environment has got to change as a business entity.  When you sit at your desk, you're wondering whether you will be the next target.  Will your child be hijacked by Al Shabaab for them to demand a ransom from you?  That has forced the government and the private sector to act differently.  And so we collaborate.  We collaborate with (?) national security intelligence services, with the military intelligence services.  We share information on what we see online.  We've had very good partnerships with platforms such as Facebook and Twitter that have worked very closely with us for the period of time we've had them on board.  And during the elections, we were actually able to go and target individual entities and take them off, just remove individuals offline, because their right to expression was creating a problem for everybody else's right to use the platforms and to feel free and safe.  And if you're going to use the Internet to plan how you're going to set up bombs and how you're going to bring down populations and buildings, and we are the ones running the infrastructure (?) knowing that our own families are at risk, we have to think differently.  And we are forced by our environment to think differently. 

So yes, you have rights.  (?) The Internet in Kenya is accessible.  You have a right to communicate.  You have a right to use it.  But use it in a responsible manner, so that your rights do not infringe on the rights of everybody else.  All of us can drive.  You have a right to drive or not to drive.  But that doesn't mean (?) Why do we respect that?  Why can't we do the same online?  We should be able to handle the online space in the same way that we handle our physical space.  Yes, I have a right to walk, but does that mean that I'm going to walk out of this room just because I have a right to walk?  If I did that, you would definitely throw me out and you'd say she's crazy.  That is what's real.  We have discussed it, we have looked at it, and so we have laws and regulations that govern those rights so that we can have an appropriate balance between the right to privacy and the right to freedom of expression.  Because if we don't have that, if we advocate for rights only, then we'll find ourselves in a situation where we are pushing for the rights but nobody's taking responsibility for how those rights are abused. 

And I think that is what extreme content is.  What we've done is we've been able to get support from most of the content managers, and yes, we deal with Al Shabaab.  We get them offline and we deal with them as individuals, because technically it's possible for us to (?) to know where you are and what you're doing.  (?) We use the SIM card registration.  And once we get an indication from security that they want to investigate, (?) we are able to give them access.  And we keep track.  We actually keep track of how many of those cases come in (?).  So we've tried to -- (?) to try and address the whole issue a bit differently, because if we sleep on the job, it will be my husband, my sister, my son who will be blown up.  Thank you. 

>> MODERATOR: Thanks, Fiona.  It's very important for us to hear the perspective of that direct engagement, for the very real physical reasons that you've talked about.  I think you've also, obviously, hit on many of the same issues that have been brought up, about the point that what happens in the physical space has to be carried over online.  I think part of the issue, and what we have to break down, is how do we do that effectively, and who's doing it?  And it's interesting to hear, in particular, how it works in Kenya, who's responsible, and I think some of the key questions we'd like to get back to are on that point, how responsibilities are divided between government and content providers. 

So with that, we will turn to the big "G" Google representative on the panel.  We're very glad to have with us Alexandria Walden, who works on public policy and government relations.  And she works on Google initiatives and other matters dealing with controversial content.  So please, Alex. 

>> ALEXANDRIA WALDEN: I feel like what I really want to do is give a quick overview of what our approach is.  But there have been so many questions.  So I'll just cover that, and I recognize that much will come up in the discussion portion. 

I think it's important to talk about, to give a little context about what we're talking about, at least how YouTube and Google are dealing with these issues.  For YouTube, there are 1 billion people who come to the platform every day.  And I know many of you already have heard this, but there are 400 hours of content uploaded every minute.  So when you think about that, that means that we have to be creative about the ways we are making sure that we are maintaining the policies and the type of platform that we seek to maintain. 

So we value openness.  We are a company committed to free expression values and to access to information.  But it's not anything goes on our platform.  We have a set of community guidelines, content policies, that govern the rules of the road.  Those are publicly available to all users.  And to be clear, most users come to our platform to do, you know, sort of perfectly legitimate things: to watch vlogs or look at cute animals, to, like, watch sports.  And it's also become a place where people watch the news and learn about what's happening around the world in places that previously were inaccessible.  So all of that openness and the ability for others to tell stories and the democratization are part of the open platform that we maintain. 

An important piece of that is figuring out how to deal with exploitation of our platform.  And so, especially when we're talking about terrorist content and hate content, we've done a number of things over the course of the last year to make sure that -- and to demonstrate how -- we are being responsible and drawing responsible lines for dealing with these issues.  So I guess what I'll do is just sort of highlight some things quickly. 

One of the important things for us is that we work collaboratively across the industry.  One of the ways is through the Global Internet Forum to Counter Terrorism.  Another way is a multistakeholder initiative where academics and others come together to talk about the challenges of free expression and privacy vis-a-vis government.  So, you know, there are multiple ways in which we are collaborating to make sure that we're getting the best input, because we recognize that we don't have all of the answers. 

Another piece that I wanted to flag here, because it's come up across the panel, is about transparency, and also Brett's question about how much of our content is really, you know, this hateful and violent content.  Last year, less than 1% of the content that we removed was removed for being violent extremist or hateful content.  Again, when you think about the scale, billions of users, hundreds of hours uploaded every minute, less than 1% of removals are in these categories of content.  But it's really important for us to be responsible in the way that we deal with these categories of content, because we're dealing with a set of highly motivated stakeholders, right -- or not stakeholders, but a set of highly motivated actors.  And so we have to be smart about the way that we're approaching the problem.  One of the ways we do that is through the Trusted Flagger program.  We work with NGOs all around the world that are experts in hate and terrorism and anti-Semitism and Islamophobia, et cetera, all of these things. 

And those folks are able to flag content in bulk for us.  And the purpose of that is not that we're using them to flag at scale; what we're doing is using that information to help us train our classifiers, so that when we're using machine learning to identify this kind of terrorist content or this kind of hateful content, we are training our classifiers on the kind of content that's being flagged by experts across all of these fields.  So we really do rely on partnerships with experts across all of those fields. 
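To make the mechanism described here concrete, the following is a minimal sketch, in Python, of how expert flags could feed a simple text classifier whose scores prioritize content for human review.  The dataset, labels and model choice are illustrative assumptions, not a description of YouTube's production pipeline.

```python
# Minimal sketch: training a text classifier from expert-flagged examples.
# The data, labels and model choice are illustrative assumptions only,
# not a description of any platform's production systems.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: text snippets labelled with expert decisions.
# 1 = flagged as violating the violent-extremism policy, 0 = kept up.
texts = [
    "join us and take up arms against the unbelievers",   # flagged by an expert partner
    "documentary footage of the conflict, with context",  # counter-example kept up
    "recruitment video praising the attack",              # flagged by an expert partner
    "news report analysing extremist propaganda",         # counter-example kept up
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple stand-in
# for whatever features and models a real system would use.
classifier = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression()),
])
classifier.fit(texts, labels)

# New uploads get a policy-violation score; anything above a threshold
# would be routed to human reviewers rather than removed automatically.
score = classifier.predict_proba(["video calling for attacks on civilians"])[0][1]
print(f"review-priority score: {score:.2f}")
```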

I guess the last piece I want to hit on as it relates to transparency is that we, you know, we recognize that it's important for the community to understand what is ‑‑ we have a transparency report already out there.  And what that does cover is government requests.  And that's something that we set out to do years ago, and we've improved upon that process.  Sort of every iteration, there's a little bit more context about the kinds of things governments are requesting. 

But as we move into more and more conversations about how we do content moderation, we recognize that the world is interested in how we do that.

And so we have committed in 2018 to being more transparent about what the flagging process looks like, what in the aggregate those numbers are.  So I just wanted to make sure folks are aware that that is something we recognize, people are interested in, and we are committed to doing more. 

So I'll stop there.  I know there's a lot.  Not a lot of time and a lot of questions. 

>> MODERATOR: You also came in exactly at five minutes.  Very impressive.  Thank you.  I think that was a really amazing range of views on an incredibly difficult topic, and one where giving people five minutes to speak is really unfair, because I'm sure even touching the surface of the issues takes longer. 

But we'll ask the same of all of you.  We'll have a few minutes here for you all to come in with questions or comments, ask people to identify themselves when you're speaking and try to keep your comments as brief as possible so we can have as many questions from all of you as we can fit in.  So I will take any comments.  All the way in the back. 

>> AUDIENCE: Yes, David from Cambridge University.  We've had this conversation about transparency, which has come up in a lot of the sessions.  And I was wondering if the panelists could reflect on how to create the right kind of transparency.  Because on the one hand, I think many citizens feel that they should be able to rely on reasonable local law to complain about content.  But on the other hand, some platforms then provide no accountability at all as to what action they take, and they're taking off a lot of content.  Whereas others would tend to, through maybe a database, and I'm not going to name names here, publish the name of the person, maybe even their physical address in some cases, and the URL, even if it's removed.  If it's a request against a search engine, that could be quite chilling for the privacy and the integrity of that individual.  So can we come up with global standards as to what is appropriate transparency, what is needed, and what is actually a threat to the very reporting which we would tend to think is necessary? 

>> MODERATOR: Great.  Thanks very much.  I'll take a couple of questions and pull them together, please. 

>> AUDIENCE: Hi.  I'm from the Danish human rights institute.  I'm still struggling to understand the scope of private companies' responsibility to protect human rights, and in particular as related to freedom of expression.  So is it a freedom of expression issue if a private platform enforces its terms of service without any government interference?  They enforce their terms of service on a daily basis and, as part of that process, they take down content that is legal under national law, but there is no government involvement.  Is that a freedom of expression issue under international human rights law?  I'm still struggling with that question. 

>> MODERATOR: I'm very glad to say I think we have somebody who knows the answer and has written extensively on it. 

>> AUDIENCE: My question follows on from the previous one.  We know that governments make orders and companies make some efforts to collect those.  But a lot of blocks are instigated by individual people.  Is there transparency there?  The question is also for Mr. Kaye or anyone else who wants to comment. 

>> MODERATOR: I think we'll stop this group there because they're all connected, so I'll come back to the panel, and we'll try to have one more round after that.  Since I mentioned that someone has written extensively on the human rights law question, I feel comfortable referring to David on that. 

>> DAVID KAYE:  You should write a book on it.  So I think, obviously, it's a great question in the way it's framed -- maybe a couple of thoughts, because I think we're all struggling.  I don't have the answer, I don't think.  First is, I like to refer to Article 19 of the ICCPR, which says everyone enjoys the right to freedom of expression.  And I like that because it's been understood by the Human Rights Committee, and as freedom of expression has been interpreted by other bodies, as meaning that individuals have the right, and certainly if we're talking -- I know you talked about the absence of government, but if we're talking about government, the government also has a responsibility, I think, to protect that right to freedom of expression.  So that's, for me, sort of a first point: the terms of human rights law are about the individual's right to express.  And I think Brett used the phrase "rights-respecting" or something like that in his intervention. 

And I think that to the extent that companies interfere with that freedom of expression, then that could be understood as a problem of human rights law, and that companies, just like other third parties, have a responsibility to protect those rights, too. 

Now, one of the problems, I think, is that if we lived in an environment where there were a huge number of competitive platforms, then you might be able to say, well, this platform is, you know, more restrictive of expression, but this one is not.  And so you can move from platform to platform and still have the same reach, let's say, in terms of your expression.  I think that gets harder in spaces that are dominated by one or maybe two platforms.  And so if you look at a place like Myanmar, for example, you know, a lot of public expression, at least in Yangon and Mandalay, the cities, is dominated by one company, basically.  And so that company, that platform, has kind of provided the public space.  And I think thinking that through -- the company as a kind of quasi-public actor -- may be different from how we think about it in a place where there are multiple platforms. 

I'm just throwing out some ideas.  I'm not totally answering your question.  But I do think that we can think about companies as standing in a position of responsibility to protect individual rights as they're defined in human rights law.  And there just may be variations from place to place.  But generally speaking, I think they do have that responsibility. 

>> MODERATOR: Thanks, David.  On the two transparency‑related questions, you closed with that issue, Alex.  Do you want to come in on it, and others on the panel, just signal me. 

>> ALEXANDRIA WALDEN:  As it relates to blocking, we do include information in our transparency report.  A lot of the time those blocks are not reported to us, to YouTube, to our platforms.  When our services are turned off, we are not always notified.  And so oftentimes we, you know, get notice from someone in the community that says, we don't have access to YouTube.  And then we'll see a dip that shows it's been turned off in a place.  So we are not actually always the first to find out that services have been turned off or that there's blocking happening.  But to the extent possible, we always include that information as part of our transparency report, both links to news articles and, you know, a location in the transparency report that tracks the service generally.  And so we do our best to make sure that as much of that information as possible is included (?). 

>> FIONA ASONGA:  I haven't seen a transparency report that covers terms of service takedowns and the kind of blocking and takedowns (?).  I wanted to ask specifically (?) I think many of us wonder what will happen if a flagger flags wrongly or overflags (?)

>> ALEXANDRIA WALDEN:  We don't currently include that information.  In 2018, we do plan to include more information about our flagging.  It's not something that's currently there, but it is something that we are in the process of developing and figuring out how (?).  On the flagger program, the way that it works is that participants must maintain high rates of accuracy in their flagging specifically.  So if there is a participant that is flagging but not flagging accurately, then they would not remain in the program. 
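As an illustration of the accuracy requirement described above, here is a minimal sketch of how a program might track a flagger's accuracy against final review decisions.  The 90% threshold, the data structures and the participant names are assumptions made for the example, not the platform's actual criteria.

```python
# Minimal sketch: tracking a flagger's accuracy against final review decisions.
# The threshold and data structures are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class FlagRecord:
    flagger: str
    upheld: bool  # True if human reviewers agreed the content violated policy

def flagger_accuracy(records: list[FlagRecord], flagger: str) -> float:
    """Share of a flagger's flags that reviewers upheld."""
    own = [r for r in records if r.flagger == flagger]
    if not own:
        return 0.0
    return sum(r.upheld for r in own) / len(own)

def remains_in_program(records: list[FlagRecord], flagger: str,
                       threshold: float = 0.9) -> bool:
    """A flagger stays in the program only while accuracy meets the threshold."""
    return flagger_accuracy(records, flagger) >= threshold

# Hypothetical usage with two made-up participants.
records = [
    FlagRecord("ngo_a", True), FlagRecord("ngo_a", True), FlagRecord("ngo_a", True),
    FlagRecord("ngo_b", False), FlagRecord("ngo_b", True),
]
print(remains_in_program(records, "ngo_a"))  # True: 3/3 accuracy
print(remains_in_program(records, "ngo_b"))  # False: 1/2 accuracy is below 0.9
```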

>> BRETT SOLOMON: Yeah.  On the transparency issue, transparency reports began in 2010 or 2011, largely by Google, which is great, and I think we're starting to see now an industry trend towards transparency reporting, largely in terms of, like, requests from law enforcement.  But there are obviously many other factors here in terms of content removal.  Access Now has issued the Transparency Reporting Index, which has all of the -- or not all, most of the -- transparency reports that have come out from companies.  So we have them all in one place (?).  But the thing is, as I mentioned before, there is a lack of consistency.  So unless we, like, add all of those transparency reports together, we don't have a sense -- and even then -- of the level of removal.  And then on the flip side, we don't have transparency reports from governments.  So we have them from some companies.  So we actually need both sides.  One last thing on transparency.  I think transparency is the easiest thing that a company can do in terms of human rights compliance.  It's the easiest thing.  There are many other things that they need to do on top of that.  But transparency is a first step, and I will give credit to Google for initiating that trend and that norm across the industry. 

>> MODERATOR: Do you want to add about Kenya? 

>> FIONA ASONGA:  I think it's covered in the report, and the authority has a national command center with the different security agencies.  And so on that level, we do have -- I need to check if it's available online (?) that we share with the service providers.  And then every service provider gets their own private requests from law enforcement.  So they keep track of those.  And I don't have the letters, but we are able to provide that report and say for what kind of investigations (?)

>> BRETT SOLOMON: On that, with all due respect, I understand the complexity of the situation in Kenya and many other countries.  The points that you mentioned in terms of the process (audio fading in and out) and I think that the Kenyan community, in the same way as other national populations, (?) and the state has an obligation to provide that information in terms of what is the definition of extremist content.  What is the process of identification of that content?

How is it stored?

How is it collected?  How is it communicated?  (?) Because this content has been removed.  Which members of the Kenyan government (?)?  The national security agency?  Is it the military?  Is it the civilian authorities?  I mean, all of those questions -- and I understand the complexity; protecting populations is absolutely an essential thing (?) the thoughtfulness of it.  But if we're just talking about transparency, there is a real lack in every jurisdiction of vital information about those processes and also -- again -- about the benefit of (?) protecting populations. 

>> FIONA ASONGA: If I can just respond to that, (?) and the guidelines on the procedure for who can ask for that information.  And then we do have a National Cohesion and Integration Commission and people who can handle all of those issues.  So everything is channeled through there, with the exception of terror-related issues, which are handled through the ministry of defense.  Everything else goes through the court process.  (?) Statuses to be able to (?).  But the guidelines on what content is not allowed on social media have been published.  And every Kenyan knows, for example, you don't put up pictures (?) put photos of dead people, you know (?) yeah.  It's not allowed.  (?) Images that are more acceptable.  And communication that is not acceptable.  And (?) for every Kenyan. 

>> MODERATOR: Thank you.  I'm sure we could have a full additional conversation on that.  I think one thing that's interesting is the extent to which states, you know, are needing to jump into this space, and companies as well.  You know, sometimes they're jumping further and faster than perhaps they should before we have everything that we need in place, but that's understandable for the reasons that you've said.  But we need to try to find a way to both continually improve and to take on board some of the comments and the practices that are being developed as we go forward, because both the threats and the measures to address those threats are evolving continuously.  So it's a particularly difficult regulatory environment, I guess is what I'm trying to say.  We have one question back here.  A gentleman here and then you in front, and that will be it, I think. 

>> AUDIENCE: I'm Richard.  My question relates to algorithms.  And I can understand, given this issue of (?) that many platforms use algorithms or automated processes to identify and take down content.  But my question is, given the limitations, particularly in terms of the definition of violent extremism, the biases that can exist, and the lack of transparency in algorithmic processes, (?) response to the issue of violent extremism, or is there a way in which safeguards can be added to make them rights-respecting? 

>> AUDIENCE: Thank you.  I have two very short questions.  One is, how do the different jurisdictions handle the issue of multilingual content, especially content that is framed in a very subtle way where the translation is not obvious?  The second point is, where does the deleted content go?  I think it was two days ago when there was an accident in -- I think in Manchester or Birmingham in the UK.  I heard an announcement on the news that people should not share images of the accident out of concern for the families, but could they please send them to the police for their forensic investigation.  So I'm just wondering whether there are any policies around where removed content goes -- whether content taken out of public consumption goes to assist other things such as research or forensic investigation.  Thank you. 

>> MODERATOR: Thank you.  I had two other people who wanted to come in.  We're almost at time.  If you promise to keep it to as short a question as you possibly can, I'll let both of you go ahead, please. 

>> AUDIENCE: Thank you.  My name is Neil.  I work for the Netherlands.  I'm also responsible for the unit that does content reporting.  I really liked the questions that were shared with us; they are also questions that we have to deal with on a daily basis.  We don't have answers to all the questions, and it's an ongoing process.  We try to find the best way of implementing -- well, our task, the tasks that we have been given.  But the one thing that I'd like to add is about the criteria, because I think it's good to know that, at least how we did it in the Netherlands, terms of service don't really -- they are not important to us.  We only refer content if we think that is important, because we're not judges, but because we think that these are violations of democratically defined laws.  So there's a high bar: it needs to be either incitement to violence or recruitment to weaponry.  And this is the type of content that we report.  I'll leave it there for now.  Thank you. 

>> MODERATOR: Very sorry to make you go so quickly.  Please. 

>> AUDIENCE: Thank you.  I'm from the Communications Authority of Kenya.  I just wanted to give two quick clarifications on something Fiona mentioned, by way of clarifying the issue of (?).  I think we need to make a distinction between content that is terrorism and content that has political implications, national political implications.  Like, if you read about our elections, there's a lot of (?) and things like that.  So in terms of terrorism, the government does have a cybersecurity center that handles that.  But for the other content that is more benign, not terrorism but still extremist in the sense that it creates divisions, religious and all of that, that is handled by a different independent constitutional commission.  I just wanted to clarify that. 

>> MODERATOR: Thanks.  That's very helpful.  I told my panelists I'd give them each a minute at the end to close.  That will also have to bring in any responses to the questions that were raised, including the one on algorithms, the comment from our colleague from the Netherlands and the two comments from Kenya.  So please, maybe I'll go in reverse order if that's okay and start with Alex. 

>> ALEXANDRIA WALDEN:  When it comes to rights-respecting use of algorithms: for us, we use machine learning to help us identify content.  We don't automatically remove content.  There will always be a need for humans to be part of the process.  It's a mix of humans and machines that will get us to operationalize these issues at scale.  And that's an important flag in the way we talk about machine learning. 

When it comes to multilingual content, we have reviewers both on the side that handles government requests and on our policy review end.  Across the board, we have lawyers and reviewers who can review content in a variety of languages from around the world, 24 hours a day, 7 days a week.  That's not to say, of course, that we don't make mistakes sometimes.  And when we do make mistakes, we have an appeals process on YouTube.  Once there is an appeal and we're on notice that there may be an issue, we have a process, and we do reinstate content when we realize we've made an error. 
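Pulling together the two points above, that machine learning only prioritizes content for human review and that an appeal can lead to reinstatement, here is a minimal sketch of such a human-in-the-loop flow.  The thresholds, states and function names are illustrative assumptions and do not describe YouTube's actual systems.

```python
# Minimal sketch of a human-in-the-loop moderation flow with appeals.
# Thresholds, states and function names are illustrative assumptions.
from enum import Enum

class Status(Enum):
    LIVE = "live"
    IN_REVIEW = "in_review"
    REMOVED = "removed"
    REINSTATED = "reinstated"

REVIEW_THRESHOLD = 0.7  # assumed score above which humans take a look

def route_upload(ml_score: float) -> Status:
    """Machine learning only prioritizes review; it never removes on its own."""
    return Status.IN_REVIEW if ml_score >= REVIEW_THRESHOLD else Status.LIVE

def human_review(violates_policy: bool) -> Status:
    """A trained reviewer makes the actual removal decision."""
    return Status.REMOVED if violates_policy else Status.LIVE

def handle_appeal(current: Status, error_found: bool) -> Status:
    """On appeal, content removed in error is reinstated."""
    if current is Status.REMOVED and error_found:
        return Status.REINSTATED
    return current

# Hypothetical walkthrough: a high score triggers review, the reviewer removes,
# and a successful appeal reinstates the content.
status = route_upload(ml_score=0.92)                 # -> Status.IN_REVIEW
if status is Status.IN_REVIEW:
    status = human_review(violates_policy=True)      # -> Status.REMOVED
status = handle_appeal(status, error_found=True)     # -> Status.REINSTATED
print(status)
```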

And finally, I would be remiss if I didn't flag that (?) because it is a very critical part of the way our company sees addressing issues of fraud, hate, extremism, xenophobia and terrorism online.  We have invested millions of dollars in campaigns to help YouTubers better use the platform to amplify their messages and push back against these narratives.  We've done that in addition to investing in NGOs that create innovative ways to tell other stories.  So I just want to flag that as another way we really rely on freedom of expression and on users using their voice to push back against some of the problematic things that we see come up online. 

And then just finally, as we've seen across the panel, these are complex issues without simple answers.  And we recognize that we don't have all of them.  And so we are very committed to continued dialogue and multistakeholder settings to help all of the players really, like, address and problem solve these issues. 

>> PANELIST: I'm going to borrow from the last comment.  I have three things to say.  One is, I just want to emphasize that merging terrorism and law-and-order problems happens a lot.  And it's something that we should guard against, because there is an increasing trend towards a particular (?) of the state, one of the things that Brett described.  That's one. 

Two is, when censorship actually takes place, national security is traditionally the big justification.  And that creates the space for a lot of censorship of speech.  This is the kind of thing that even the Supreme Court has accepted, saying that national security is an exception, but not defining clearly what a proportionate approach to national security should be.  That's something that we need to push back on if we want to make sure that human rights are protected (?). 

The last one sounds silly, but it's a serious problem.  I'm just going to leave you with an image of the ways in which the extremist narrative can be abused.  In India, we've got this phrase that is laughed at but is really dangerous, which is essentially the phrase that they use for interreligious marriage when one of the parties is Muslim.  But this is something that has actually been treated as though it were an act of terrorism.  It's been taken seriously by the Supreme Court, which has actually had a marriage like this investigated.  So I'm putting this out there just to let you know how far the extremist narrative can stretch if you let it go unquestioned. 

>> FIONA ASONGA: I think at the end of the day, you are not able to solve human rights issues with entities acting on their own.  What has worked is the collaborative effort that we have between the private sector, government and civil society, where we are able to sit down together to address the challenges and the issues and (?) overlap and focus on that.  There is a lot still to solve.  But the fact that we've got this center that is available (?) regulations, create laws (?) and support those laws, to be able to act appropriately and keep the environment as safe as we can (?).  I think for me that's what I would give everyone as a takeaway to go with.  The technical environment is changing.  The world is changing; we have to shift.  Thank you. 

>> BRETT SOLOMON: So my takeaway, I think, is where I started: the censorship and surveillance regimes are currently being built, and they are almost impossible to dismantle.  I know that because, in the U.S. context, it's taken 25 years to get one amendment and one piece of legislation to slightly (?) the capacity of the NSA.  So it's almost impossible to dismantle.  That's basically what's being created now: infrastructure, databases, artificial intelligence, where everything is connected.  And so I think that establishing these frameworks, including privatizing enforcement, creating (?) will result, in the name of fighting terrorism, in an adverse and terrible impact upon all of our lives and not just the right to freedom of expression. 

>> DAVID KAYE:  Two quick points.  To the Netherlands, kudos.  It would be nice if all states approached these issues from a democratic perspective.  Unfortunately, they don't.  I just want to make a quick note: we've been talking about platform regulation mainly, almost exclusively.  But one of the real big problems, particularly in the extremism space, is network shutdowns altogether.  This isn't a Dutch problem.  But in many places around the world, whether it's extremism or incidents of terrorism, these lead to total network shutdowns.  And so I don't want us to forget that, and to also remember that even in a time when it's so clear that private actors are involved in regulating content, governments are still the major threat to freedom of expression around the world, at least in most parts of the world. 

And then the second point, just on algorithms and whether algorithmic rules or automation can be rights-respecting.  I think one point here is that governments increasingly see automation as sort of the solution to everything.  So this isn't really just a question of what the platforms are doing.  Because I think, as Alex suggested, there has to be some automation when you've got the scale that these companies are working at.  They've also got human engagement, which, as Sarah Roberts has mentioned at a couple of panels -- and you should look at her writing -- presents some real problems for the individuals who are doing that moderation. 

So I think that it's not a question of whether it can be rights-respecting; it can be.  And so the question, I think, for us as we are all thinking about algorithmic transparency and regulation is: what are the inputs?  It's not just about the code.  It's about what are the human inputs that go into that, and how transparent is that process, so that we can actually have a debate where we have the same information as companies and governments do about what actually is feeding into those automated tools. 

>> MODERATOR: Thanks, David.  I promised we'd try to cram a lot of information into our one-hour session.  I think we've succeeded in doing that.  Just three quick points from me.  One is just to pick up on the issues that I think Brett and others have raised quite clearly: from the U.N. human rights standpoint, one of the key issues here is that if we don't get this right, if we allow sort of vague notions of extremism and overbroad policies and approaches to shape how we do this, we know what has happened in that space in the physical world.  And it will happen in the digital world, and it will happen better, stronger, faster, in terms of going after everybody who's in opposition.  I think the examples we heard are instructive to all of us about how broad and how scary it can get quite quickly if we don't try to regulate and have a real understanding of how we're going to address some of these challenges.  So I think our marching orders are set very clearly by the discussion today. 

My other point I wanted to mention ‑‑ David's got a copy of this report that Mike Posner worked on.  If we're going to give shout‑outs, back there.  Thank you for this.  I think it's a good description of some of the issues.  I read it recently myself.  So I do think there's a lot of great work being done on how to break these things down. 

And then finally I'll just close by, of course, thanking our wonderful panelists and also acknowledging the wonderful team, Tim who helped us pull the panel together today and all my other colleagues that are in the room who have been working on these issues and trying to bring us together to talk about this as we go forward.  So thank you all very much. 

[ Applause ]

(The session ended at 17:10.)