IGF 2021 – Day 2 – WS #17 Content Moderation BEYOND Social Media

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> We all live in a digital world.  We all need it to be open and safe. We all want to trust. 

>> And to be trusted. 

>> We all despise control. 

>> And desire freedom. 

>> We are all united. 

>> JIM PRENDERGAST: Good morning, good afternoon, good evening, everybody.  My name is Jim Prendergast with the Galway Strategy Group, coming to you from a very dark and cold Washington, D.C., this morning.  I'd like to welcome you to IGF workshop number 17, Content Moderation Beyond Social Media.  Now, if you're able to look at the session description on the website, I'll let you know that some things have changed.  As they say, life gets in the way.  We've had a little bit of a change in our panelists.  So what I'll do is quickly introduce our team that will be with you today.  When we started working through how we wanted the session to go, we all agreed that we want this to be a very interactive discussion.  We want this to be highly participatory.  We only have two panelists, so to speak, and they're going to keep their remarks short.  We're very much interested in what both our in‑person audience and our online audience have to say on this topic.  I know there's no shortage of thoughts and opinions.  And we hope to have a very thorough and vibrant discussion. 

So joining us today in person, I see Courtney Radsch, who is our moderator in person.  She'll be trying to manage things there in the room.  As I mentioned, I'm Jim Prendergast, remote moderator for the session.  We also have Sam Dickinson, who is our rapporteur, who we'll actually hear from towards the end as we do our wrap‑up.  And then our two conversants/discussants/panelists.  We have Liz Thomas, who is the Director of Public Safety and Public Policy at Microsoft.  I'm sure I just slaughtered that.  I apologize.  Liz is coming to us from New Zealand.  And Jordan Carter, who is the CEO of InternetNZ, the ccTLD manager for .NZ, joining us from New Zealand.  So we're spanning the globe, so to speak, with everybody on the session, but we're skewing heavily towards Oceania, which is unusual, to say the least.  Courtney, with that, I'll throw it over to you, and you can get us going on this adventure. 

>> COURTNEY RADSCH: Great.  Well, thank you so much, Jim.  And thank you to everyone who is in the room.  We actually have a packed room.  We’re in a small one, but it's really great to see so many people in person as well as everyone who’s joining us online, on Zoom, so welcome to hybridity. 

Today we are discussing one of the most important topics, I think, which is content moderation beyond social media.  We have heard ‑‑ please join us if you want to come in.  You can grab a chair from up here.  Take it into the audience, like I said, we have a full room, which is great, but we have a lot of seats up here for speakers that are not filled.  So you're welcome to grab a seat and join me up here.  Or pull it out to the audience. 

So as I was saying, content moderation beyond social media.  So much of what we discuss today is about Facebook, Google, Twitter, some of the big social media platforms that do indeed govern so much of our public sphere, what we say, how we say it, who gets to see what we say.  But many of the debates, many of the issues that are truly at the core of what we should be discussing, especially as we get into conversations about regulation, co‑regulation, self‑regulation, and all of those good things have to do with other types of services, smaller platforms, competitors to social media, shopping services, recommendation services, other layers of the so‑called Internet stack.  What role should, you know, content moderation play in the DNS, in the IP services, in cloud service providers, data centers?  We are going to delve into these issues of content moderation beyond social media.  We want to talk about and especially hear from you about whether we need to take more nuanced approaches to content moderation, or should we be looking for a one‑size‑fits‑all policy? 

Should we come up with some key principles that could really form the basis for a multistakeholder framework for understanding content moderation beyond social media?  These questions are all really in flux, and I'm super delighted to see so many people here and online who are interested in thinking this through with us. 

Before I open it up to questions, I want to make sure that everyone knows we're going to take questions and comments.  This is a great session.  This is not one where you just get one minute to ask a question.  We want to hear your thoughts on these topics as well.  First, we're going to hear from Liz.  She's the Director of Public Policy for Digital Safety at Microsoft and formerly worked for the government of New Zealand.  Then we're going to hear from Jordan Carter, who is the CEO of InternetNZ, the ccTLD manager for .NZ, and a longtime participant in Internet governance in the Asia‑Pacific, regionally and globally.  And you're going to hear from me as well.  So, again, my name is Courtney Radsch.  I am an advisory council member for Ranking Digital Rights, which helped organize this session.  They put out the Corporate Accountability Index, which assesses how platforms of various different kinds and telecom providers are doing in terms of public accountability, transparency, and adherence to human rights.  I have a lot of other hats that I wear, but that's the one I'm wearing today. 

So without further ado, I'd like to turn it over to Liz and then to Jordan and, again, we're going to open this up for a conversation, so I invite you to use the chat.  And I invite you to raise your hand, and then we will integrate these into this discussion.  We have 90 minutes.  So we have plenty of time for this.  And with that, Liz, let me hand it over to you to kind of give us your perspective from Microsoft on how you're thinking about content moderation beyond social media. 

>> LIZ THOMAS: Thanks so much, Courtney.  And I did want to start by saying a big thank you to you, to Jim, to Jordan, and Samantha for being here to participate and to shepherd the conversation today.  And it's really exciting to hear that there's a full room at the other end.  So looking forward to a really good conversation on this.  And I think this might be one of the most New Zealand introductions to a conversation that I've heard in a while.  So if at any point we are getting too far into a New Zealand accent, please do also let us know. 

And it's really great to have the opportunity to speak on this.  And I do want to take a moment at the outset to put this conversation into context.  And that context really is that the Internet Governance Forum in 2021 is another staging point in what's been a really busy year, albeit remotely, of Internet governance, of public policy, and of regulatory activity.  And I think, you know, from the perspective of someone who's been involved in a lot of conversations, though not even close to all of the conversations, when it comes to content regulation and content moderation, there is so much happening in this space.  Particularly, sitting where I sit, I'm seeing a lot of the work that's happening around the world to develop a really wide range of content regulation, whether that's in response to specific incidents or to try to deal with a wide‑ranging set of concerns around online harms and unlawful content. 

And we see that a lot of this activity has social media in its sights.  And that's something that might be really carefully defined in those regulations or those measures.  Or it's something that might actually capture a much wider range of services that have really different features or functions.  And as we've kind of noted in the outline for this session, and as you all will know, when we're talking in a social media context, those platforms often have content moderation options that go beyond that take‑down or leave‑up binary.  And those include tools like labeling, interstitials, downranking and other things.  But, of course, this isn't the case across every part of the Internet ecosystem, let alone when you start to move down the technology stack. 

And so from where I sit at Microsoft, and looking at a particular set of public policy issues in a company that has a really wide range of products and services, including enterprise and cloud services, we do see a need to avoid taking a one‑size‑fits‑all approach to content moderation issues and especially to any digital safety regulation.  And that includes both self‑regulatory measures alongside sort of strict legislative measures.  And that really means avoiding those one-size-fits-all solutions both within layers of the stack, up and down this stack, and between enterprise and consumer products. 

So for us, you know, looking at only a single layer or indeed a subset of applications on a single layer, it really ends up obscuring some of the complexity in this space.  And, you know, unfortunately there's no getting around that fact.  We all know that content moderation is complex, and we can't get to effective targeted approaches to dealing with these challenges and issues without really understanding and embracing that complexity. 

So without thinking about the layers of the stack, for instance, we risk missing some important nuance about the relationship between a provider and the ultimate end user, or the extent to which that provider actually has options to deal with specific content moderation challenges.  And, of course, the further down the stack you go, and I suspect Jordan may touch on this from a domain perspective, the more restrictive those options become, and that means in some instances the only option may actually be deplatforming. 

And this is also the case when we consider enterprise as opposed to consumer services.  So for us as Microsoft, where we are providing enterprise services, we do not have a direct relationship with the ultimate end user.  We hold that relationship with our enterprise customer.  And for a combination of legal and privacy and technical reasons, we generally are unable to moderate individual‑specific pieces of content.  And so the degree of control that we exercise in that scenario is fundamentally different to that on a host of consumer services. 

And so part of what we want to do is really to help regulators and others understand that these services do have different risk profiles, different expectations of privacy, and different contractual obligations in play.  And where these differences are lost or conflated, whether in regulatory or other measures, it can result in challenges, both for the users that regulation is intended to protect and for the wider Digital Economy. 

And this is, of course, without going into some of the significant differences in risk profile and user expectations and functionality, even among different consumer hosted services.  And I know it's trite to say that the Digital Economy is the economy, but, you know, on an average day, for a normal person, your day might include using email, a general social media platform, search services, a dating app, checking the news, accessing on‑demand video.  And all of those things come with a different set of expectations around what users can expect from that, around the kind of information they expect to access, their privacy expectations and everything else. 

And so that means when we're thinking about content moderation and technology, it's really important to remember the size of the ecosystem as a whole and the hugely diverse range of services that could actually be implicated in the initiative, and some of the size, maturity and other differences between those providers. 

So as Courtney's kind of flagged, you know, we do want to keep our remarks brief today because what we want to talk with you all about is how we effectively flag some of those important differences and equally why it's so important to make these kind of distinctions.  And actually for a provider like us, what we can learn from these kind of conversations about moderation across these different kinds of platforms.  You know, this is a conversation that's rapidly developing, and it's a changing ecosystem.  So how can we help put forward and think through a principles‑based approach to these issues?  And so I'll leave it there.  I look forward to hearing from you all in the course of the discussion.  And I see, as I’ve been talking, that Courtney's been beckoning other people into the room, which I feel like is a great start.  So I look forward to hearing from Jordan and the conversation as it flows. 

>> COURTNEY RADSCH: Great.  Thank you so much.  We have a full house.  And although we have only one speaker up here, the speakers' table is full of audience members because this is such a critical topic that so many people are interested in.  So thanks, Liz.  Definitely hearing from you: avoid one size fits all.  Not only do we need to think about different layers of the stack and different types of services, but one of these considerations is enterprise versus consumer‑facing services and the degree of control, as you mentioned.  What type of staffing do these approaches require?  How do we think about an approach that doesn't require having massive amounts of resources in order to moderate content?  What could that do to the Digital Economy, to competition, et cetera? 

Jordan, I'd like to hear from you.  Do you agree with Liz, and what's your perspective on content moderation beyond social media? 

>> JORDAN CARTER: Well, thank you, Courtney, and everyone, hello from the bottom of the world.  Very sad not to be joining you in Katowice in Poland.  I'm hoping that normal service will resume and we'll be able to attend IGFs in 2022.  Fingers crossed.  It's been a long time since Berlin. 

Look, I just want to start with some broad opening comments.  Content moderation as a problem will keep coming up because we are mediating and moderating more and more of our lives through online forums.  And while we human beings are wonderful creatures, we are also ones that occasionally do things that cause harm to other people.  And the moderation challenge is most challenging, I think, in those enormous‑scale social networks that we all know, because of the breadth and impact that they can have on things.  And you only need to think about the impact of the pandemic and associated misinformation there to see exactly what that means.  But it's just one example among many that you'll be able to think of. 

And because of the vast diversity of the types of services that we use, and one size fits all isn't even the beginning of a starter.  How do you say that a framework that might be appropriate to give regulatory encouragement to a platform like Facebook is going to work for a platform like Telegram, is going to work for a platform like Tinder, that's going to work for a platform like eBay, you know?  And that's just some very obvious, big examples. 

That's the first thing.  This problem isn't going to go away.  And it isn't clean cut because it isn't just content that's obviously harmful that becomes the target of regulation.  It's the precursor content that can drive harm or mobilize people to doing things.  And so it's not just stuff that's immediately bad, but it's stuff that can encourage people in that direction.  And it's stuff that has online impacts, and it's stuff that has offline impacts. 

And in dealing with these, these are hard problems to grapple with.  And the technology sector, I don't think, has the social license or mandate to answer all of the questions about how to handle it, precisely because of the breadth and depth of the challenges that this is giving rise to. 

And a point about infrastructure providers.  You know, I work at a domain name registry.  We don't have the option of taking down a post or sending a message to a user on a server.  There is no granularity to what we can do.  We can take a domain name out of the DNS, and then no service attached to that name will work.  And it's a bit like an on and off switch.  It isn't moderation.  It's cancellation, if you like. 

And so that's something that we should all keep in mind: the complicated, messy reality of this just in terms of the diversity of services.  And then introduce that complexity and diversity to the enormous diversity of our societies and cultures, our religions, our languages.  The exact same words or the exact same image in two different places, or among two different groups of people, can have a completely different meaning.  An innocuous phrase to one person could be a very real and dangerous incitement to violence to another.  So that is most difficult for the biggest global platforms.  But it's also difficult for the concept of any kind of harmonized policy or regulatory approach because, you know, trying to account for the bottom line amid that enormous human diversity, which we all celebrate, is an enormously challenging thing. 

And the concentration of Internet services, which I think people have mentioned, does complicate this because, you know, the old, perhaps naive, early Internet thinking that we could choose a different service, choose one that's got the policies that you like, runs slap bang into the network effects that mean we all want to be on the platforms where everyone else is.  That's how you communicate most easily. 

And then on the other side, adding more complexity, some of the players are very small.  My organization has got five or six people working on the whole compliance and enforcement area, and three quarters of a million domain names.  Do you really want us making decisions about what should and should not be on the Internet?  And then the question is, do we want our government to be making those calls through due process?  And if our government starts doing it, will it start doing what jurisdictions like the European Union have been doing and saying, you know, we're not just going to have our laws affect our services or our industries; we're going to give them a global application?  So if you think about the GDPR, or possibly the Digital Services Act that's under negotiation in the EU now, you know, as a provider of domain names that are open for anyone in the world to register, in theory we would have to abide by that legislation through a process in which we have no voice, and, you know, sending someone to Brussels to advocate for us on that is not high on the list of things to do. 

So that indicates that the discussion needs to be a sophisticated and careful one involving all of the stakeholders who are affected, which itself will be a messy and grisly process.  But I come back to the point I made at the beginning.  Leaving it just to the tech sector isn't the right answer.  Private sector decision‑making, especially down the stack of the Internet, becomes extremely binary in its impact.  And the enormous diversity of societies and services that are involved in this question means that a one‑size‑fits‑all approach simply isn't going to be viable.  So, you know, some norm setting and convergence of policy towards things like, for example, the Santa Clara Principles could be a way to explore.  Co‑regulatory approaches could be ways to explore.  But I think we want to avoid the situation where every country in the world is passing legislation about this question and purporting that it should apply to all services around the world that give its citizens access.  That way lies a very messy and unsatisfactory situation. 

So I hope I’ve said something in all that to provoke you, and I'll leave it back to you, Courtney.  Thank you. 

>> COURTNEY RADSCH: Thank you, Jordan.  Indeed.  You know, content moderation is not just about what happens online.  Many people know that there are significant offline impacts.  And one of the impetuses for this workshop was the deplatforming of then‑U.S. President Trump.  And I don't want to debate the right or wrong of that.  But I want to talk about a couple of the things that happened to get to this point about how we need to think about content moderation beyond just social media. 

So we saw ‑‑ and we saw this also with a news website before; we've seen this with other types of content providers ‑‑ where they've been kicked off social media.  We then see payment services ‑‑ for example, PayPal or Venmo, Stripe, credit card services ‑‑ choose to deny the ability to process payments for that user or that politician.  And, Jordan, you mentioned that domain name and hosting services can in some cases choose to kick people off their services.  But in other cases that technical capability doesn't work.  Not only that, but you don't have staff who are even tasked with doing that. 

But then we see that there is content moderation in terms of which platforms are allowed to be hosted on, say, Cloudflare and other digital security and cybersecurity services.  Is it a good thing to open them up to cyber attacks?  You know, so this is very complex.  And we heard, you know, a lot about both the mandate that companies have to handle it and what that means for resources internally, and the resources it takes to do regulatory advocacy. 

So we've kind of covered the first thing that I wanted to hear from you all about ‑‑ the consequences of the overfocus on social media ‑‑ and done some scene setting.  I'm going to go back to Jordan and Liz.  But before that, I'm going to ask people on Zoom and in the room ‑‑ we don't have the polling function, unfortunately, on the platform, so we're going to do this old‑school.  I want to ask people, just by a show of hands on Zoom, use your little reaction button, and in the room: do you think that domain name services, that level of service, should be doing content moderation, or should they not be doing content moderation?  So show of hands on Zoom if you think they should be doing some sort of content moderation, and show of hands in the room.  I'm going to ask Jim to count those, and I'll count this, and then we will reveal that after we hear from Jordan and Liz about how we can develop more nuanced approaches instead of an overfocus on social media. 

So how do we actually get to that, especially as we're talking about some of the regulatory initiatives?  You know, you mentioned, Jordan, the fact that, you know, the EU is adopting these approaches that have impacts and influence far beyond their borders.  So, Jordan, I'm going to go to you first for a couple of minutes to talk about more nuanced approaches and then go to Liz for the same.  And, again, a reminder to put your hands up in the room right now if you think we should be doing DNS‑level content moderation and, again, on Zoom.  All right.  Jordan. 

>> JIM PRENDERGAST: Jordan, before you go, can I just -- for folks who are in the Zoom room, in order to indicate yes or no, you have to actually go to the reactions button, which has that little smiley face with the plus on it.  Click on that and you'll see a green check for yes or red X for no.  And if you do that, it will show up in the participants list.  Thanks. 

>> COURTNEY RADSCH: Great.  Thank you.  All right.  Jordan. 

>> JORDAN CARTER: It's a great question and, of course, if I had the answer in my back pocket, Courtney, I'd probably be a very well‑rewarded policy consultant in national capitals all over the world.  But I don't have a snappy answer for you.  And I don't want to sound like a stick in the mud, but one of the first principles, I think, is around transparency.  And this is one aspect of the EU Digital Services Act that I think does help, because in a lot of these areas, one of the key things that's lacking for scrutiny by other players ‑‑ content regulators that already exist in jurisdictions, NGOs, oversight bodies, researchers and so on ‑‑ is that there's very limited understanding of what happens in any of these services. 

So a transparency mandate can help broadly to understand what is going on in various service providers.  And that reporting can also shed some light on the requests and powers that governments are already using with respect to them.  And that can help and I think ‑‑ I don't know that it's quite on point here, but the ‑‑

>> COURTNEY RADSCH: Sorry, but I want to ‑‑ what do you mean by transparency mandate?  What do you mean by service providers and transparency?  Can you, like, get a little bit more specific on that point?  What would they need to be reporting on? 

>> JORDAN CARTER: So in terms of reporting, there's often a basic question of have you been asked to take content down by people with regulatory authority?  So that's the kind of basic transparency that I'm talking about in this context.  And then, you know, you can go beyond that to ask them to report on decisions that they have taken on their own recognizance that relate maybe to terms of service, maybe to public complaints, maybe to other policy frameworks that they abide by.  So that's ‑‑

>> COURTNEY RADSCH: Thank you. 

>> JORDAN CARTER: ‑‑ the kind of thing that I'm talking about.  And there are sort of other questions around the much bigger question of algorithmic transparency that I don't really want to go into because it links more to the social media side of things, I think, for a lot of us. 

>> COURTNEY RADSCH: We'll keep that for another session. 

>> JORDAN CARTER: Yeah, yeah.  This is beyond that, not on it.  So in the lack of a very specific snappy answer, I might hand it back to you at that point. 

>> COURTNEY RADSCH: Okay.  Thank you.  Liz, a couple of your thoughts about more nuanced approaches. 

>> LIZ THOMAS: Thanks, Courtney.  And I agree with Jordan.  I think, you know, that transparency piece is important.  But even in transparency reporting, and thinking about what's required there, there is also that kind of differentiation between the data that's useful from different services and in different contexts as well.  And I know we've had really great conversations, in some of the other forums in which we engage, about the audiences for transparency and the purpose as well.  So I think it's a great starting point.  But there's even a lot of nuance in the approaches there. 

Coming to the question here, and, again, I know this is going to sound trite and unhelpful in some ways, but part of it is really highlighting some of the complexity, because there can sometimes be an ease of sort of assuming the Internet is a certain subset of platforms.  So part of it is an educational piece: really helping highlight some of the differences between those services and some of the tradeoffs that might be involved, but also bringing together experts who can have bridging conversations.  And part of that is the educational piece, but it's also bringing together thinking around privacy and safety. 

And to some of Jordan's earlier comments as well, bringing in those kind of diverse voices to really help understand how some of these complexities and these tradeoffs really play out for users in a wide range of geographies and to understand where tools or restrictions or legislative measures might be abused or where blunt tools may have a disproportionate impact.  So I think part of it is the conversation we're having here today, which is I know we often sort of talk about not admiring the problem, but part of it is about saying we know this is complex.  Are there ways for us to break down that complexity and help make it manageable? 

>> COURTNEY RADSCH: All right.  Thank you for that.  So we had zero hands being raised in the audience here.  And I think very little responsiveness on Zoom.  So it looks like that attempt to kind of get feedback was a flop, or else everyone agrees, like, that's not where we should be doing content moderation.  But since people in the room can't see the chat, I want to mention that Rowena Shue pointed out, in agreement with Susan Payne, that this idea about DNS service providers is too broad for nuance and that, as Jordan mentioned, it's not really content moderation.  It's disruption of DNS resolution, which may be appropriate in some circumstances depending on context.  So, again, getting into nuance here. 

We have a question from Jose Michaus on Zoom.  And I'd like to invite anyone in the room who has a question or comment at this point to step up to the mic or raise your hand.  But we have a question here for both speakers.  Jose agrees that one size fits all is probably not a good idea.  At the same time, do you see a problem in enacting more and more regulation that might be fragmented or contradictory?  So, Liz and Jordan, thoughts on that?  I think to some extent you've kind of already said that, but if you can get into a little bit more detail.  And especially if you have any examples, that would be really helpful. 

>> LIZ THOMAS: Yeah.  I mean, I'm happy to jump in with the short answer, which is yes.  And I think that comes in two pieces.  One is a tech sector perspective, where it becomes increasingly challenging to operate on a global scale when you are adapting to regulation in a range of different markets in an unharmonized way.  But the wider piece, I think, goes to the kind of conversations at the Internet Governance Forum, which is the risk of what this does to a free, open, and secure Internet and the ability to share information freely, to engage in different ways, and to actually have some of those technical systems, but also those wider conversations, still working together. 

And I think one of the opportunities and challenges for us all is that we're at this point where a range of regulatory measures are being enacted in different jurisdictions.  Some of them are contradictory or taking different approaches.  But we are at a point of opportunity where having a conversation around some of these principled approaches and ideas that can feed into that is an opportunity to help shape that conversation. 

You know, realistically we may not ever achieve perfect harmonization, but I think there is still an opportunity to help take that conversation forward in a way that really highlights some of those unintended consequences. 

>> JORDAN CARTER: Yeah.  I mean, again, the short answer is yes, but the problem here is that the organic Internet approach to solving problems related to Internet and technology policy is decidedly not solving this, right?  So that is what is driving the rise in government attention.  And, you know, we're speaking here in an Internet Governance Forum that by design cannot create harmonized policy outcomes of any sort.  And that's why there's an evolution push going on for the IGF right now, right?  Because, you know, if the Internet governance environment is unable to solve the problems that the Internet is giving rise to, states are going to step in.  So I think that's one of the kind of things for us to keep in mind. 

So there is a problem with it.  And I think one of the ways that we can help is by encouraging regulatory authorities to be part of discussions like this and to understand that, as they go through processes of considering these problems within their jurisdictions, they are engaging with this broader debate.  I know ‑‑ and this is not a useful example in a substantive way, but just a process thing ‑‑ in New Zealand we have separate legislation dating from the 1980s that governs content regulation, medium by medium.  So there's the Broadcasting Act and there's the Films, Videos, and Publications Classification Act, and there's another piece of legislation.  And the government has taken years to do a review of those on a cross‑media basis to try to have a harmonized law framework. 

Now, I might be wrong, but the New Zealand government probably doesn't have officials in this session.  And turning to the IGF as a source of experience is not something the government machine would have the depth of experience necessary to do.  So an organization has to bring some of that to them.  But I think that's just another point: there's a mismatch here between governments' need to protect citizens and the way that Internet norm setting happens.  They're not closing the loop.  I think teasing that out is probably something that we all need to be doing a lot of as a community. 

>> COURTNEY RADSCH: That's a fascinating observation.  And I guess I would ask: are there any New Zealand government representatives in the audience here in person?  No?  Any government representatives at all?  No.  So this is a missed opportunity.  Maybe they'll watch the livestream later. 

You mentioned something, though, about media and the efforts that are being made around media regulation and content.  And that's another aspect of thinking beyond social media.  So I come from the journalism community.  I also co‑chair the Dynamic Coalition on the sustainability of journalism and the news media, which is profoundly interlinked with Internet governance and specifically with content moderation.  Media outlets are putting content out there all the time, and they offer opportunities for engagement with their audiences.  And they, of course, have different liability responsibilities than social media platforms have. 

Similarly, news media are at the mercy of recommendation platforms and visibility algorithms in order to have that information seen, which becomes profoundly impactful during public health emergencies, protests, uprisings, you know, any number of things, not to mention, you know, noncontentious elections, et cetera.  So, you know, just to emphasize the point that I agree, we need nuanced, multifaceted approaches to content moderation because what works for a noneditorial intermediary is going to be different for an editorial intermediary.  It's going to be very different for another type of service provider. 

As we think about ‑‑ and, again, I want to invite anyone in the room or on Zoom to join the conversation, step up to the mic.  Please, go up to this mic here.  And as you're going up there, I'll just note that Jose, who had asked the earlier question, wanted to note that the IGF Youth Ambassadors are holding a virtual booth and will be discussing their initiative, introductory guidelines on regulating intermediaries' liability: an outlook from Latin America.  You can find how to participate in that on the IGF website.  But, again, I think this is an example of how many different types of groups are thinking about this.  And, of course, there's the Dynamic Coalition on Platform Responsibility.  Thank you. 

Please, introduce yourself. 

>> Yes.  So my name is Xavier Brandau.  I'm from civil society.  I'm part of a group, a network of groups, of collectives of citizens fighting hate speech and misinformation online.  So we see a lot of content every day.  We are mostly on social media, actually, so maybe a bit out of the scope of this conversation.  But our take on content is that there is a lot of content because there are a lot of people online in digital spaces, and there's a lot of life.  Here in Katowice, for example, there's a lot of staff at this international congress center because there are a lot of people coming.  So shouldn't we assume that, by definition, content moderation is a huge task, a complex task, and a human‑resource‑intensive task?  So doesn't it boil down to putting in the right resources ‑‑ financial, human resources ‑‑ whatever the level is, whatever the actor is?  And what we see ‑‑ we are on Facebook ‑‑ we see that it's very simple: the resources are not there.  It's as simple as that, you know.  So, yeah, just this question.  Could it be a take, you know, on solving the next challenges? 

>> COURTNEY RADSCH: So before you leave, what country do you -- or region do you primarily work in? 

>> So we are mostly in Europe.  So we have tons of problems in Europe with content moderation.  We are not even, you know, in countries where there's a civil war or where there's, you know. 

>> COURTNEY RADSCH: So let me ask you a question about that because I think, you know, this goes to a point that also Jordan raised earlier, which is about the resources needed.  You're saying content moderation is challenging and you need to devote resources to it.  We know, for example, gaming, which Microsoft is involved in, has a huge issue with content moderation and chatting, et cetera.  So getting beyond ‑‑ let's not talk about Facebook.  Let's talk beyond social media.  So in Europe you have this interest in combatting the monopolization of the sphere by a few powerful actors, especially those based out of the U.S. in Silicon Valley.  On the other hand, what you're saying is that we should only allow companies, I think, that have the resources to do moderation to provide those services.  So isn't there a tension there?  And wouldn't that lead to fewer outlets of expression, because you would have to have resources?

And I want to hear from you and then go to Jordan to get his ‑‑

>> Yeah.  So it's interesting because there was this discussion yesterday about having more democracy in online spaces and digital spaces.  So what we do ‑‑ we are called I Am Here ‑‑ is promote civil courage and citizen participation in online spaces and civic discourse, you know.  And so maybe it's part of this discussion about democracy online.  It's like maybe companies should delegate more power to the communities.  So, for example, we could have moderators that are users, as we see on certain social media and, for example, on Wikipedia, which works great.  So, I mean, the companies have responsibilities, for sure.  We want companies to put much, much more money into it.  But perhaps there are hybrid, you know, ways of functioning where there's more power to the citizens. 

>> COURTNEY RADSCH: Great.  Thank you for entertaining my question.  So, Jordan, what do you think?  You know, you mentioned the challenge of resources, the fact that you don't have this type of staff.  How do you react to that perspective ‑‑ that content moderation is a key issue of our time and you need to put the resources there?  Are you going to go hire people to do this? 

>> JORDAN CARTER: Yeah, yeah, yeah.  I think that in some cases that is right.  So there are some examples where content moderation is happening but is underresourced.  And that's, I think, where organizations have big scale and are not keeping up with the demand for that aspect of their services, which they have chosen to enter into, right? 

If an organization puts up a policy for its services that says certain behavior isn't allowed, and then it's blatantly allowing that behavior because it isn't investing in the approach needed to deal with it, I think that's an easy yes to your question.  But it isn't always resources.  So if you take the case of a country code domain registry, you know, the domain name points to some content on the Internet.  Is that the right point in the chain to say we'll make a decision about content?  Should it be the host of the content?  Should it be an agency that is responsible for content regulation, whatever domain this is?  So I think that's the kind of complexity that needs to be unpacked.  Because we could boost the price of domain names a little at the wholesale level, and we could hire smart and savvy lawyers.  And then the question would be, well, on what basis should we make these decisions?  Is it on the basis of a complaint?  Would we apply New Zealand broadcasting law?  Do we apply some other set of principles?  And how do we choose, you know?  Do we engage with the local community?  So sometimes it's resourcing, but other times mandate, and other times the point in the chain, I think.  So it's, again, very ‑‑ it's complicated. 

>> COURTNEY RADSCH: Okay.  I'm hearing a theme here.  It's complicated.  Liz, let's go to you.  Microsoft is a trillion – multi-trillion‑dollar company.  You have a lot of resources.  So first, can you talk about how you think about content moderation on some of your other platforms?  Let's specifically talk about cloud services, where I think there would be an expectation among users that you're not looking at what we're storing in your cloud. 

On the other hand, let me bring in a specific example of why you might want to be looking at it.  And, again, I want to emphasize that I'm bringing this up because, you know, I think we want to press these issues.  So my asking a question does not imply that I necessarily support that.  But in Vietnam and Cambodia, there are shared Google Drive folders ‑‑ which could easily be Bing folders or whatever your cloud services are called; excuse me for not having the specifics in mind ‑‑ shared folders in the cloud where they share information with content trolling farms in order to plagiarize content that is then repurposed from Facebook onto YouTube and put out on social media and websites around the world, specifically with a focus on Myanmar, which has then been linked to violent conflict there. 

So these sites are sharing, you know, in the cloud.  I also want to hear about how you think about content moderation on your gaming services where you have a lot of, you know, people messaging each other during gaming.  But we also know that women, technology reporters, gamers, have been specifically targeted with really vile harassment in gaming spheres.  So can you talk about those two very different types of content moderation systems?  What sort of resources do you devote to content moderation in those two different types of platforms? 

>> LIZ THOMAS: Thanks, Courtney.  No shortage of questions there.  I mean, I think the first point I'd make would actually go back to the question that was asked about resources, and human resources in particular.  And to go to Jordan's point, it's a yes‑and.  And I think it's a yes‑and because, at the scale at which information flows on the Internet, and the scale and speed at which people are communicating, you know, we do also need to use technical tools to help us in those situations, to do some of that detection legwork for us and, in some instances, you know, really get that reporting down. 

The other point I'd just make is that even with all the human moderators in the world, content moderation is still really hard.  These are complex speech judgments which involve a weighing of rights and a whole range of considerations and context.  And so, to the extent that you can, you know, hire an army, it's still not always going to mean that any platform has the right answer, because these are not easy decisions.  And I don't want to unnecessarily come back to complexity, but it is complex. 

To the points around cloud storage.  So for us, it's OneDrive, which is our consumer cloud storage service.  And we do undertake moderation there and, you know, we do deploy automated tooling, and content goes through a process by which, in certain circumstances, it will be reviewed by humans.  We have those processes in place.  I think the scenario you've raised in particular on that one, Courtney, is a really interesting and challenging one, right, because it goes to a really complicated question, which is conduct and content which is not necessarily an issue on one platform at the individual level but goes to a wider pattern of behavior or cross‑platform conduct, and actually intersections with the real world.  And that's, again ‑‑ you know, I'm saying the word repeatedly, but this is, again, where it becomes really complex.  We don't necessarily have the insights as a single company into that or how that manifests on other platforms.  And, actually, in some instances there will be people uncomfortable with us engaging on the behavior of particular users or particular accounts around activities.  So, you know, this is where we hit some of these really difficult issues around how we appropriately make these judgments on the information we have ‑‑ and, again, humans help ‑‑ but these are still really hard issues and involve a weighing up of really, really complex challenges. 

The final piece, on gaming, is a similar one, right?  You know, Xbox and Xbox Live is a space where a lot of conversation happens.  And there are a range of moderation tools and practices we have there.  And actually ‑‑ I wrote down the phrase because I really liked it ‑‑ "civil courage and citizen participation" really aligns with some of the ways we think about the concept of digital civility, but also the kinds of programs we have in place through the Xbox Ambassadors program, which is kind of a holistic approach to some of these issues.  We want to help create healthy atmospheres and conversations in the first place, before we get to the point where we need to take moderation action.  We want to have the policies in place to act as a deterrent and to warn people about the consequences.  We want to have the technical tools and human moderation capabilities to pick that stuff up when it comes in and to help make some of those complex decisions, and then to take enforcement or moderation action where necessary.  So, you know, these are things we have in place.  And to your point, particularly around women's experiences on gaming platforms and online: we aren't always going to get everything, but we will continue to strive to improve and to hear the lessons learned on that.  So I hope that's kind of touched on a range of the issues that you've raised. 

>> COURTNEY RADSCH: Thank you.  I also want to follow up with you, Liz, on kind of the implication that I hear here ‑‑ this idea of, like, co‑regulation of, you know, content on these platforms.  You raised, Xavier, the example of Wikipedia.  Liz, you talked about the ambassadors and kind of the role of the users.  But there is a significant $2 trillion difference between Wikipedia and Microsoft.  Is it legitimate to ask the community to moderate content on for‑profit platforms?  Do you remunerate your community when they do a really good job?  Should we be looking at volunteer models, again, like, on for‑profit platforms?  And, Liz, I'd just love to get your reaction to that, and then we're going to go to a question in the chat. 

>> LIZ THOMAS: That's a really interesting question, Courtney.  It's one I don't necessarily have an answer to.  And what I think I would say is that it goes a little bit to the kind of business model that you have.  Part of it is that we are a for‑profit platform.  We accept that we have a responsibility to our users as a result of that, and we try to create safe communities.  But part of that is about the communities and the kinds of spaces that we want to create and help users feel like they're a part of.  It's not necessarily going so far as users moderating themselves, but saying: in engaging on this platform, we want you to feel like you can do that in ways which are respectful and healthy and part of a conversation, and to set the tone for the community.  And that's not the same, I think, as perhaps the kind of weight or responsibility that might come with the moderation that exists in some other forums.  But it's a really interesting question and something I'll go and think on. 

>> COURTNEY RADSCH: Great.  All right.  We have a question from Luiza Mulhero, and I'd invite you to unmute and ask your question on Zoom. 

>> Oh, hey.  Thanks for the opportunity.  I'm Luiza.  I'm from Brazil, and I'm also an Internet Society Youth Ambassador.  We were talking about the complexity of Internet service providers and of content moderation beyond social media.  And I'd ask: do you think we should cooperate on a universal taxonomy or classification across jurisdictions covering this diversity of all those intermediaries?  Because we were talking about principles and some consensus, somehow, on the importance of this differentiation.  So I would also ask if you think that this initiative would lead to more accurate regional intermediary regulations across the world, for example?  Thanks. 

>> COURTNEY RADSCH: Thank you so much, Luiza, for that interesting question.  You know, Jordan, what do you think?  Would that be helpful? 

>> JORDAN CARTER: I think it would help, just in this sense.  If you were trying to structure a discussion around, you know, the ways that various problems can be dealt with at various ‑‑ you know, we used to use the word "layers," right, but the world is quite complicated now compared to that ‑‑ having a sense of, you know, maybe starting closest to the act of publication of the speech that's involved, and sort of giving some examples of the kinds of organizations that are in these various categories.  So I'm always a fan of taxonomies.  So my short answer is going to be yes.  And, you know, like, should payment providers be included?  Should cybersecurity infrastructure providers be involved?  Should domain registrars be involved?  It couldn't hurt for people to find an authoritative and well‑worked‑through list of these, just to inform their perspectives. 

I'm not cynical enough to think that it would be a bad thing to do just because it would encourage people to try and regulate all of them.  I do know some people in the deeper, darker parts of the technology community who have that blackout approach.  So I think it would be helpful.  Some exercises like this have happened on narrower slices.  Like, in the Internet & Jurisdiction Policy Network's work on domains, there's been a discussion around trusted notifiers, for example, and an attempt to do a taxonomy of the various things that domain providers can do in dealing with these issues.  So there are micro examples of it.  I don't know anyone who's tried it at that biggest realm. 

>> COURTNEY RADSCH: Great.  Thank you, Jordan.  I will point out the Platform Responsibility Dynamic Coalition did a glossary around some of these content moderation issues, and I think it came out to almost 300 pages.  So it could be very long.  Liz, what do you think?  A global taxonomy ‑‑ would that be a helpful starting point for us? 

>> LIZ THOMAS: I mean, yeah.  As with Jordan, I always like to try to find ways to order information and to make it digestible.  Because part of the challenge we have here is that these are conversations happening among people with a wide range of different understandings and experience.  And if we can find ways to help tell that story effectively ‑‑ and a taxonomy, to my mind, is a really helpful way of doing that ‑‑ then that sounds great to me.  Three hundred pages is sometimes a little tricky, though.  But I'd really love to hear from others in the room and online as well. 

>> COURTNEY RADSCH: Yeah, I'd love to hear that, too.  What do people in the room think?  Would this be a helpful kind of next step?  Because I think we've had these conversations.  We're having them more and more.  There's greater realization of the fact that content moderation does occur beyond social media, at all of these different types of services and layers of the stack.  What do people in the room think?  And, Jim, let me hand it to you.  Are you seeing any comments in the chat?  Any thoughts from our audience?  I'd love to get your insights. 

>> JIM PRENDERGAST: Not at the moment.  I'm trying to encourage participation from the Zoom room participants, but we may have to start calling on people individually to make that happen. 

>> COURTNEY RADSCH: All right.  Well, I don't see anyone here.  Anyone want to jump in with thoughts about that?  Great.  Please, come up to the mic and introduce yourself.  Really excited to get your input.  And if you'd like to, you know, raise another topic, feel free. 

>> Yes.  So I'm (?).  I'm a researcher at the Kosciuszko Institute.  We are an NGO working basically on cybersecurity and new digital technologies, so this is pretty much our area of work.  I would definitely be really into the taxonomy, because I think having very clear roles on how to do this content moderation would help.  Because, you know, the full stack of the Internet is really complex, and there are a bunch of layers that users don't really see on a daily basis.  We are not in direct day‑to‑day contact with many of those entities, with many of those layers.  So I would say there would definitely be a bunch of issues we would have to solve, and a taxonomy would definitely be helpful in that. 

And issues, for example, with transparency, as was already mentioned a few minutes ago.  For example, why such and such content was taken down ‑‑ on what grounds was it taken down or prohibited?  Because that also might be the case.  Was the provider forced to do that by an authority, and on what grounds?  So I think that a taxonomy could definitely help with that, having clear roles.  Also a more harmonized approach across different regions.  Because, you know, Europe might do one thing; the U.S. might decide to go the other way.  And with the Internet and all these technologies being so globalized and so widespread across pretty much the entire world, I think we definitely need a more harmonized approach ‑‑ a worldwide version, sort of. 

>> COURTNEY RADSCH: Can I ask you a question?  I'll go to you in just a sec.  So in the cybersecurity domain, do you think that cybersecurity providers should be involved in content moderation?  And, again, I raise the example of Cloudflare.  Like, does potentially denying an account or a user access to cybersecurity services further the goal of a secure, stable, unified Internet?  Does it detract from it?  Like, how do you view content moderation in cybersecurity? 

>> I think that content moderation is, like, one of those things where, you know, it's all hands on deck ‑‑ you cannot really exclude any of the entities, and cybersecurity, I think, is also pretty much needed in this regard.  So I would say, definitely, yes, they are needed.  But as with many of those projects where a truly multistakeholder approach is necessary, I think you need to have clear roles, so that every entity, every layer of this full stack of the Internet or other technologies, knows what they're doing, what they're supposed to do, and what is not among their responsibilities or tasks. 

>> COURTNEY RADSCH: Thank you.  Appreciate that.  I want to invite you up to the microphone as well.  And as you're getting up there, just note that Farzaneh Badiei writes: "I wonder if we really need a harmonized approach.  If mistakes are made, then we will have harmonized mistakes too."  So I suppose that's another approach.  Please go ahead and introduce yourself. 

>> Thank you.  Hello, everyone.  My name is Frederica.  I'm from the Dominican Republic, and I'm part of the ISOC Dominican Republic Chapter.  As a user and part of civil society, I think content moderation online is really important.  And in my case, for example, we were talking about this taxonomy.  That would be great in order to have, like, some standards around the world.  But I think we cannot forget something, which is, like, the diversity of cultures.  For example, I live in the Caribbean.  The Caribbean is a small region, but there are places where Spanish is the main language, or French, or English.  And the Spanish spoken in the Dominican Republic, for example, maybe doesn't use the same slang as in Chile or other parts of Latin America.  So what I mean is that content moderation online should also consider this diversity in how we talk, how we speak.  For example, maybe an expression I use could be harmful or could be considered hate speech in another country that speaks Spanish.  So this taxonomy maybe will fill this lack of principles that could be used by anyone.  But we should, like, maybe always update it, considering this cultural stuff ‑‑ cultural differences and also context. 

>> COURTNEY RADSCH: Mm‑hmm.  Great.  Well, thank you so much for raising that point.  I think that's really key.  And I want to zoom in on this issue of multilingualism, which is a huge thing in Internet governance.  And the fact is that content moderation at scale depends on algorithms, right?  You can only have good algorithms if you have good data.  You only have good data if you're looking for it and collecting it and doing it in these languages and then, you know, as you're saying, cross‑referencing that with the cultural context, the meaning, and all of that.  And, you know, I was speaking at an OSCE session yesterday on AI and disinformation.  And one of the things that came up, for example: in Ethiopia right now, where there is a violent conflict happening, apparently in a country of 100 million people with six major languages, Facebook only has two of those languages in its integrity systems.  I would say probably most platforms are similar.  I imagine, Jordan, in New Zealand, you probably don't have any of the six major languages of Ethiopia on your staff or in your algorithms.  You know, Liz, I don't know how Microsoft handles that.  You know, I've worked a lot in the Middle East, and I know it's a huge challenge with Arabic.  Content moderation in Arabic is terrible.  Because, again, most of the companies that we look at that have power over the public sphere are U.S.‑ or, you know, European‑language based. 

So this issue gets even more complex when we introduce other languages.  So thank you for raising that.  But you also mentioned this issue of principles.  And I want to turn to our two speakers on principles.  Actually, before I go to our speakers on principles, sorry, I just noticed that there is a hand raised in Zoom from Shilongo Kristophina, and I would like to ask them to unmute and ask their question or make their comment, and then we'll go on to the issue of principles.  Please, go ahead, Shilongo. 

>> Hi.  Good afternoon, everyone.  It's Kristophina.  I don’t know why my name is --

>> COURTNEY RADSCH: Sorry about that. 

>> No problem.  So I work for a think tank in Southern Africa, in Cape Town, and we actually just completed a study on misinformation in the global south, with other partner organizations in different regions of the global south.  And that's one of the things that came up: the challenge for these actors who are countering the infodemic is that, you know, Africa ‑‑ the sub-Saharan region, for instance ‑‑ has diverse languages.  And so, you know, who looks at what?  The platforms don't have, you know, sufficient algorithms to, like, look through the different languages.  But also, a few months ago, Facebook's oversight board had its first case of content moderation in southern Africa ‑‑ in South Africa, actually.  And it was a post that was racially charged.  And I think from the perspective of, you know, the States and from Europe, it looked very, you know, biased.  You know, it's different. 

But given Africa's history of apartheid and racial injustice, if one looks at it from that context, it's not really ‑‑ I don't know.  It's not attacking anyone.  I mean, the board had their own say, but it's a very different context.  And I wanted to speak to the point that was raised about a harmonized approach.  The contexts are different.  Africa, for instance, has a history of, you know, states repressing freedom of speech.  So, you know, given the different contexts, I don't think we can have a globalized approach to content moderation.  And we also have different challenges ‑‑ Internet connectivity, for instance.  Speaking to where I am in Africa, it is very, very low.  And we've actually found that the majority of content that circulates, circulates on radio, for instance.  Radio is a tool that is used widely on the African continent.  Everyone is focusing on, you know, digital platforms.  But then there's radio, which is where the information is traveling, you know, where it's made viral, and offline.  So, you know, we have come across actors who are looking at ways to moderate content offline by, you know, speaking to communities, listening to radio, cutting out clips from local newspapers and, like, you know, going on radio, talking to communities: have you heard this?  Protecting human rights also, you know ‑‑ looking at vulnerable groups and talking about, you know, people living with albinism, people who are part of the LGBTQ community, women, for instance ‑‑ speaking to those communities and to the kinds of biases and the kinds of myths, so to say, that circulate in those communities about these people.  So that's all I wanted to say. 

>> COURTNEY RADSCH: Well, thank you so much for sharing that perspective.  I also want to acknowledge where we are.  We are in Poland, about half an hour from Auschwitz.  So we do have to remember that these are not, you know, just hypothetical issues.  We have to understand, this is very real.  There is no separation between what we're talking about happening online and what can happen in reality.  You know, one of the reasons this is such an interesting session for me ‑‑ I was just reading recently about a study done on Amazon's book recommendation system.  The researchers cleared their cookies and went in with no user profile.  And upon just searching for a couple of key terms, they were very quickly linked to far‑right extremist groups and anti‑vaccination publications ‑‑ algorithms that end up connecting all of these different extremist groups and conspiracy theorists.  So I think that's something else we need to think about: again, this happens across different types of platforms ‑‑ radio, media, et cetera. 

I do want to note this comment from Jacqueline Rowe in the chat: that creating a taxonomy of all possible types of content online, or of content hosts, is like trying to create a taxonomy of all human behavior, and that she feels it is unrealistic given some of the cross‑cultural, cross‑linguistic differences.  And she notes that a principles‑based approach might be more realistic. 

So I want to turn now to that idea of principles, just noting, however, that I think what I'm hearing coming out of this session is that a taxonomy of different types of service providers might be helpful ‑‑ not necessarily of types of content, though maybe I've misunderstood.  But as we come to the conclusion of our session, I want to go to this idea of some key principles, because we're hearing that might be a way to address some of the complexity and the constant evolution of types of services and layers of the Internet. 

So, Liz, let's start with you.  What are some of the key principles that could form a basis for regulatory, co‑regulatory, multistakeholder frameworks for content moderation beyond social media, and then to you, Jordan. 

>> LIZ THOMAS: Sure.  And I'll be relatively brief here, actually, because, as we have shared in a few contexts, we do have a set of principles for how we think about these things, which we'll, you know, continue to iterate and evolve.  And so for me it's really about hearing reflections from the room.  But the things that we think about on a principled level are around developing measures that respect fundamental rights, that recognize that there's no silver bullet for any of these problems, and that no one‑size‑fits‑all solution exists. 

Really thinking about that diversity across services, around maintaining a free and open Internet.  So a range of the kind of principles that I've touched on earlier today and that we've canvassed a little bit in the conversation.  But, again, I do recognize that these sit at a higher level and don't necessarily grapple with some of the complexity that we have raised.  And I'd love to hear how we can take some of this to the next level: using a principled approach, recognizing that complexity and that one size won't fit all, what is the principled level that sits beneath that and helps us take the conversation forward? 

>> COURTNEY RADSCH: I want to press you a little bit to do a little bit of that thinking right now.  So you talk about the importance of, you know, human rights principles.  Okay.  Which ones?  Because some of those are in conflict with each other when we talk about content moderation.  The right to privacy, the right to freedom of expression, the right to life, protection of children, right?  And let's just take CSAM, Child Sexual Abuse Material, as an example, where that content is inherently a violation of children's rights, but the measures taken to find and moderate that type of content across services could potentially be a violation of privacy.  So when you're thinking about, you know, this idea of human rights, how do you operationalize that?  Which ones are you going to focus on over others? 

>> Liz Thomas: I think I'd say two things in response to that.  One is there has to be a dialogue.  And I think there has to be a dialogue between different parties and different perspectives but also a dialogue in a sense between those rights and the way that you balance those in any particular situation and make those judgment calls and be willing to adapt and to change that over time in response to learnings, to norm shifts and other things. 

The second piece is coming back to those fundamental principles around necessity, proportionality, and legality.  That's one of the things that we ask for from governments: clarity, that principle of legality, to help us understand how we navigate and where we are setting restrictions or lines on this stuff.  And that shapes our internal thinking, but we also encourage others to do the same.  And, again, human rights is -- none of these are easy, right?  So when you're thinking about necessity and proportionality, it is a dialogue between these as well. 

>> COURTNEY RADSCH: Thanks, Liz.  Jordan, over to you.  Same question.  What are some of the key principles that could form a basis for this approach? 

>> JORDAN CARTER: I think one of them is that human rights and rights that apply offline apply online.  And that immediately gives rise to one of the biggest challenges about these principles or that framework, which is that these online environments, whether they're social media or payment providers or messaging services, are almost like a new space, right?  And we have public institutions that are dedicated to doing exactly the balancing of rights that you just mentioned, Courtney.  And now we sometimes expect these big private companies whose, you know, one of their abiding purposes is to make money for their shareholders to suddenly engage in a process that in all other areas of life is done by the state.  And so I think that's just an interesting meta‑comment on the challenge there. 

In terms of some principles that seem to provide some guidance: in my research prepping for this, I did come across this thing called the Santa Clara Principles.  And I thought they were quite interesting because, you know, there's stuff about human rights and due process, about making sure that the rules and policies that platforms apply are understandable and visible.  That cultural competence, you know, language, context, is absolutely essential to this problem.  That integrity and explainability of these processes is very important.  And those feel to me like some high‑level principles that do begin to speak to this challenge.  But that said, those principles are about transparency and accountability in content moderation. 

So I think there's a deeper level of principle that we need to sort of tackle, which is, you know, we talk globally about this with an assumed commonality sometimes, I think, of what is the problem that we're trying to solve here?  And there are some people who would project onto a discussion like this the desire to literally stop people being killed as a useful starting point, a de minimis starting point. 

There are other people who think that these media of all sorts should be controlled so as to prevent anyone being offended.  Now, there's a giant chasm between those and there are all sorts of other projections, too.  So I think that one of the principles to be thought about is how we ‑‑ you know, what problem are we trying to solve here and understanding some of the diversities around that would be an interesting thing.  And I see from the chat that a new version of the Santa Clara Principles is being released today in a few hours in a Town Hall where it will be 3:00 or 4:00 a.m. here, so I won't be there. 

>> COURTNEY RADSCH: Well, luckily they will be online.  It has been a big effort by a group of civil society organizations, and part of the reason for doing the Town Hall today is to get wider input.  And it's great to hear that you see those types of principles as holding some promise for the way forward.  So I would definitely invite everyone here in the room and on Zoom to join that discussion. 

Also, please put your links -- Kristiana mentioned a couple of studies -- in the chat so we can include those in the final report, or tag me on Twitter @CourtneyR and we'll make sure that gets in. 

I also wanted to bring in a comment from Farzaneh, that, you know, this content moderation happening beyond social media is happening on platforms that are also far less transparent.  You know, transparency reporting, reporting on content moderation, has become somewhat of a norm on major social media platforms but much less so on other services and in other parts of the stack.  She notes that Apple's App Store only started reporting on takedowns of apps, and the reasons for doing so, in 2018.  But there are many services that don't do any type of transparency reporting.  And so I think we could think about transparency reporting, and transparency more generally, as one of the key principles for any sort of framework.  There are lots of groups and folks working on that. 

And I'll point out, you know, that the New Zealand government has been working on that with respect to the Christchurch Call and in the Global Internet Forum to Counter Terrorism, which are focused on countering violent extremism online and coordinating those types of efforts.  There are also many sessions here at the IGF that are focused on issues of transparency, so I invite you all to look into those. 

I also want to bring in a comment from Luis, and invite anyone here in the room to make a final comment as we wrap up.  Luis notes that when we go down further into the layers of the Internet, the stakes of moderating are higher, so regulation should not be so tight; in this case the principles orienting intermediaries should be those of transparency, accountability, and cooperation.  And Jordan jumped in on the chat to say that he agrees, which I think is a good place for us to wrap up in terms of thinking about this idea of a taxonomy, this idea of getting deeper into some of the complexity of the different types of entities, layers, and services involved.  What types of principles can orient different types of intermediaries?  Because one of the things we've heard from this conversation is that there are different types of intermediaries, and even a single company may play different roles across different types of services.  So I want to invite each of our panelists to make a couple of final remarks and perhaps bring in some of the comments that you've put in the chat so that everyone in the room can also participate in that conversation.  And then I'm going to turn it over to Sam, Samantha, for some final thoughts about what we've heard today and next steps.  So, Jordan, over to you first. 

>> JORDAN CARTER: Great.  I was hoping it would be the other way around, but that's okay. 

>> COURTNEY RADSCH: I thought you might be. 

>> JORDAN CARTER: Yeah.  No one wants to go first at the end, right?  Look, I think it's been a useful exploration here.  And there's an abiding theme of complexity.  But I think there is some kind of common recognition that a completely ad hoc, you know, let‑a‑thousand‑flowers‑bloom approach to this does risk creating some unintended side effects.  I think one of the comments in the chat suggested that one of those can be the removal of people's online presence. 

Another can be that services start disappearing from jurisdictions.  If we get this regulatory framework too far out of whack, the liabilities involved, if nothing else, can just lead to withdrawal of service.  So, you know, you might find yourself in a digital no man's land where companies will not be available to you because of the incoherent or problematic regulation that your jurisdiction has chosen to engage in. 

And then another one is the jurisdiction conflicts, you know, where people are legislating for different requirements on the same things.  There's a long‑standing conflict, I think, in parts of the ICANN environment around availability of data about domain name registrations and the conflicts between privacy and law enforcement interests.  So that's just one example of that. 

And I think there is a bit of a theme that's saying principles can help guide this.  And my closing comment on that would be that they can, and so can taxonomies.  But a discussion, as part of that, about what goals we're trying to achieve is really important.  So I'll just leave it at that. 

>> COURTNEY RADSCH: Thank you.  Thank you, Jordan.  That is very important, I think, the fact that you mentioned goals.  And as somebody who has worked in advocacy for the past, you know, decade: you're not going to get to a good end point, you're not going to end up where you want to go, if you don't know what objective you're trying to achieve.  I'm going to go to Liz for final thoughts.  And I saw we had one hand in the audience, so I'll invite that person to come up to the microphone before we go to the final wrap‑up from Samantha. 

But Liz, your kind of final thoughts and comments on these principles, on the taxonomy, and what you've heard over the past hour or so. 

>> Liz Thomas: Thanks, Courtney.  And in light of time, and knowing we have a hand in the audience, I'll be brief.  I would agree with everything Jordan said.  And I think, you know, that piece around understanding what we're trying to achieve is really important.  To the point we made at the start, I'd add two things.  One is bringing regulators into the room, because, you know, this is where they are acting and having these conversations, so part of it is informing those. 

And part of this is, you know, as a tech company representative, recognizing the extent to which this can't and shouldn't rest on us alone.  These are conversations that go beyond us.  And the final point I'd make is one around users, which picks up some of the points raised through this session and in the chat: ultimately the impact of this falls on individual users, their experience, and the harms that can occur online and offline.  And I think it's so important to keep sight of that as we go through these conversations. 

>> COURTNEY RADSCH: Thank you, Liz.  One might even call them people.  A very quick comment from the audience before we move to Samantha Dickinson to wrap us up. 

>> Thank you.  I'm Tapani Tarvainen from Electronic Frontier Finland, not the Foundation.  And I wanted to pick up on Jordan's comment that if moderation is too effective, the conversation will go elsewhere.  Technically, it would already be possible to set up, not a platform, but a medium for discussion on a totally unfederated, peer‑to‑peer basis that would be impossible to moderate, really.  And if we get too effective at moderating the different platforms, people will go there.  The only reason those are not more popular now is that there is no commercial incentive to run them.  We can also see this in social media: if moderation is too effective, people will go to other platforms.  So striving for perfection is not a good idea here.  Just make it good enough. 

>> COURTNEY RADSCH: Okay.  Thank you for that perspective.  Fascinating.  I'd love to just, you know, my own perspective on that, I think that some people may, indeed, go elsewhere if they feel that moderation is not in their interest.  But there are also many people who want to be in safer spaces.  And so I think that's one of the tensions that this whole debate deals with.  So with that, thanks again for everyone's participation.  I want to turn now to Samantha Dickinson to kind of take us to the final stretch.  What have we heard over the past hour and a half?  What are our key takeaways here? 

>> SAMANTHA DICKINSON: Can you hear me? 

>> COURTNEY RADSCH: Yes, we can. 

>> SAMANTHA DICKINSON: Hi.  I first should say I have been tweeting out this session.  So there's a whole heap of reporting if you look at my Twitter account which is sgdickinson, which is probably more coherent than what I will be saying now as I try to summarize. 

First of all, it seems clear that there isn't a one‑size‑fits‑all approach.  There is complexity in terms of the sorts of platforms and services being discussed, the sorts of content and the behaviors of individuals and movements related to content that needs to be moderated, and the sorts of regulations being implemented across jurisdictions.  So as a result of that, there was discussion about whether there needs to be a taxonomy‑based approach or a principles‑based approach.  There are pros and cons for both of those.  In terms of taxonomy, there seemed to be some agreement that a taxonomy of different types of content wouldn't really work, but that a taxonomy of the different types of service providers and platforms would be helpful. 

As well as those two possible future attempts to sort out or organize the issues around content moderation, Jordan noted that it's also important to think about what sort of goals we're trying to achieve, because at the moment there is a range of goals as well.  It's hard to find principles if the goals aren't anywhere near aligned. 

One of the interesting things came up in that last comment, about people moving elsewhere if moderation on a platform is effective.  It linked in with an earlier comment Liz made about setting values for online communities, so that instead of having to take down or moderate content, you proactively encourage that content not happening in the first place. 

I'm just looking through my notes for anything else.  There was also a discussion about algorithms, which is, you know, an emerging issue everywhere: algorithms rely on good data, but good data means you have to look for it, and you have to consider differences in languages and cultures.  This is a complex issue when you have platforms based in a particular country that, because the Internet is borderless, may not be considering the need to look further than the language and culture of their specific country. 

I think that's about all.  Is there anything else I needed to say?  I don't think so. 

>> COURTNEY RADSCH: Thank you so much for that summary, Sam.  And I think that if we've heard anything today, it's about the complexity, but the fact is if we break down that complexity into various different parts, we might actually be able to manage it.  And we'll have to wing it as we go.  I think that's also the lesson of this hybrid session where I want to thank all of the participants who joined us online.  I want to thank the many people who came into the room and have joined me up here and in the audience and participated in the chat, in the room, everywhere.  You know, this was a complex endeavor.  It is a complex topic.  And I think that we've identified a pretty clear next set of steps to take us partly on the road towards thinking about and addressing how we get beyond just thinking about content moderation on social media and actually coming to some of the solutions that can bring us to the next steps. 

So I would like to thank Jim Prendergast.  I'd like to thank Jordan Carter and Liz Thomas as well as Samantha Dickinson for their really fantastic contributions to this discussion.  This is really just the beginning of an ongoing discussion.  Join us as we continue to address this topic in the various institutions that were involved in planning this IGF, in many other sessions that will take place during this Internet Governance Forum, and in the intervening time in the various Dynamic Coalitions and best practice forums, because I think there are many opportunities to really get down to business.  So with that, thank you so much.  And we'll see you at the next IGF. 

>> JORDAN CARTER: Thank you. 

>> JIM PRENDERGAST: Thanks, everybody. 

>> Liz Thomas: Thank you.