The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
(Video plays:)
>> We all live in a digital world. We all need it to be open and safe. We all want to trust.
>> And to be trusted.
>> We all despise control.
>> And desire freedom.
>> BOTH SPEAKERS: We are all united.
(End video.)
>> VESZNA WESSENAUER: Welcome, everyone, to the session on Best Practices in Content Moderation and Human Rights. I'm Veszna Wessenauer, the moderator of this session. I'm based in Budapest, Hungary, and I work with Ranking Digital Rights. We'll be talking about RDR a little bit during the session, so I won't go into that now.
Today's session is about a topic that is very familiar, and with which many of us probably have a love-hate relationship: content moderation and human rights. Hopefully we'll go home with a few interesting takeaways on how human rights can better guide content moderation, or content governance, on digital platforms. A few quick housekeeping rules before we go to the panelists: if you want to raise a question or ask something, you can do it through the chat or you can simply raise your hand. The first part of the session will be dedicated to the speakers, and we'll go through a few questions together with them. We'll then open up the floor for Q & A from the audience. Hopefully we'll have a few minutes at the end to wrap up and close with a few concluding remarks from each speaker.
So first, I would like to call on our speakers to introduce themselves. Please just say a few words about your organization and how it promotes the rights to free speech and peaceful assembly on internet platforms.
And I'll call Vladimir first. I think he's also there in person in Katowice.
>> VLADIMIR CORTES: Thank you very much.
I'm Vladimir Cortes, the Head of the Digital Rights Program at Article 19 Mexico and Central America. Article 19 is an organization that promotes and defends freedom of expression and access to information. We have been working in Mexico and Central America on the protection of journalists, with particular attention also to freedom of expression and access to information in the digital realm, and to issues related to content moderation, surveillance, the digital divide, and how we protect human rights in the digital space.
Thank you very much and very happy to be here in this session with everyone. Thank you.
>> VESZNA WESSENAUER: Thank you. Abby?
>> ABBY VOLLMER: Hi, thanks, everyone. I'm Abby Vollmer. I am the Director of Platform Policy and Counsel at GitHub. GitHub is the world's largest software development platform, where over 73 million software developers from around the world collaborate on software projects across over 200 million repositories. In my role doing platform policy and counsel, I provide legal support to our Trust and Safety Team and our Policy Team on platform issues, and I also lead our human rights work for the company.
So in thinking about the question that's been posed, free expression, peaceful assembly, and free association, one very relevant way we promote those rights is through our terms of service and the acceptable use policies that are part of them. Our acceptable use policies track and align with those rights under international law, as well as with the limitations on those rights, which we'll get into more, I think, in the discussion today.
>> VESZNA WESSENAUER: Thank you. Berges?
>> BERGES MALU: Hi. I'm Berges Malu. I'm the Senior Director for Public Policy at ShareChat. We have two platforms in India, a short video platform called Moj and our family platform ShareChat, and together we have about 300 million active users. We operate only in Indian languages, and our platform is focused on giving users the ability to have free speech. We are governed by Indian laws and customs, unlike global platforms, which otherwise operate under foreign laws. That's our focus. I'm very happy to be part of this conversation and to talk about how we work towards human rights and transparency as a company.
>> VESZNA WESSENAUER: Thank you. Brenda.
>> BRENDA DVOSKIN: Great to be here. I'm Brenda Dvoskin. I'm not part of an organization or company. I'm writing a doctoral dissertation at Harvard Law. Part of my dissertation looks at civil society stakeholder engagement and the UN Guiding Principles on Business and Human Rights. I hope that might be relevant to this panel.
>> VESZNA WESSENAUER: Thanks, Brenda. And Allison?
>> ALLISON DAVENPORT: Yeah, I'm Allison Davenport. I'm a Senior Public Policy Counsel for the Wikimedia Foundation. The Wikimedia Foundation hosts Wikipedia and a number of other large, free knowledge, open‑source projects like Wikimedia Commons and Wikidata.
In terms of our advocacy work, we largely advocate for free expression worldwide by promoting flexible content moderation models, but also by opposing really restrictive laws, like censorship and surveillance laws.
>> VESZNA WESSENAUER: Thank you. And we also have a very important person, our Rapporteur, Peter.
>> PETER CIHON: Hi, cheers. My name is Peter Cihon, and I'm Policy Manager at GitHub, and our Rapporteur this evening.
>> VESZNA WESSENAUER: Thank you. All right. So we have everyone. That's great.
I think we can start with a very difficult but also very important question. When it comes to freedom of expression, there's always a lot of balancing going on: can we restrict free speech? If we can restrict it, what are the other rights and interests that are actually more important than the right to freedom of expression? And is there a legitimate way to do that? So my first question is: how can platforms weigh the need to protect speech against the risks that harmful speech can bring to users? And is this something that varies from country to country and legislation to legislation, in terms of what there is to do or not to do? I guess it would be interesting to hear from the platforms first. Maybe, Berges, you can tell us something about how you approach this with your platforms.
>> BERGES MALU: Sure. One thing we are very focused on as a platform is giving users the ability to put out their views in their local, preferred language. Our focus has always been the democratization of the internet. A large portion of Indians don't speak English or use the internet in English, and more and more Indians are now coming online; we now have about 700 million Indians connected to the internet. One thing we realized is that, while freedom of speech is essential, problems can arise on a platform. We have terms of service, acceptable use policies, and community guidelines, and our focus is to ensure that only content that is wholesome, fun, and enriching for users on our platform gets reach. If there is content that does not actually violate Indian law but is still problematic on the platform, for instance misinformation or simply bad content, we reduce the reach and the virality of that content.
That's in comparison to content that violates Indian law. If it's pornography or hate speech or content creating discord between religions or between communities, that's content we take down because it violates Indian law. Just for context, the Indian Constitution does not provide for free speech in its entirety; there are conditions on it as part of our Constitution, such as that you cannot create discord among communities or between India and friendly countries, et cetera. So our focus is to ensure that people actually have a space to speak, but that doesn't necessarily mean that content that is not good for the community gets reach on the platform. If users report a problem with content but it does not violate Indian law, in some senses what we do is reduce the reach and virality of that content.
>> VESZNA WESSENAUER: That sounds interesting. Is it working? Is this a best practice from your perspective?
>> BERGES MALU: It has reduced the instances of problematic content gaining virality on our platform and creating actual harm. We've seen harm being caused because of social media platforms, not just in India but in other geographies, too. Since we haven't seen that happen through our platform, I would imagine that it is producing some results.
>> VESZNA WESSENAUER: Thank you. And I would be curious to hear Abby's take on this, as someone who works on a global platform. I imagine you encounter very different and diverse experiences when it comes to this problem.
>> ABBY VOLLMER: Absolutely, yeah. One thing to emphasize is that, in a lot of cases, it's important to have a nuanced and ideally human review of the content, because context does matter. As you mentioned, in different cultures things can be interpreted differently. But in terms of thinking about a best practice, part of thinking about context is also where this content lives on the platform, and how somebody who is stumbling upon that content might view it. Traditionally, it might have been more common to think about the person posting the content: what are they trying to say, what's the intent behind it? But in trying to run a global platform at scale, more and more we find ourselves also thinking about the position of somebody who might just be stumbling upon that content and might not have that context. If the content lives in a place where there's a paragraph of text around it that can explain why someone is saying this word or using this symbol, there might be a justification for having it online. But if, for example, somebody uses a symbol that could be interpreted as hate speech as their avatar, even if they don't mean it in a hateful way, it's risky to have that posted on the platform, because based on how our product is used, if you comment on something your avatar will show up without the context and without your ability to explain how you're using the symbol. So over time, we've realized that's important to take into account.

The other thing I'll note in terms of best practices is that we think about the least restrictive means of acting on content. To give an extreme example, rather than banning somebody off the platform forever because they violated a term of service, which in rare cases might be appropriate, in most cases, if the problem is a line of code in a particular project, we ask the person to remove that line of code and let them know that if they don't, the consequences will be more severe. Maybe we work with them on a disclaimer or label or something to clarify their intent. So it's not cut‑and‑dried, kick the person off the platform or not; there are things in between that platforms can do to promote access to content while also protecting rights and the community at large.
>> VESZNA WESSENAUER: Thanks. I also wanted to say that if any of the speakers wants to follow up on someone's take, feel free to do that.
I think we'll definitely get back to this, because there's a lot to unpack here, but I also wanted to talk a little bit about what strategies Civil Society can take when it comes to content governance and the wider approach to content governance. I'm deliberately calling it content governance, not just content moderation, because I think it's important to also think about all the other ways that content can be governed on these platforms. For me, moderation is a bit too narrow, and there are a lot of other things that can be done. I'm from Ranking Digital Rights and we have our own approach, which I'll talk about, but I'm curious about Article 19. Article 19 has been conducting a lot of campaigns to encourage, or to force, companies to become better at how they treat freedom of expression and the right to assembly online. So, Vladimir, could you tell us a bit more about your experiences regarding how platforms responded to these campaigns, what worked, what didn't work, and what it is that you would really highlight for companies?
>> VLADIMIR CORTES: Thank you very much. And thank you to all the people who are here in the room; it's really great, this last session that we are having here at the IGF, and thanks also to all the people who are connected online.
At Article 19, yes, we have recognized how certain decisions coming from platforms can affect freedom of expression, in terms of takedowns of content that is, in other words, protected by Article 19 of the Universal Declaration of Human Rights, by the ICCPR, and by other international standards.
In that sense, we have been seeing how artists, activists, collectives, feminist groups, and LGBT groups are facing restrictions that we consider illegitimate in the decisions to take their content down. Just to mention one example: a journalist in Mexico from an indigenous community was posting in his own language, Zapotec, one of the more than 60 languages that we have in Mexico. And on the system of governance, I also agree with Veszna that it's not only content moderation; content moderation is a complex process with different steps and different moments that we identify. But these measures and decisions affect this diversity of groups, these minority groups. With these takedowns, at the beginning we identified that there was a lot users didn't know. It's like: okay, I violated the community standards, for example, at Facebook, but tell me which community standards, because there are a lot of community standards there. So it introduces one of the key elements around due diligence: how platforms can comply, and how platforms can explain more to users, so that users are not left defenseless, not knowing exactly how certain decisions are being taken.
And what we're seeing, by pointing out and exposing these different cases in which users were facing these kinds of restrictions or takedowns during protests or social mobilizations, is that some of these platforms started moving and started explaining more how they were taking their decisions, and how they are moving towards a human rights approach, not just acting on what they might dislike. We just have to remember how certain social media platforms were taking decisions at the beginning. It was like: okay, take down whatever you dislike or whatever you think affects you. Now they are being more robust in the decisions they are making.
Another key element is not taking decisions in opacity, but taking decisions in a way that users, even users who are not familiar with the legal and technical details, can understand. So it's also about giving and providing concrete examples: okay, how am I building this type of policy, and how am I taking these decisions? From the beginning, when I started working at Article 19, we were seeing this: journalists facing takedowns who didn't understand very well why they faced certain restrictions. And progressively, also in dialogue with social media platforms, we have been explaining how important it is that their policies and their terms of service be in accordance with human rights standards, as a threshold and as a principle to protect freedom of expression. Of course, we recognize that freedom of expression is not an absolute right, but any restriction and any action should be legal, necessary, and proportionate to address the different issues that we are facing and seeing on different social media platforms and other intermediaries.
>> VESZNA WESSENAUER: Thank you. You started talking about something that I think is creating a lot of tension: we have these global platforms, but we also have region‑specific or platform‑specific issues, and the global nature of these platforms makes it hard to address them. And there's also this trap, the Facebook trap. We're all very focused on Facebook, and it's really hard to actually talk about the other platforms, which are not necessarily as global or not as, how to say, like ‑‑
>> BERGES MALU: (?)
>> VESZNA WESSENAUER: Maybe. I'm looking at Allison and Brenda, and I'm curious how you see what is missing from the discussion when it comes to content governance. Allison, you're working at a platform which is not the classic social networking platform, but you still have to do a lot of content moderation, in a very different way from the usual social networking service approach. So I'm curious: what is it you're missing from the policy discussion? And I'm looking at Brenda because I know Brenda has a special take on how global platforms could or should be governed, or on what is maybe not in the right place in the discussion at the moment.
So I don't know. Allison, do you want to start?
>> ALLISON DAVENPORT: Yeah, I'll tee us up, because I think I have the more general approach to this. As you mentioned, the Wikimedia projects have a really unique content moderation experience compared to these large social network websites. Not only do Wikimedia volunteers create the content on Wikipedia, but they also create the rules for that content and do the content moderation itself. The Wikimedia Foundation does do a small amount of content moderation ourselves, largely due to legal regulations and for content that we don't want our volunteers to be handling, like CSAM.
I think another really unique way that our platform differs from these large social media networks, and one that's been particularly difficult when we come up against regulation, is that the individual Wikipedias are separated by language, into language communities rather than national communities, and they set their own rules within those language communities. That can lead to situations where, for example, Spain may be proposing a national law to regulate the internet, but the Spanish speakers on Wikipedia who set the rules for Spanish Wikipedia are from around the globe. Obviously we are unique, but I don't think this is such a unique content moderation method that we are the one outlier. Even on the large platforms you can find community standards and community moderation happening outside of the big platform rules; I'm thinking about small communities on Facebook or Twitch, where people have their own moderators for those small communities.
And so I think my biggest warning about these big regulations that focus on social media platforms is that they often assume top‑down control of content and top‑down content moderation practices, which run counter to how these community models work now, and work very effectively now, and which can actually impact how those models work.
>> VESZNA WESSENAUER: Thank you. Brenda?
>> BRENDA DVOSKIN: Great. I'd like to follow up on something that Vladimir said, and I guess it's the title of the panel: this idea that the best practice, the best thing that platforms should do, is to align their content moderation or content governance with human rights law, and that this is supposed to fit or work for every platform. Usually the legal, or semi-legal, argument behind it is that the UN Guiding Principles on Business and Human Rights set the expectation that companies will respect human rights.
I'm a bit concerned about this approach, actually. When people read the UN Guiding Principles as saying that platforms like Facebook, or really any platform, because the Guiding Principles apply to all platforms, so it could be Wikimedia or the New York Times comment section, should adopt international human rights as their own content rules, they're reading the Guiding Principles in a very creative way, and in a way that is very different from the original meaning the Guiding Principles have.
I can say a bit more about how that meaning has evolved and why I think that matters. It's not that it's wrong to change the meaning of a legal text; I don't mind that. But I can say why I think the new meaning is a bit concerning. Originally, what the Guiding Principles say is that if you're a company you should identify what human rights individuals have and respect them: do not infringe on human rights that individuals have. Now, this is the key question: do individuals have a human right to say whatever they want, to express themselves freely, on privately owned platforms, and to have their speech limited only in a manner consistent with international human rights law? I think the answer to this question is clearly no. The New York Times can have crazy rules for its comment section, as it does, like: do not post comments in all capital letters, we don't like it. That is obviously not consistent with international law. And if I post something in all capital letters and the New York Times deletes it or doesn't publish or approve it, they're probably not violating my human rights. I don't think I would be able to sue them. Definitely, governments should not pass laws telling the New York Times not to ban that comment because banning it would violate human rights. So I don't have that human right, and therefore companies don't need to respect it. We might like the standards that some UN treaties have and think, this is a great rule and Facebook should have this rule because it's great, but not because there's any kind of semi-legal expectation.
And the reason I think this is a concerning interpretation: I'm not an originalist; I don't think this interpretation is bad because it's not the original meaning. Meanings can evolve. But I think this meaning is concerning because whoever likes these standards and wants to have them applied to content moderation, instead of admitting that they have a normative preference, which is not everyone's normative preference, because some people would like to see stricter regulation of speech, more in favor of safety and less protective of freedom of expression than the UN treaties are, instead of saying "we like these standards" and acknowledging that they are exercising and wielding political power in imposing those standards on platforms, they say: we are just applying and doing what the Guiding Principles say companies should do, we have all agreed on this, everyone likes it, and no one loses here, because these are just the standards that everyone likes and everyone agrees we would use. And that is just not true. I think it's irresponsible to hide one's normative preferences under these arguments about supposedly objective standards that we're just applying to platforms. That's my main point about human rights and content moderation, and why something that sounds like it's obviously a good thing might not necessarily be one.
>> VESZNA WESSENAUER: That is a thought‑provoking take. So I want to invite the other speakers to follow up or reflect on Brenda's point. Is human rights a good framework? Or do you agree with her concerns? Berges?
>> BERGES MALU: I think this is something that other folks have been repeating: you don't have free speech within a private platform. If you go to a restaurant and you're speaking too loudly or using language that is unparliamentary, the restaurant does have the right to say, "Get out of the restaurant. It's private property." And they have their own terms, right? That isn't really affecting your free speech. You have a right to free speech in a public square. But if it is a private platform, I cannot go on any platform, be it one of my competitors or GitHub or Wikimedia or whatever else, and just say what I want if it violates laws or even the platform's rules. Right? Maybe GitHub doesn't want a product that allows copyright infringement or a product that allows hate speech propagation, whatever else. They have the right to take me off, and I don't think that's a violation of my human rights. It's the enforcement of their rights as a company to do what they want on their platform as they please. But should there be some sort of governing principle on what is possible and what is not possible on platforms? Possibly. Because otherwise you'll just have platforms that censor content because they don't like certain critical content or certain types of content. That can be problematic, too.
So there has to be a guiding principle about how this is done, but I agree with Brenda. You can't claim human rights if you're on private property or on someone else's private platform.
>> VESZNA WESSENAUER: Allison?
>> ALLISON DAVENPORT: Yes. This is a short intervention. I completely agree with what Berges was just saying. I think that one reason people are turning to human rights, however, is a recognition of the ubiquity of a lot of these very large platforms and the inescapability of their use in certain areas. Many people live much of their lives online. They use these platforms to get jobs, to interact with family, to make payments, to do really important things in their lives. And these platforms obviously have a lot of control over how speech is conducted on the internet. So I think that while it may not be the most compatible standard, the fact that these human rights standards are even being brought up shows the gravity of the situation we've gotten into, with the ubiquity of these services online and the ways they moderate access to different opportunities online.
>> VESZNA WESSENAUER: Thank you. Abby, Berges, and then I also want to say something.
>> ABBY VOLLMER: Yeah. These are really excellent points and a really interesting discussion. I think Berges' analogy to the restaurant is an important one. I agree companies are private and we're sitting in a place that, legally, we can regulate how we want, but where human rights law comes in, and where I find it useful for our company, is as a place for us to align with. If people are saying companies now hold this really powerful place of regulating speech in public or quasi‑public spaces, where do we begin? What can we look to as a guidepost? That's where human rights law provides us with something to base our rules on, and even in terms of due process, having a right to appeal and all of that, it's really important.
So I agree with Brenda that it's dangerous in a way to frame these as obligations under human rights law, as in a company must do X, Y, and Z under human rights law. But from where I sit at a company, I find it very useful to see how states are obligated to act under human rights law, to think of ourselves somewhat analogously in some situations, and to look to those rules, standards, and processes as a good place to begin, or maybe even emulate, in a way that makes sense for our platform.
>> VLADIMIR CORTES: Just briefly, first, on the analogy or reference to the restaurant: we have to think about the role of private companies in shaping civic space and the relevance they have for the public interest. That discussion might be relevant for a restaurant, sure. But I think it's quite different when we look at the role of these companies as the new governors, as some have called them, of free expression around the globe; just think about social media companies that have 2 billion users. So yes, they are private companies, but they are private companies that are relevant for protest, that are relevant for democracies, that are relevant for broader discussions, and they are companies that have to comply with certain rules and certain standards. And even when we are thinking about private spaces, just recently the United Nations Human Rights Committee released General Comment No. 37 on Article 21 of the ICCPR, on the right of peaceful assembly, and there are certain provisions that apply also to private spaces and recognize the right to protest in relation to these private companies as well.
So I will just make this remark: we should not think about private companies as something separate, a bubble that is apart where they can do whatever they want to do, but take a more integral and holistic approach. And there is certain guidance. I don't think it should be applied directly as if we were applying it to states, but there are principles, standards, and guidance for how companies should act to protect users. There is one element, for example, from the UN Guiding Principles on Business and Human Rights that I think is relevant: some of the principles mention this idea of the frustration that users may experience when a private company acts. And I think that's relevant when we are talking about takedowns or about how companies may restrict the exercise of certain rights.
Again, it's not about directly applying what the principles say, or directly applying what the American Convention on Human Rights says in Article 13. It's about certain standards and certain principles that we can read in light of these human rights standards.
And finally, just to mention, some social media companies are incorporating, for example, the Rabat principles on hate speech. We think this is important, again not as a matter of applying them directly, but because talking about hate speech is sometimes very complex, and it's a way of applying, for example, the six‑part test to really understand whether some message or post on their platforms may really amount to incitement, for example to violence, or whether it's something that is protected by freedom of expression. In the end, it's about keeping the balance, putting users at the center, and seeing how relevant these companies are in shaping civic space, democracies, and everyone's participation.
>> VESZNA WESSENAUER: Thank you. I think Berges wanted to say something, and I know Brenda probably has a lot to say by now, and then we'll start opening the floor to the audience as well. There's already one question, but I think Vladimir kind of answered it. And then maybe Brenda has more to add to that.
>> BERGES MALU: Sure. The only point I wanted to make, going back to Vladimir's point: I understand it is a civic space and platforms play a very important role in how people can actually speak out, and I think a key part of this is also transparency from the platforms on why they're taking the actions they do, and a way to appeal those actions. For the longest time, you would get booted off a platform and there was no real way to actually talk to someone at the platform, or to appeal, or even to ask why you were booted off. Something we've been doing for the past couple of months, and it's also part of the regulations now in India: each month we put out a transparency report on what pieces of content we have taken down and why, and we provide a breakdown of the different reasons, whether it's copyright or hate speech or something else. There's also a mechanism where a user can send a message. When they do, a ticket number is raised; we have to acknowledge that the message was received by us and provide a resolution within 15 days. We also have a grievance officer and a compliance officer, so you can appeal what the team says and go to the grievance officer, and if you don't like what the grievance officer says, you can go to the compliance officer. I think that's important for a platform. A lot of times you don't really know why you were booted off a platform: you try logging in and it just says you violated the standards, so you don't get to use the platform again. Or your post, which could be on critical public or civic issues, just gets taken down, and in your mind you don't know which community standard or platform rule was violated, or whether it was the government that asked for the post to be taken down. So transparency is all the more important. Even though it's a private platform and a private platform has its own rules, it needs to be more transparent with its users as to why it's taking the actions that it does.
>> BRENDA DVOSKIN: I remember you, Veszna, said a few minutes ago that you wanted to say something. I don't know if you wanted to go first.
>> VESZNA WESSENAUER: Berges kind of said what I wanted to say.
Finally, I can speak a little bit about RDR. What Berges was saying about transparency is what we think as well. Ranking Digital Rights produces rankings, not exactly annual, but we do produce rankings of digital platforms and telecommunications companies. We use the UN Guiding Principles and international human rights as our framework. But what we actually ask for is transparency, and we use that framework to define the transparency standards.
So we want companies to actually tell users why they are not allowing a certain type of content. What is the process of content moderation? What appeal options do users have? What is the appeal process? Can they follow up on an appeal? And our standards are really detailed. The freedom of expression category includes rules around user‑generated content, but also advertising content and ad targeting, and also governments' requests to censor content. So there are a lot of areas we cover. We also ask about practice in our indicators, but the baseline is that we want companies to be transparent. I think that is a good way to resolve this tension a little bit. Can we ask all companies to abide by human rights? Of course not; that is probably never going to happen. But we can start pushing them to be more transparent, and that is probably helpful for holding them accountable; it's probably the first step to holding them accountable. That's been our approach. We've seen a lot of improvement, but I have to say that lately, in the last cycle, what we see from the big platforms is what we call stagnation. So there is clearly a need for a stronger push towards companies, because if they're not even transparent, it's also very hard to come up with the right regulatory framework.
Brenda?
>> BRENDA DVOSKIN: I hear everyone's concern: okay, these are private companies, but that doesn't mean they can do whatever they want. I totally agree; I'm concerned about the same problem. How do we align what these companies do with the public interest? I think sometimes in these conversations people use human rights just as a shortcut for the public interest and don't give any content to what human rights mean. I can understand that; it's attractive because it sounds good and global, and there are so many human rights that it doesn't matter which rule you like better, you will be protecting a human right, whether freedom of expression or safety or dignity or what have you. But international human rights law is not just those values. It is specific balances between those values, balances between freedom of expression and safety that international law has, in a sense, certified as good. And those balances do not necessarily reflect the public interest. As a public, as a global society, we all want these companies to align with the public interest; we just don't have a framework that reflects what the public interest is. And the public also pushes back a lot against those human-rights-certified balances. Holocaust denial is a good example of that. There have been so many campaigns to have Facebook and YouTube and everyone forbid Holocaust denial, and that prohibition is not clearly compatible with international human rights law. And there are many other areas in which international human rights law and the public interest might be at odds in their regulation of platform governance.
So I'm concerned with the same problem: these companies should somehow incorporate the public interest and don't necessarily do so. But this solution, that they just adopt international human rights law, particularly when it means looking at international human rights law and choosing some standards while not choosing others, gives a lot of discretionary power either to these companies or to a body of experts charged with governing them. Either it gives them a lot of discretionary power surrounded by the legitimacy of human rights discourse, while actually they can just take whatever rights they like; or they actually take the job seriously and apply all of those standards, and then, okay, they have chosen one possible set of rules, but it's not necessarily aligned with the public interest.
I think the public is calling for much more protection of counter‑values to freedom of expression than UN treaties currently provide.
There are other problems, but I won't say them just now.
I hope it makes some sense. I hope you see I'm concerned with the same problem. I'm worried about how the conversation shifts power from companies to experts, not necessarily from companies to the public.
>> VESZNA WESSENAUER: Thanks, Brenda. We have ten minutes left. If there are any questions from the audience, please feel free to add your question to the chat, if you don't want to speak up. You can also just raise your hand.
In the meantime, I want to ask a follow‑up question. We haven't even talked about another tension, the driving force of some of these platforms, which is their business model and what keeps them running, and then if we ‑‑ oh, we have a question in the room. Sorry. I didn't see that.
>> VLADIMIR CORTES: Yes. We have two participants here. Yeah, two. Go ahead. Or one. Okay.
>> AUDIENCE: Hello. I'm Mohammad. I'm representing the Iranian Audiovisual Regulatory Body. Actually, over the last year, we've been very concerned about content diversity and also human rights issues. But the thing is that, honestly, we're living in the age of media concentration, and there are a lot of super apps out there, and they mostly decide what people should hear and should know. I consider that the requirement for media diversity is dealing with media concentration in the first place, and we have a lot of cases. How should we ensure that, with the rapid growth of super apps, which will consequently lead to media concentration, the voice of voiceless people can be heard? For example, with what's happening in China regarding the Uyghur community, a Muslim minority, no one hears about that, because it's constantly being removed. And we are dealing with these issues even in our jurisdiction.
So basically, what do you think? How should we address this concern? Thank you so much for letting me ask.
>> VESZNA WESSENAUER: Maybe let's take the second question, as well, and then see who wants to answer. Can we have the second question?
>> VLADIMIR CORTES: No. I think it was just one question.
>> VESZNA WESSENAUER: Okay. So is there anyone who would like to address this question?
>> VLADIMIR CORTES: Can I jump in quickly on this?
>> VESZNA WESSENAUER: Sure.
>> VLADIMIR CORTES: Thank you. Yes, there are other types of avenues that we can take, and on concentration we are also thinking about two proposals. One is a multistakeholder approach, which we call social media councils, as an oversight body for governments, for states, and for the social media companies. The other one is unbundling; actually, we're going to have a session on unbundling. Unbundling is a competition remedy, a way to open the market by somehow separating the hosting of content from content moderation. Perhaps that's going to be relevant for regulators, for competition regulators, as a way to somehow decentralize content moderation, recognizing also the diversity of these systems of governance and the asymmetries and differences that different companies may have.
But just to say that, yes, we strongly believe that human rights can provide a certain path and a certain pattern, while recognizing these concerns. I totally agree with Brenda that sometimes this can be a facade: companies saying that they're complying when in the end they're just not doing it. And for sure, when we're thinking about and recognizing diversity and these other voices, perhaps this proposal on unbundling, separating the hosting and the content and establishing a measure on the competition side, will open another type of discussion and another approach on how to guarantee this diversity, rather than just accepting the big concentration that we are already seeing now and how it may affect other types of rights. Thank you.
>> VESZNA WESSENAUER: Thanks. Are there any other questions for the speakers? If there are no questions, I think we can use the remaining six minutes for a round of concluding remarks.
So today's session was about best practices in content moderation and human rights, and in light of what we discussed, I want to ask our speakers to conclude the session with a very short, one‑minute wrap‑up and say what you think is the right approach or what you want people to remember; it can also be a question. So I'm going to call you by your names, and we're going to start with Berges again.
>> BERGES MALU: First off, thanks to Peter for organizing all this. That aside, going back to my key points: one, transparency is very key from a platform perspective, for every action that you're taking, whether it's taking down content or the reasons a certain feature is being launched, whatever else. Users depend on platforms such as ours for sharing content and their thoughts with the world, and a lack of transparency is very problematic. That's one. The second is that we strongly believe that, while context is king, and giving users the ability to post content in various languages is essential, part of being on the internet, and it democratizes the internet, the ability to say whatever you want is not part of that deal. As a private platform, we have the right to decide what content should go out there and get virality, and to take down content which we believe is harmful either for the common good or for other users on the platform.
My third point is that there need to be more global discussions around how the internet gets regulated, because you're seeing different countries coming out with regulations for intermediaries and platforms like us. You're seeing the U.S. trying to look at regulating the social media space; India has tried some of it. You're going to see that happening more and more unless there is some level of global consensus on how we can take this forward.
>> VESZNA WESSENAUER: Thank you. Brenda.
>> BRENDA DVOSKIN: Sorry. Just to thank everyone for organizing the panel. And thank you all for your work at the companies, for being here and explaining what you do and why. Thank you to the Civil Society organizations for putting out proposals. I think those public justifications, those public proposals, are what allow the public, academics, and organizations to have a more informed idea of the options out there, to contrast them, to see what works and why, to push on the things that should be better, and to learn from the things that are working. So just thank you all for your work.
>> VESZNA WESSENAUER: Thanks. Abby.
>> ABBY VOLLMER: Yes. Thank you, Peter, for organizing; Veszna for moderating; Brenda, Berges, and Allison for being great panelists; and everyone here for joining.
Just to wrap up, in my view, human rights provide a really useful place for companies to look. I agree with the concern around the de facto place that private companies hold today in terms of controlling speech, and that's where looking at how governments have checks on them matters; it's not so easy to just delegate that kind of rule‑making to companies, which don't have those same kinds of checks. So I think it's important to be careful about how we say what must happen or what should happen. But I think using human rights is useful for a lot of companies. And in applying those standards, thinking about context, as Berges mentioned; the least restrictive means, which goes along with the proportionality that Vladimir mentioned; and transparency, as Berges mentioned many times, and Veszna, of course, as well; those are some really useful elements of how platforms can look to moderate content in a way that helps protect users' rights and also echoes some of what human rights law has to offer.
>> VESZNA WESSENAUER: Thank you.
Allison and Vladimir, you have one minute.
>> ALLISON DAVENPORT: Quickly, I thank everyone that everybody else thanked. My main point here is that when we're regulating, we want to be flexible towards different types of content moderation models, and human rights is just one aspect of what we can do to curb abuses online. Obviously, we also mentioned transparency, harmful business models, things like that. I agree that human rights is not the end‑all answer, but it is a quiver in our arrow, or an arrow in our quiver.
>> VLADIMIR CORTES: Thank you very much to Peter and Abby and everyone for organizing this session, and to everyone who is here now attending in person. I would just say that any approach to the challenges of digital ecosystems needs to preserve the multistakeholder approach. I really appreciate the participation of companies, Civil Society, academia, governments, the technical community, and the Private Sector. We need to keep up this dialogue and keep thinking about other alternatives. I also encourage everyone to review the principles regarding transparency, accountability, explainability, and the other elements that we think are important, which companies should recognize as a starting point and as a way of protecting users, putting them at the center, and recognizing the relevance for civic space, from GRULAC. Thank you very much.
>> VESZNA WESSENAUER: Thank you everyone. Our time is up. So thanks, everyone, for joining. And I think there will be a report back from Peter soon. So thanks again.
(Session ends at 11:31 AM Central Time.)