
IGF 2020 – Day 10 – WS254 The interaction of platform content moderation & geopolitics

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

 

>> JYOTI PANDAY: Thank you, Secretariat.  Welcome, everybody, to workshop 254, "The Interaction of Platform Content Moderation and Geopolitics."

My name is Jyoti Panday, and I am with Georgia Tech.  We work on Internet governance challenges at the IGF and elsewhere.  The goal of this workshop is to deepen understanding of content moderation at the intersection of state and private power, and to foster knowledge sharing about content moderation in different jurisdictions.  We have brought together representatives from technology companies and civil society to discuss new ideas for introducing transparency into content moderation policies.

In January 2020, Qasem Soleimani, a commander in the IRGC ‑‑ which has been designated a terrorist organization by the US, Saudi Arabia, and others ‑‑ was killed.  There was a strong reaction, with millions of individuals and organizations sharing their views about the killing.  With an estimated 24 million active users, and as one of the few western-built social media apps not banned by the government, Instagram is an important communication tool in Iran.

In the days following the event, accounts of Iranian newspapers, news agencies, at least 15 Iranian journalists, and several human rights activists and celebrities were censored or suspended by Instagram.  Its parent company, Facebook, clarified that it removes content commending or supporting groups labeled foreign terrorist organizations by the US State Department.  Although the company later restored some profiles, posts containing informational references to Soleimani's death were deleted permanently.  The story of Instagram's moderation around Soleimani's killing demonstrates several things about Internet platforms and user speech rights.  There is a dependence on platforms like Facebook and Twitter, which serve as the online public square.

It shows how platforms like Instagram have unprecedented capacity to monitor and regulate individual expression, as demonstrated in the censorship of posts published by media professionals and unaffiliated users.  Facebook's decision to go beyond US terrorist sanctions lists highlights the tensions that can arise when balancing standards defined and enforced within platforms against global standards or those agreed by the industry.  As more and more social interaction moves online, and given the role that private entities have in deciding what narratives are available in the public discourse, it is crucial that we examine the content moderation practices of platforms.  Platforms cannot claim to be neutral arbiters without considering the broader significance of their actions.

Whether we consider action by Facebook and Twitter against accounts allegedly backed by the Chinese government to spread disinformation about the Hong Kong protests; or recent action by the Vietnamese government, where Facebook's local servers in Vietnam were taken offline earlier this year, slowing local traffic to a crawl until Facebook agreed to significantly increase restrictions on anti-state posts for local users; or Facebook's informal agreement with the Israeli government to work together to address incitement on its platform ‑‑ it is increasingly apparent that transnational social media companies, which are not elected bodies, are emerging as an influential force in international and geopolitical affairs.

Platform content moderation standards, business practices, and relationships with nation states effectively arbitrate which narratives can reach the global public.  While content moderation is an essential function of a platform's business, the policies and practices of global platforms carry with them the capacity to reshape the dynamics of public discourse, changing the way political power can be organized and exercised across borders.

In the absence of transparency and accountability, the rules and procedures for content moderation established and enforced by private platforms pose a threat to democratic culture: they can severely limit participation and impact the individual rights of platform users, putting minority groups and marginalized communities particularly at risk.

This workshop will examine how platforms' content moderation standards are reconfiguring responsibility, accountability, and power in societies.  Focusing on content moderation cuts through platforms' claims of neutrality and of deserving certain legal rights and obligations.  It allows us to understand how platforms can uphold consistent policies in the face of competing societal expectations, different experiences, cultures, and value systems.

By understanding moderation as a fundamental aspect of a platform's service, we can ask new questions about power in society.

This workshop seeks to deep dive into one specific aspect of content moderation: uneven enforcement of community standards.  Panelists will examine content moderation practices to highlight how constantly evolving standards and guidelines can contribute to differential treatment of similar content on platforms.  The aim of the discussion is to draw attention to how inconsistent enforcement of content moderation standards can reinforce existing power disparities.  Uneven enforcement can lead platforms to become proxy battlegrounds in which disputing narratives and activities emerge and collide; content removals, account suspensions, lack of procedural transparency, and algorithmic bias all contribute to eroding the trust of users on both sides of these opposing narratives.

To better understand these issues, we bring together experts with regional perspectives to examine how platforms' content moderation standards are reconfiguring responsibility, accountability, and power.  Since we have 90 minutes, we have structured the discussion in three parts.  I will open the discussion with a round of questions.  We will follow this with panelists' reactions to questions posed by me, and we will close with Q&A from the audience.

I encourage attendees to use the chat to discuss as we move along, but please use the Q&A to pose questions for the panelists.  I would like to introduce Pratik Sinha, who is the founder of Alt News.  Pratik, what has been your experience of the moderation of problematic content and behavior on platforms run by large social media companies like Facebook and Twitter?

>> PRATIK SINHA: Hello.  Thank you for having me here.  As a fact checker, I have been watching what is going on in the US ‑‑ with platforms like Facebook and Twitter monitoring what Mr. Trump says ‑‑ and nothing like that is happening here.  Basically, what people in India feel ‑‑ and I'm speaking on behalf of the fact-checking community ‑‑ is that countries like India, Pakistan, and Bangladesh are not a priority for companies like Facebook, Google, and Twitter.  And this is a point that we have raised again and again.

The US, being the home country of Facebook, Google, and Twitter, remains a priority, while similar issues that arise in India, Pakistan, and Bangladesh are not taken care of.

Basically, what happens is that there is also a very narrow worldview at work.  I want to start by quoting Mark Zuckerberg.  He was speaking to NBC when the issue of fact checking politicians came into the picture ‑‑ Facebook had said that they were not going to fact check politicians.  His explanation was that the decision not to fact check politicians comes from the point of view that politicians are already overly scrutinized, and that is where the basic problem lies.

You know, the people who are heading these organizations have a very narrow worldview.  To make a statement that politicians are already overly scrutinized disregards what is happening in countries where there is not much press freedom.

In India, hardly any of the mainstream media news organizations, whether on TV or in print, fact check politicians ‑‑ especially not the Prime Minister or the Home Minister, who are politically the most powerful people in the country.  So it comes from a very narrow worldview.  And I can say that for many countries with similar issues of press freedom: Pakistan, Bangladesh, et cetera.

So, you know, the problem starts at the policy level, with the people who are making policies in these organizations ‑‑ and I don't want to just talk about Facebook.  This is something that Zuckerberg stated in an interview, and it showed his ignorance of how world politics works, but the issue is not just with Facebook.  The issue is across Facebook, Twitter, and Google: they don't seem to understand the impact that hate speech and misinformation can have in countries which have limited press freedom, which have governments with authoritarian tendencies, where the media steps back, civil society steps in, and then a huge amount of propaganda is launched against civil society activists without the social media platforms stepping in to control it.  From the point of view of a fact checker and somebody who works on both misinformation and hate speech, what happens is that politicians first create a divisive environment through misinformation and hate speech.

The platforms often don't act in the case of hate speech ‑‑ for example, the latest case of the former White House advisor who called for the beheading of two US officials, where Facebook decided not to take down his page.

Now, that is something that will be discussed in the US, because it is Mr. Bannon calling for beheadings.  But similar things that happen in India won't even be discussed.  Things that happen in India apparently are not of that much value.

And these people will not be censored.  And when we fact check them ‑‑ take a very recent case of a minister in the Indian state of Assam.  He claimed that certain people in the opposition raised slogans of "Pakistan zindabad," which is equivalent to "hail Pakistan."  We fact checked it.  Facebook put a tag on it and then took it away, saying, we don't fact check politicians.

The first organization to fact check it was Alt News, and the other was BOOM Live, which is a Facebook fact-checking partner.  But these two organizations have limited reach, and none of the mainstream media fact checked it.  So people think that this actually happened ‑‑ that this is a party which has Muslims as the majority of its backers, that they are the ones raising slogans in favor of Pakistan, and that they are, quote/unquote, anti-nationals.

This is how it plays out.  If your view of politics is one formed sitting in Palo Alto, or wherever these policymakers are sitting, and there is not enough of a local view coming in as to how their decisions impact democracy in different countries, then we get into a situation where countries like India are going from bad to worse.

And, you know, just now, the election in the state of Bihar got over.  Bihar has one-third the population of the US.  Facebook and Twitter went out of their way to ensure ‑‑ I'm not saying that they did a good job, but at least there was an attempt to ensure ‑‑ that there was less impact of misinformation.  But at the same point in time, a chief minister went ahead and gave a speech in Bihar stating that, of those who have inter-religion marriages ‑‑ his claim is that a lot of those are propaganda, that a lot of those are not genuine relationships ‑‑ if they indulge in that, then ‑‑ and he used a phrase ‑‑ (Speaking native tongue).

The phrase means, we will make sure that they are killed.  It is a euphemism for killing someone.

And something like that happens, and nobody even talks about it.  And he continues to have his page and broadcast things.  It's a law and order issue as well ‑‑ I'm not denying that India has a political problem, and not everything can be handled by platforms ‑‑ but platforms have increased responsibilities in countries which are moving towards authoritarianism.

Yes, those are my initial comments.

>> JYOTI PANDAY: Thank you.  We have Varun from Facebook here, and I'm sure he will want to respond to some of the issues that you raised.  But before we get to Varun, I would like to introduce Ms. Marianne Díaz, with Derechos Digitales.  Marianne, regarding the lack of deference to local context that Pratik pointed out, do you see similar situations playing out where you work?

>> MARIANNE DIAZ: Yeah, I actually want to start by echoing one of the main points that Pratik made, which is that we don't feel like a priority.  We are not a priority for these platforms.  Particularly from a Latin American standpoint, what I have found as a researcher is that there are issues of both over-moderation and under-moderation.  There is discourse, for instance, that has been taken down because somewhere else certain language is considered a slur.

But in the context in which it was used, it is just a friendly conversation.  It happens a lot.  I'm from Venezuela, and certain language that is in ordinary everyday use in Venezuela can be considered a slur in Spain, for instance ‑‑ but this is a conversation happening between two Venezuelans.  So there is no reason for accounts to be suspended, which happens like every week to friends of mine while they are talking to me.

That's one thing.  On the other hand, we have under-moderation, which, in the broader perspective, is a way bigger issue.  We can't really flag content for misinformation from Latin America, because that's like a US priority.  That's for US elections.  Somebody may have noticed this: the hashtag is just "Elections 2020," as if it's the only election happening this year.

One of the projects I'm currently working on is monitoring social media content regarding the upcoming Venezuelan elections.  I will use the word "elections" ‑‑ not really elections, but okay.

One of the things the platforms have said to us is that they can't fact check the content that we bring to them ‑‑ even though it is passing through a filter, as we are researchers doing this work ‑‑ because they can't really fact check content from Latin America, or from anywhere other than the five key countries, because they don't have the capabilities.

And Pratik was saying that this starts at the policy level.  For me, I believe that this is mainly an economic issue.  And when I say it's an economic issue, I mean that it responds to the economic model of the platforms.  The platforms are looking for a certain homeostasis, which requires that enough content is left up for people to engage, but also that some content is taken down ‑‑ even though it should be protected by freedom of speech standards ‑‑ just because it makes people uncomfortable.

And this responds to the goal of having the largest number of users engaging on the platform at a given time.  The fact that this responds to economically driven incentives, and not to freedom of speech standards or human rights standards in general, is causing many of these issues.

So you have that tied to the fact that these private places are performing a public service function ‑‑ they are acting as public spaces, performing functions like those of a town square.  They are private spaces, but they cannot be treated entirely as private spaces, because if we allow that to happen, then this economic model is going to basically shape the policies.  And so, one or two steps before we get to the policies, we have to think about why these policies are in place.

And so for a platform like Twitter ‑‑ I'm very focused on Twitter because I'm very focused on Venezuela, and Twitter is big in Venezuela ‑‑ misinformation is rampant on other issues, and it's unbelievable that I can't, for instance, flag as misinformation something regarding COVID, which is a global issue.  It's not a Venezuelan or Chilean or Brazilian issue, and flagging it should be allowed, because these platforms do not operate at a national level.  This is global.  It can be really hard to distinguish where specific content is coming from ‑‑ it's a global, Latin America-wide forum.  Content just gets retweeted and shared, and information comes and goes.

And yet they try to apply these policies in that way.

And so what I think is that as long as the economic model of these platforms remains the same, these issues are going to remain, no matter how much we change the terms and conditions, and no matter how much we speak with the platforms to shape policies in a different way.

>> JYOTI PANDAY: Thanks, Marianne.  We will touch on the issue of the lack of local resources in more detail as the conversation moves forward.

But it's interesting that you pointed out that when platforms take a hands-off approach and say that they can't intervene, governments can step in ‑‑ and we are increasingly seeing governments seeking to regulate platforms, with Germany being a country that has introduced a whole wave of legislation, or is considering introducing legislation.  We are very glad to have Amelie Pia Heldt, who is a researcher with the Hans Bredow Institute.  Hi, Amelie.  Has the intervention by the German government to protect users against hate speech and provide more clarity on the way platforms handle and moderate unlawful content been successful, in your opinion?  What is your assessment?

>> AMELIE PIA HELDT: Thank you very much.  Thanks for having me.  What I mean to say as an introduction will resonate pretty much with what has already been said.  If we look at how the regulatory context actually shapes content moderation, I would say it is really decisive: most of these companies come from Silicon Valley, so we have a very US-centric approach, and that goes back to constitutional rights ‑‑ you only need to look at the First Amendment and the fact that in the US, Congress is not allowed to regulate speech, or at least not in a content-based way.

And this model has been exported.  What we see in the comparison with Europe is that, first of all, we have speech-restricting laws coming from criminal law.  The second thing is the separation of private and public or state actors.  There is the same separation here, but private actors can become sort of the functional equivalent of state actors in certain situations.  And that would also lead to a horizontal effect of freedom of expression, which might be applicable between platforms and their users.

So there is sort of a pushback against these First Amendment-centric policies.  What I can talk about, on the more formal side, is the EU pushing back with laws and regulations, and the very first came from Germany, because Germany adopted, in the summer of 2017, the Network Enforcement Act, NetzDG, which makes it mandatory for platforms to have a complaint tool for users to flag content that would be forbidden under the law.

They also need to remove unlawful content within 24 hours, and they have to publish transparency reports twice a year.

Now, how effective has this law actually been in the past two years?  I'm pretty skeptical, because we are still lacking data.  We know how much has been removed according to the law, but ‑‑ say you take incitement to violence ‑‑ when Facebook or any of these companies says in its transparency reports (and I really mean the NetzDG reports, not the general ones) that it removed X cases because of incitement to violence, we don't know if these cases would also have been removed under community standards.

So there is no way for us to know how much influence the law has had on the community standards.  Moreover, Facebook also sort of tried not to apply the law: they hid the NetzDG complaint tool within the legal information section of the platform, so users didn't have easy access to it.

That ended up with Facebook reporting, I think, something like 500 cases in the first half year, which was ridiculous.

Obviously, they still remove content if it is incitement to violence, but if they do, it's under the community standards.  So does the law have any effect or not?  One positive thing, I guess, is that the platforms now have way more content moderators ‑‑ people who are trained to actually look at German law and know if there has been an infringement of German law.

Now, this goes back to the whole question of which part of the world is important enough in their eyes to have content moderators capable of actually checking whether there has been a breach of the law.  And that is where investing in more content moderation ‑‑ in moderators who are trained, who know the cultural differences and also the legal differences ‑‑ plays a big, big role in this conversation.

And it's an investment that I guess not everyone is ready to make.

And that would require, I guess ‑‑ well, content moderation has become more of a topic in the past years because we know more about how the moderation works, about who the moderators are and where they work, and that theirs is probably the cheapest workforce in those companies.

But if you actually see moderation as a service ‑‑ as something that users value, knowing what type of content this is, why they see it, whether it has been fact checked or not ‑‑ then it brings another kind of value to the platform, I would say.

So going back to the NetzDG: the evaluation is not really helpful, I would say.  There is an amendment now that will probably be adopted soon.  It increases the transparency obligations for platforms.

Unfortunately ‑‑ and that's a final remark ‑‑ platforms will now have to disclose to what extent they have shared their information with researchers, and I think it would have been a good moment to actually insert a sort of right of access to data for research.  That would have been really helpful, because more data makes better research, and it would actually help regulators to make better-informed laws.

So that's the view from Germany, or from Europe.

And we will see more of this with the Digital Services Act coming up next month.  So, yeah.

>> JYOTI PANDAY: Thank you for raising so many important issues.

Over the past few years, we have seen increasing attention being given to the role of human content moderators, with scholars like Sarah Roberts conducting extensive research to bring the conditions of their work to the public's understanding.  But you also talked about research and the importance of having access to data to understand the impact of these policies.  And there have been some recent controversies where Facebook has attempted to ‑‑ I don't want this panel to pick on Facebook as a platform, but in terms of its scale and influence, there are very few platforms that match up.

So platforms taking back control over their datasets is a constant issue that keeps coming up, and I'm sure we will discuss it more.  Varun, since you are here as a representative of a platform, do you want to talk about some of the challenges of enforcing content moderation decisions?  How is Facebook tackling moderation on platforms like WhatsApp that don't have community standards, for example?  What is the role of local resources when you are drafting policies or enforcing them?  The floor is yours.

>> VARUN REDDY: A lot of important points have been raised by the panelists here.  Thank you so much for having me.  I will focus my initial intervention on transparency around the enforcement of Facebook's content policies and standards, and also touch briefly on how we work local context into both policy development and enforcement.

As you know, Facebook serves more than 2 billion people around the world, enabling them to express themselves freely on the platform across many languages, cultures, and countries.  We want to ensure a platform that is safe and makes people feel empowered to share what is important to them.  To enable this, we take our role in preventing abuse of the platform quite seriously.  We have developed community standards.  These standards set out what is and is not allowed on the platform, and I'm sure the panelists are familiar with them.

These standards are not developed in a vacuum.  We speak to experts both inside and outside the company ‑‑ experts in fields such as law, human rights, public safety, and technology.

And we don't consult experts in just one country or one region.  We reach out to people with relevant expertise around the world.  We have a stakeholder engagement team that speaks to these experts as necessary.

This is also reflected in the localization that goes into this one set of global standards.  For example, take India ‑‑ I'm focusing on India because Pratik mentioned that Facebook supposedly doesn't care about countries like India, Pakistan, and Bangladesh.

If you look at Facebook's hate speech policy three years ago, it had a list of protected characteristics.  These include race, nationality, ethnicity ‑‑ characteristics along those lines, demographic attributes that are difficult for people to change.  We heard consistent feedback from our community and from stakeholders within India that caste is an important dimension of hate speech in India.  And so we engaged experts on caste and activists working in that space to understand the issue, and we included caste in Facebook's hate speech policy.  That is how we build localization into this seemingly single set of global standards.

And Marianne brought up the important topic of slurs ‑‑ how the same word can be benign in one country but very derogatory in another.  The usage of a slur itself can also carry different meanings.  For example, certain words have been reclaimed.  "Dyke" is a word that has been reclaimed, and there are other words that are constantly being reclaimed, with new meanings emerging for those words over time.

So we keep an eye on this.  We track these trends on the platform, talk to experts on the ground, and then work those findings into our policies.  We maintain slur lists for different countries and different languages.

The challenge comes in when we are looking at enforcement.  Often when we enforce, we are limited by context: we can only factor in the specific piece of content ‑‑ the context in which that word has been used within a post or a comment.

That then introduces its own challenges in terms of understanding: was this slur used in a positive context or a negative context?  If it's clear, yes, we do go ahead and enforce according to the policy.  And if it's unclear, then sometimes we default to safety and remove the content, and sometimes we default to speech, depending on the slur and the context.

Enforcement is also not done by someone sitting in the US with no context about India and no language skills.  We have over 35,000 people who work on the safety and security of users on the platform.  These teams are based in offices around the world.  They speak over 50 languages, and they are equipped with the tools and training to enforce the policies objectively and consistently.  And we have algorithms to check the quality of this enforcement.

We also rely heavily on AI and machine learning systems, and they do come with their own challenges.  There are certain things that machines are quite good at detecting ‑‑ nudity, graphic violence ‑‑ where machines can do an excellent job, probably better than humans, of enforcing according to the policy.  But when it comes to hate speech and bullying and harassment, machine learning can only go so far, because context plays an important role.

And this is where humans come into the system, and how we maintain a unified set of standards.  Over and above this, Amelie made an important point ‑‑ I take it as a backhanded compliment when she mentioned that it's not quite clear whether Facebook is enforcing the law or taking down the content under its community standards.

It shows that the law and our community standards don't need to be adversaries.  They can work synergistically.  There is certain content that is harmful, and there is no debate about that; we don't have a quarrel there.  And I also want to go back to Marianne's point about economic incentives not being aligned with the need to take down harmful content on the platform.

I understand that concern, but that's not something I have seen within Facebook.  It is in our interest to ensure a safe and welcoming environment for our users to come onto the platform, engage, and share what's most important to them.

We have seen that when there is hate, toxicity, and other kinds of harmful content, it doesn't enable people to build communities and be empowered.  So it's not true that economic incentives stop us from going after bad content.  Apart from being the right thing to do, taking down harmful content is also aligned with our economic incentives.

So there are synergies to be found between local law and regulation and the platform's policies.

And if we have taken down content under our own policies without having to invoke or rely on the German law, then that's a testament to the fact that we take removing harmful content from the platform quite seriously, and that our community standards often go above and beyond many local legal requirements.

For example, on bullying and harassment: no country's legislation defines bullying and harassment in as granular a manner as Facebook's policies do.  And I can point to several other policy areas where Facebook's policies far outpace any local law in any country in terms of the restrictions.  Those are the points I wanted to make.

One last point.  Sort of the underlying tension here is: we don't know what Facebook is doing, we don't know what their motives are, we don't know what their systems are.

I would like to draw this panel's attention to all the transparency initiatives we have taken over the years.  One of the first transparency measures was publishing data on government requests for user data.  This goes back to 2013.  Since then, we have published and updated these numbers twice every year.  The initial focus was only on government requests for user data.  We have since expanded this report to include data related to the volume of content restrictions based on local law and to intellectual property rights infringement, and we have also started putting out numbers related to community standards enforcement.

And we have been updating these numbers twice a year, and since August 2020, we have committed to producing quarterly reports.  So over the years we have only increased the transparency as we built up our systems and figured out what is the best and most useful information to put out to people.

The other metric I would like to draw this group's attention to: when we started the community standards enforcement report, we were releasing numbers related to just six policies, only on Facebook.  The latest report has numbers related to 12 policies on Facebook and 10 on Instagram.  So we are constantly increasing the metrics that we are publishing and being transparent about, as we improve the systems at the back end and can report these metrics in a consistent manner.

We had experts in law, governance, economics and so on look into the metrics we were releasing and advise us on whether we were on the right path or not, because when you are putting out numbers, they probably won't be useful if you are not putting out the right kind of metrics, if you are not putting out accurate metrics.  So this expert group went into our disclosure reports and came back with a set of recommendations on where we were and were not on the right path, and they asked us to include additional data, which we have included in our latest reports.

And beyond getting this expert panel to advise us on whether we were on the right track and whether we were disclosing meaningful metrics, we also committed to having our numbers audited by independent auditors.  We hope to have them on board sometime next year and then share the results once they finish their audit.

So these are transparency measures that we have been building over the years, and we continue to push the envelope and, you know, be the industry leader in terms of explaining what we do to keep the platform safe, both to the general public as well as to regulators and the civil society stakeholders who are here.

>> JYOTI PANDAY: Thank you.  That was a pretty useful overview.  I hope at some point you will touch upon WhatsApp and encrypted systems and private groups, and how these community standards and transparency standards are, you know, applied to less open communication channels that Facebook controls.

But moving on to our next panelist, I'm excited to introduce Professor Tarleton Gillespie, with Microsoft Research and with the Department of Information Science at Cornell University.  Professor Gillespie is not only cited in almost every research paper on content moderation that I have come across in my career, but is also the author of the influential book "Custodians of the Internet."

Welcome, professor.  As someone who has been tracking the evolution of content moderation for many years now, what are some of the broader challenges around transparency, accountability and access to information regarding the operations of the biggest Internet platforms?

Another question I hope that you will reflect on is how the growing securitization of Internet speech, around terrorism and elections, impacts the state of online speech.

>> TARLETON GILLESPIE: Thank you so much.  I'm really honored to be a part of this conversation.  I have been paying attention to this for a while, but it's so helpful to get to hear about this perspective from many different places in the world.  It's one of the failings that we know platforms have struggled with, and it's one of the failings that scholarship and policy struggle with.  So thank you for organizing this.

There are sort of three points I will make, and one I will go through quickly because it echoes one of the things that was said before.  The first thing that I think is important to remember is that the platforms are in their adolescence, and I don't mean that in any way to excuse what I see as continued missteps.

What I mean is to say that the kinds of things that we are asking about, both asking of platforms and struggling with, are the kinds of things that, if we look at the history of the modern journalism industry, have taken 200 years to sort out, and we haven't finished.  It's a reminder that these things move slowly, and it's what helps me reconcile the points that I have heard so far: on the one hand, the concern that we heard Pratik raise, which I completely agree with, that the large platforms don't take seriously the communities far from the US and their specific political needs, and on the other, what Varun was saying about what Facebook is doing in response to this.

The worldviews that Pratik was talking about are long-standing, and they take a long time to change.  That means we need to push them to change, right?

But, you know, when we hear that Facebook is now thinking about caste as a vector through which there could be hate speech and inequity, on the one hand, that's terrific.  I'm glad that's there, and in some ways Facebook is leading among its peers in addressing that.

But to say that a social media platform has been in India for more than a decade and is only learning that now is kind of an astounding reminder that the worldviews upon which these platforms start are incredibly persistent, right?  That is sort of shocking, and yet not surprising, that it would take that long to get there.

But it's important here, and I want to echo something Jyoti said: we focus on Facebook, and I'm glad that Varun is here to answer for Facebook, but we have to think more broadly if we are going to think about what type of policies should exist and ask the questions that you are asking with this panel, which is what kinds of problems happen when platforms and states partner up or political pressure is exerted.

You know, in some ways Facebook is an important object for us to consider, along with YouTube and Twitter and TikTok and the handful of others that we talk about all the time.  But we have to think carefully about how these arrangements and problems apply to the next platform, the smaller platform.  We have to think about how they apply to web hosting and cloud computing and, as you were pointing out, encrypted personal networks.  Each of these raises the problem differently.  And the next company, which doesn't have the political impact of Facebook or YouTube or Twitter, and doesn't have the resources or even the history of having thought about this, will be even less prepared for the political pressure that can come, or for the problems that will spike in a country that they have just not conceived of well.

So the first point is that platforms are in their adolescence and this is an incredibly difficult question.  We can see that we have over-moderation and under-moderation.  We have concerns about what happens because the platform chooses to moderate and what happens because the platform is obligated to moderate.

We know there are concerns about what happens when humans do the moderating and when software does the moderating, and we talk about a few platforms when really we have to think about many.

The second point I will move through quickly because it's been said clearly and very well by the other panelists.  These platforms that we're concerned about now began in the US, and began with an American political framework as their ground.  And let's add, maybe, a sort of optimistic or youthful version of American sensibilities.  So the idea that these are spaces where, if the platform gets out of the way, the enthusiasm of web users will produce fantastic conversation is lovely, but sort of painfully naive.  And it has taken a long time to recognize not only that, even in the best conditions, even in the American conditions, that's an optimistic view, but that it was deeply unprepared to understand how these services would play in different cultures, in conflict environments, in places with different political tensions, racial tensions, what have you.

And it also means that some of the things that the platforms did early on to think about their obligations around political violence or terrorism were structured around the things that the US was focused on early on.  So they were very focused on Al Qaeda and ISIS, and not on understanding the whole array of forms of political violence and extremism that we need to worry about.

Even in the face of clear international missteps, and you can point to Myanmar, and you can point to manipulation coming out of Russia and other places, I think even in the US the platforms are only beginning to really change their perspective.  And I agree with Pratik when he brought up the question of how platforms address, you know, the president of the United States, and the idea that platforms have to think of themselves as standing against the pressures of political leaders.  That's a very new thought for them.

It shows how long the platforms imagined the bad behavior they had to deal with as trolls.  It was imagined to be misogynistic harassers and spammers and maybe foreign interests, and that's a US-centric way to think about it, but underneath was a fundamental belief that users were acting authentically.  I think that was a persistent notion, not just American, but one that emerged out of early dreams of the web.  And it's made it very hard to recognize that even public figures might want to misuse the platform, if that's even how I want to describe it.

Here's the third point I wanted to make because I haven't heard it yet and I want to add it to the mix.

You know, we know that content moderation over the last 10 or 15 years at these platforms has grown in scale and complexity, in part because the platforms are trying to respond to the immense scope that they have taken on.  And one of the things that's happened for the biggest platforms that we are talking about is a kind of move towards an industrial approach to moderation, right?  Tens of thousands of moderators, software running constantly, overseen by a policy team that mostly deals with rule changes and emerging problems and public backlash, while the front-line moderators and the identification software do a lot of the work.

But, of course, even in those places, those policy teams continue to handle certain people and certain cases and make direct decisions.  President Trump is an easy example in the US, but there are certain kinds of users and kinds of content that still rise to a more ad hoc assessment, right?

So either cases that get escalated, or cases that are highly sensitive, demonstrably problematic, in the midst of a public controversy.  So, to stay with the US president as an example, those decisions are not being made by front-line moderators on a team in Hyderabad.  They are being thought about in-house.

And that gap between how platforms moderate the many and how platforms moderate the few, I think, has been growing.  And while this group will care a lot about what that means for political figures, I think in a lot of ways that gap developed more around influential content producers, right?  So the way YouTube thinks about it is not: what do we do about any old person who posts a video that might be against our rules, and then, separately, President Trump.

What they think about is the hundred people who bring in millions and millions of dollars.  They are treated differently.  They have access to the team.  They have special handlers, right?  They have people watching that.  Or, you know, a highly charged commentary figure who is always running into trouble and who they need to be ahead of, because if they aren't, it will be in "The New York Times" the next day.

And so when I hear about the kinds of concerns that we have in the international sector, sometimes it's about what happens to any old speech, right?  Someone who is accused of using a slur, where the platform doesn't understand the cultural context.  That's a concern about how people are lost inside a very large-scale system that is largely run through software and human moderators and that is going to find it difficult to be sensitive to culture and language and context.  And then there are other concerns, like what do you do about the leader of a political party who is so prominent that it's not easy to say, well, we will just take down any old post of theirs, but who, of course, is engaged in something that according to other standards would look like hate speech and incitement.

The last part of the question was:  What happens as we see increasing state involvement and concern around the political security of these platforms, around violence and extremism?  And I think in some ways, in that adolescence we are seeing, the platforms are getting more responsive.  They are recognizing some of their international obligations.

What happens when a government or a state comes to them and says: you seem engaged in the effort to think about terrorism and extremism, we are too; let's partner up, wouldn't it be good if you partnered up?  Or: we will provide a list of terrorist organizations, and that will be helpful to you, because you don't want to be judging some new group that's emerged halfway across the globe that you don't know enough about.

But then, of course, it opens platforms up to another kind of vulnerability, right, where they have to judge those partnerships, judge what kind of pressure or influence comes with them, and what they are missing when they engage in that partnership, even if they engage for the right reasons and in good faith.

I come back to the notion that we shouldn't be surprised, because this tension, about what it means when private actors, whether that's a social media platform or a news organization, have a large influence on what people hear and the shape of the news in a given political environment, is an old one.

They are, of course, of interest to governments that want that public space to sound a certain way.

So while the platforms are new at thinking about this, and there are dimensions of social media platforms that we haven't seen before, the tangle of wanting healthy participation but not unhealthy influence is not a new question but an old one.

>> JYOTI PANDAY: You raised several issues: what kind of content platforms censor and remove, and what type of content they decide to leave up.  I think in the Indian context, a lot of the controversy has really been more about why they are not acting on certain content.  And this is where we begin to see the growing influence of governments, whether it is, you know, private platforms in India being hauled up in front of a committee to explain the political bias in their decision making, or being threatened with, you know, changes to the intermediary liability guidelines in India.  We are increasingly seeing these play out in the regulatory context as well.

Now we move on to the more concrete aspects of our conversation, where we seek to deep dive and highlight the gaps that leave platforms open to manipulation.  So this is a question to all the panelists, and feel free to jump in if you think you can answer it in the context of the area you work in.  How do the variety of political and regulatory contexts shape the different ways in which content moderation decisions and enforcement of community standards take place on platforms?

And we already touched on some aspects of this, but I'm intrigued to know if, you know, the threat of regulation is making platforms step in more, or are they taking the approach of enforcing and sticking to global standards so that they are not caught negotiating terms at a local level?  Anyone?

Amelie, Marianne, sorry to call on you.

>> PRATIK SINHA: I can comment from the Indian perspective.

>> JYOTI PANDAY: Yes, go for it.

>> PRATIK SINHA: For example, the ease of doing business impacts how corporates work.  In India, it is increasingly ease of business for friends of the government, you know?  That is how things are increasingly going.  If you are friendly, if you are not critical of the ruling government, then it's easier to do business.

India has seen again and again how institutions are misused to conduct raids, et cetera, on opposition parties, on corporate bodies, on backers of journalistic institutions, investors, et cetera, when it's seen that a certain organization is expressing a thought that is critical of the ruling government.

Now, a lot of this governs how platforms work.  Recently a top official in Facebook India resigned after the "Wall Street Journal" did a story on political partisanship.  And one of the top officers in Facebook India right now has had a history of actively canvassing for Prime Minister Narendra Modi.  How do these people end up in the topmost positions in these organizations?  The way I understand it is because, in India, ease of business is ease of business for friends.

So you have to have friends who are close to the government, to ensure that you do business well.

And this, you know, goes to a lot of crony capitalism.  For example, there's a company called Reliance, which started a mobile service called Jio, and within three years they have become one of the biggest; they have the biggest market share, and it's estimated that in another two or three years they will have almost 50% of the market.  That kind of rise is unheard of for a company, unless you are close to the government.

And so that is one of the reasons why, again, I bring up the issue of countries that are going through an authoritarian turn and how things change in these cases.  And there are a lot of moderation decisions where this is at play.  It is not a justification that Facebook, or Twitter, or Google, or any of these can use.  But a lot of them act because they want to keep the ruling government happy.

And those are the drivers of those decisions.  So recently, you know, the "Wall Street Journal" article spoke about a politician from a southern state in India who has repeated cases of hate speech.  Somebody like that in the US would be taken down without a shadow of doubt.

Because in India it brings so much political pressure, this person was not taken down.  So I want to refer back to the point that was brought up earlier.

There are a couple of things that go against that argument in terms of software.  I was a software engineer for the past ten years.  Look at the feature that Twitter has brought in right now where you cannot retweet right away; you have to click twice.  Now, that seems like a feature which was not thought through.  You bring this in in the middle of an election, and what do they say?  It is a temporary feature, only for the US elections.  But elections are happening in some part of the world every day.  They specifically said this is temporary.  They bring in a half-baked feature.  And why did they bring in the feature?  They want to reduce engagement.  Finally they realize it's the engagement which is driving misinformation, and they want to reduce it, and this is the case for all platforms.

So they say it's a temporary feature.  These are things that show poor software engineering, while there are many other software products which are well thought through.  So I will give another example.

When we are fact checking: the same kind of technology that is used to mark copyrighted content can be used.  Platforms have a database of copyrighted videos and audio, and when someone posts a match, they can flag it because it is a violation of copyright.

The same principle can be used to mark misinformation.  In India, a large majority of the videos and images being used to promote misinformation are the same ones, reused again and again.

If you mark one of them once, you can use a certain amount of automation to mark all of the copies.

You know, saying, okay, one is marked, so let's mark all of those, at least semi-automatically.  But those tools are not implemented, even where the technology exists.
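[The kind of semi-automated marking described here can be sketched roughly as follows.  This is a hypothetical illustration, not any platform's actual system: a fact-checker marks one debunked image, and near-duplicate re-uploads are matched against a database of perceptual hashes, much the way copyright match databases work.  The difference-hash scheme, the tiny pixel grids, and the distance threshold are all arbitrary choices made for the sketch.]

```python
# Illustrative sketch: semi-automated flagging of known misinformation
# images via perceptual hashing. One human fact-check can then catch
# near-duplicate re-uploads automatically.

def dhash(pixels):
    """Difference hash of a grayscale image given as a 2D list.
    Compares horizontally adjacent pixels, so it is robust to
    re-encoding and small brightness shifts, unlike an exact checksum."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return tuple(bits)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

class MisinfoIndex:
    """Database of perceptual hashes of debunked images."""
    def __init__(self, threshold=2):
        self.flagged = {}           # hash -> fact-check label
        self.threshold = threshold  # max bit distance to count as a match

    def mark(self, pixels, label):
        """A fact-checker marks one image as debunked."""
        self.flagged[dhash(pixels)] = label

    def check(self, pixels):
        """Return the fact-check label if this image matches a known
        debunked one; the match would be queued for human review,
        not auto-removed."""
        h = dhash(pixels)
        for known, label in self.flagged.items():
            if hamming(h, known) <= self.threshold:
                return label
        return None
```

A slightly re-encoded copy (small uniform brightness change) produces the same hash and is caught, while an unrelated image is not; that is the semi-automation Pratik is pointing at.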

I mean, I can go on about the number of things that can be done, but that would take up a lot of time.

But, yeah, these are the two points I wanted to make.

>> JYOTI PANDAY: Thanks.  Does anybody else want to weigh in on this question? 

>> AMELIE PAI HELDT: Yes.  Thanks a lot.  I would jump in with the copyright example, because I think it's a very interesting one.  There have been mistakes there as well.  While you can use Content ID to find songs, these systems by no means recognize the exemptions that exist.  There is a lot of jurisprudence about fair use in the US, but there are also other exemptions in the EU and other parts of the world which are not taken into account.  And this brings me to the broader question of global platforms and local laws, because it's really complicated.  It's not something where we as lawyers can simply say, well, you just have to respect the laws of the country where you are, because eventually some countries will use that to pass laws that censor what people say.  And indeed, in some parts of the world, it's important to allow people to talk freely in ways they cannot elsewhere.

I wanted to bring in another thought here.  I don't know if you have heard of this case in Austria against Facebook.  Yesterday, there was a ruling of the Austrian court which actually confirmed that platforms need to take down content globally if it's considered illegal.

I mean, this will really be a challenge where we don't quite know how to handle things.  As Varun said, sometimes the laws and the community standards overlap, but sometimes they don't.  And sometimes it's something that is more of a moral standard or more of a social norm, but it's not a regulation.

And one of the participants raised the example of a picture in Indonesia that was considered obscene, apparently showing nudity, or transparent, see-through clothes.  I don't know whether showing that is forbidden by law there, but in many parts of the world it's not.  So this leaves us with the question: is it a matter of community standards, of moral and social norms, of where you are and where you see it?  Or is it a question of law, because in the end maybe law reflects the kind of norms we want to have clear and applicable, and where we expect compliance by corporations?

So this dilemma of global platforms and local laws will, I mean, stay with us for a while.

>> JYOTI PANDAY: Right.  I think Shirley in the chat, you know, mentioned this example, where transparent panties in Indonesia are considered culturally not acceptable, but YouTube refused to take down the content because, you know, it doesn't go against the community standards.  So, of course, these tensions will continue.  But speaking of other cases, I'm actually interested to know about formal and informal relationships that have developed around these cultural norms, norms that have probably not been incorporated into the community standards but that platforms act on, or where there may not be a local law prohibiting something but platforms stay away from certain kinds of content because it's too sensitive for a particular market.

Do you have any examples off the top of your head?  Tarleton?  It would be amazing to hear some of your experiences on this.  Marianne, you too.  Feel free to jump in.

>> VARUN REDDY: I can jump in quickly here.  The tension between local laws and global standards is something that we, at least me as part of a team that deals with this, feel every day across different countries.

The way Facebook does this is we have a standard global process for government requests.  The process is the same in every country around the world, right?  And these processes are anchored in international human rights principles; we are also a signatory to the Global Network Initiative, which has framed a set of principles that ensure due process and transparency and that are linked to human rights principles.

So we are audited on that.  We follow this global process of receiving requests from governments and complying accordingly.

When we receive a request from a government to remove a piece of content, we assess it first against our own community standards.

In some cases, as Amelie noted and I can attest to, the local law and the community standards overlap.  In those cases we treat that government request like any other user report, because we assess any user report against our own policies and remove content that violates those policies.  So if we receive a government report, we assess it against the standards, and if the content violates them, we remove it; no other action is needed.

But in case that piece of content does not violate the standards, then we go to the next step: a dual review.  First we assess whether the request itself is valid, whether it's coming from the empowered regulator in the country, and whether the request adheres to the process laid down in the law.  Once that requirement is satisfied, we assess whether the content is unlawful as claimed by the government or whatever regulatory agency is sending the request.

If both of these prongs are satisfied, then we may go ahead and restrict the content in the specific country where it is alleged to be locally unlawful.
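[The flow Varun describes is a sequence of checks.  A minimal sketch of that dual-review logic, purely as an illustration of the process as explained here and not Facebook's actual code; the field names and return strings are invented for this example.]

```python
# Hypothetical sketch of the dual-review flow for government removal
# requests, as described in the session: community standards first,
# then validity of the request, then local unlawfulness.
from dataclasses import dataclass

@dataclass
class GovRequest:
    from_empowered_regulator: bool  # sent by the legally empowered body?
    followed_legal_process: bool    # adheres to the process in local law?
    content_unlawful_locally: bool  # is the content unlawful as claimed?
    country: str

def handle_government_request(violates_community_standards: bool,
                              req: GovRequest) -> str:
    # Step 1: a government report is triaged like any user report.
    if violates_community_standards:
        return "remove globally"
    # Step 2, prong 1: is the request itself valid?
    if not (req.from_empowered_regulator and req.followed_legal_process):
        return "no action"
    # Prong 2: is the content actually unlawful under local law?
    if req.content_unlawful_locally:
        return f"restrict in {req.country} only"
    return "no action"
```

The key design point the sketch captures is that only the first branch produces a global removal; the legal prongs can at most geo-restrict content in the requesting country.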

Now, Amelie touched on a point and mentioned how there are these court cases where courts have adjudicated that you should remove a piece of content globally if it violates law, and, true, we don't have an answer to this.  It raises the question of extraterritorial jurisdiction: one country's laws and courts exercising rights over users and, you know, subjects in other countries.  And this is something that I believe will play out for a few years before we arrive at a solution satisfactory to multiple stakeholders, the governments and the courts, right?  It's something for all of us to watch and probably, you know, contribute thoughts to and push the envelope on in that area.

I also want to quickly touch upon what Pratik mentioned, which is certain interventions being US-centric; for example, platforms have taken certain actions because of the US election that they otherwise haven't taken.  This ties back to what Professor Gillespie said, and I want to take a bigger view on it.  It's not just the platforms.  Online speech itself is quite new; it's probably three, four, or five decades old.  Online mass speech is even younger, right?  Probably a couple of decades, since these platforms started taking off.

Human speech offline has evolved over millions of years, along with its social norms: we hold back in some settings and are freer when we talk to close friends on certain topics.  These norms evolved in cultural contexts offline.  To expect all of that to directly translate onto platforms, and for platforms to solve these problems of online speech practically overnight, is very hard.

Some of the interventions, for example: there's a lot of focus on what Facebook or other platforms have done in the US election context, but what people forget is that we are learning with every election around the world.  We have rolled out new features, new interventions, and new ways to squash harmful content with every election.

As we build out the teams and the playbooks and the policies, we get better and better.

So we do this, and it translates to other contexts as well.  That's how we build and push the envelope in terms of interventions overall for users around the world.

>> JYOTI PANDAY: Right.  So in the interest of time, I would just remind panelists to please keep their comments as brief as possible.

Also, I would like to end this discussion with a focus on outcomes.  And, you know, it would be useful if the panelists could suggest steps that platforms could take to limit content moderation being used as a tool in the geopolitical and cultural value conflicts around it.  So what are some of the opportunities and limitations associated with proposals for fostering greater transparency and accountability in the enforcement of platform content moderation standards?

We know, for example, that a Facebook oversight board has recently been constituted.  Professor Gillespie, how effective do you think this board will be in, you know, giving users a voice to challenge content moderation decisions going forward?

If you could.

>> TARLETON GILLESPIE: Yes, yes, I will be quick.  The kinds of problems that this panel is bringing up, the kind of complex relationship that platforms have and will have with states, and with a variety of states all at once, pose a hugely important question, and I don't think that the oversight board is anywhere near the thing that will answer it.

I think the oversight board, and we don't have to get into what it will be good for, is not built for this question.  It's built for Alex Jones, right?  It's built for: we have to have a way to answer to someone who objects to having been removed.  It's not built to handle what stays up, or the proximity of government, and it's nowhere near that question.  It belongs to a family of gestures that a number of platforms have made in the last four years that are steps in the right direction, better than nothing.

But they also suffer from American-centrism.  I'm really struck by Marianne's point that #Election2020 doesn't need to mean the US; that's as much America's problem as the platforms', but it's a problem.

One of the things that we have seen is that as the platforms recognize their global footprints, which they have had for a long time and which grew extremely rapidly, they have tried to respond to everything from formal legal requests that come from states to informal gestures of, if you want to be a good citizen, you should do the following things.  But they are responding the way you respond to a massive oil spill: you start to clean up wherever you can.  Someone says there's a problem here, and you start to address it.

People are saying to platforms: you created a serious problem here, this is what our political environment struggles with, please come do something.  And then you begin to see action.  But the next question, the move from adolescence to maturity, is to figure out what it means to be a private actor in a complex political environment, in a number of countries all at once, whose politics are complex, not the same as the US's, and changing.  That requires a discussion that doesn't just happen at the platforms.  I know that some people have been pushing for a human rights set of principles.  I think that was a really promising start of a conversation, because it says we're going to need standards that exist outside any one platform and outside any one of these national contexts.

When I go back to thinking about journalism, I think about where the norms and the professional obligations of journalism came from.  Those things had to be built over time.  But every time I think about it, I run into the problem that they sound very nationally and culturally specific, right?

We grew those norms around previous information industries in national contexts, where an industry could think about the way it honors certain cultural values and responds to certain kinds of political pressures.  And, you know, to have created a speech environment that acts in 180 countries all at once is just the kind of question that I don't know the old answers work for.

Yeah.  I wish the next sentence were here's what we should do.

(Laughter)

But I don't know what that one is yet.

>> JYOTI PANDAY: Okay.  No.  Even if you don't have, you know, a set of steps we can take, it's good to channel the discussion and, you know, highlight some of these tensions.  Actually, to talk about some of these different cultural norms and how they interact, we have Urvan Parfentyev, with the Russian Association of Electronic Communications.  We invited him to talk about some of the issues that he has seen ‑‑ he runs a hotline that deals with censorship issues.  Urvan, just a request: we have ten minutes left and I would really like to touch on at least one of the Q&A questions from the audience.  So please keep it as brief as possible.

>> URVAN PARFENTYEV: Thank you.  I'm Urvan, from the Russian Association of Electronic Communications.  We run a Russian center which has a hotline and deals with different types of content.  Some of them, of course, are politically centered ‑‑ like, for example, the issues with different types of hate speech, terrorist issues and so on.

And, you know, while we are cooperating with hotlines all over the world, including Europe and the US and so on, we see that we no longer have just a social network or any other platform with a local presence; in fact, it's global.

And in many cases, we do deal with the issues that are influenced by the state of play.

So the professor mentioned the issues dealing with US elections, including Russia's influence in 2016 and so on.  And, you know, we feel that there is a problem: there are actually no standards.  Social networks and media hosting platforms realize that they can't apply all the local standards ‑‑ the US standards, the Chinese standards and so on ‑‑ so they try to create a substitute for global standards.  So the question is: don't we have to think about global governance of this issue?  Maybe the solution to this tension between global coverage and local laws might be a kind of U.N. convention or something, which covers the basic issues of how information has to be treated and basic requirements for the Internet platforms that claim to be global.

So, for example, there might be an obligation to have local moderators in some micro regions.  Because, okay, you may have tens of thousands of moderators, but just in one country.  Local people would deal with local views, the issues and so on, and we would not, again, get this conflict of social and political cultures.

So, maybe we should think of global governance with such requirements for platforms that claim to be global and have many users from different micro regions, and standards which should be kept in their moderation activities ‑‑ for example, respect for certain cultural issues in some regions.  So this is actually my point, and maybe a question for the panel, since I think they represent different areas of the globe.

Thank you.

>> Thank you, Urvan.  Does anybody on the panel want to respond, reflect?

>> TARLETON GILLESPIE: I will say a quick thing, but it doesn't ‑‑ it doesn't conclude.  I do think that if we start to look for governance mechanisms, whether that's the United Nations, whether that's ‑‑ you know, wherever that comes from, I think a lot of times we tend to wish there were standards for speech, right, for which things count.  And I think that is tempting around specific issues, right, when we think about hate speech, but it's also worrisome, because it's hard to imagine how any standard could fit, right?

I think the thing that we have under explored is whether there would be ways for global organizations like the UN to offer guidance about process, right?

And the calls for accountability and transparency are one piece of that.  So what must you be accountable for ‑‑ to acknowledge what is being done or not being done.  But there are other aspects that we can imagine, and maybe this is something like what Urvan is saying: you know, if you serve people of a language, then you have to have X number of people with language expertise and you have to say what you have.

There are other kinds of transparency besides what requests did you get and what did you do in response.  It requires a subtlety that doesn't say that every platform has to act the same way, but acknowledges that simply taking on the role of moderation, simply opening up the ability for people to use something in a country with a very particular political context, means you have certain obligations to the fact that you are there, and those obligations need to be answerable, right?

And that focus on process ‑‑ as well as "we have a specific law that requires certain things to come down; how do we tell you about it, and what must you do in response to it" ‑‑ maybe that's a piece of the puzzle that we haven't spent as much time on.

>> JYOTI PANDAY: 90 minutes is really too short a time for us to touch on all of the complexities of content moderation.  I couldn't have hoped for a better panel.  I would like to conclude this session by pointing the panelists to the Q&A.  We have a couple of questions from the audience.  Some are addressed directly to panelists.  Amelie, there's a question for you, from Professor Milton Mueller.  He's asking: if it's not clear whether it's the community guidelines or community standards or the law that is actually leading to removal, then what is the usefulness of these particular laws?  So if you want to respond to that?

>> AMELIE PIA HELDT: Yeah, I will be brief.  I already answered in the chat, but just to say: the law does not oblige platforms to prioritize the law, like criminal provisions, over community standards.

If you take hate speech ‑‑ hate speech being incitement to violence and insults ‑‑ then they can remove content under their own definition of hate speech, and they don't have to say which German law they think it might actually have infringed.

So there's no ‑‑ they need to apply the law, but they don't need to prioritize it.

>> JYOTI PANDAY: Right.

>> AMELIE PIA HELDT: I hope this helps.

>> JYOTI PANDAY: So time for concluding statements.

To be brief, I think it would be really useful to get a sense of a wish list from each of our panelists: one or two things that they hope platforms can do to improve content moderation, or what they think is going to be useful, especially in the context of this growing state influence over the content moderation powers of private platforms.  I will start with you, Marianne, since I feel like I have really shut you out of this panel.

>> MARIANNE DIAZ: It's okay.

Yes, I wanted to really underline the importance of transparency in content moderation.  I believe there has been a lot of fundamental progress, from no transparency at all, to transparency reports every now and then, to real transparency mechanisms where people can actually see what has happened to their content.

And this is my main example ‑‑ I'm not coming after Facebook; I know Facebook has been improving and working a lot on this ‑‑ but in 2015, we had a video that showed the circumstances under which people were being tortured in Venezuelan prisons, and it was flagged by Facebook as a kind of prior censorship, so that you couldn't publish the link.

And, of course, if there is no explanation at all, you can't really address why the content is not being allowed, because there's no mechanism for that.  I think that's the distance still to go in terms of transparency.

There are unusual behaviors regarding content removals that are not like, okay, your content was flagged because this is nudity or whatever ‑‑ that's an explanation at least.

But there is still a wide spectrum of reasons why some items are taken down that can seem political.  There's no response to that.  There's still content that just disappears, and there's no explanation.  And I think that with transparency, there has been a lot of progress, but there's still a way to go.

>> JYOTI PANDAY: Pratik, do you want to go next?

>> PRATIK SINHA: My wish would be that, you know ‑‑ we know that the platforms took some steps in the US elections just because we expected things to happen.  They knew what happened in 2016, and they knew that there was going to be a repeat, especially considering who the president was.  Everybody expected the claims being made in the US.  It's no surprise to anyone.  So basically, the platforms learned from history.

That is the same thing that the platforms need to do in other countries.  Learn from history, and realize they can't be working independent of the political context of a country.  And who they hire and who they don't hire ‑‑ that's something that the platforms need to take care of.  That would determine how the platforms work in a specific country.  That's about it.

>> JYOTI PANDAY: Thanks.  Amelie.

>> AMELIE PIA HELDT: Thanks.  I would just like to echo what has been said about governance structures, and I think that if there was sort of a transfer to an independent body or a third body to oversee some of these decisions, that could be a great way to move forward.  And also, I think, indeed, we need to focus more on procedural rules and less on content-based rules, and I hope that the Digital Services Act might be sort of a blueprint for that ‑‑ instead of having even more speech-restricting laws, rather focus on the process.

>> JYOTI PANDAY: Professor Gillespie?

>> TARLETON GILLESPIE: Yes, I think we have seen platforms learning ‑‑ truly learning that their public footprint is greater and more complex than they thought, and that's good, and we should see more of that.

I think that one thing we can do is help push that sensibility, not just to the largest platforms, which are already deeply engaged in having to be political actors, at least in the big countries where they see themselves as playing in a market, but by offering up best practices that help a private intermediary navigate these tensions and do it well, such that best practices don't just speak to four companies but speak to the next 100.

>> JYOTI PANDAY: Thank you, professor.  Varun, our hopes rest with you.

>> VARUN REDDY: Thank you.  I do have a wish list.  These are personal reflections after being in this space for almost four years now.  The first is a greater acknowledgment of complexity, rather than debating the definitions themselves.  I see a lot of commentary where an episode or a piece of content is taken down and then there's an immediate imputation of motives ‑‑ they took it down for this reason, or they took it down for whatever reason.  We should step away from that, and this is to an extent what Amelie and Professor Gillespie are saying.  We should look at the governance process rather than the specific definitions themselves.

So this acknowledgment of complexity and scale will lead to more nuanced engagement and push the platforms as well.

The second wish is more for civil society and academic research.  There are some countries where the research and the engagement with this issue are obviously quite advanced.  The US definitely comes to mind, and that's quite natural, because a lot of these platforms have come from there.

We do want ‑‑ at least I want ‑‑ more of this research to come out from different parts of the world, so that, you know, it's also informed by these cultural contexts and local social and political contexts as well.

This research and this additional new thinking will then inform and push the envelope in terms of what platforms can do and their ability to learn and, you know, improve their systems.

And the final thing is just a call for greater collaboration.  I think this panel discussion was a great example of that.  I personally have some takeaways.  I will go back and read up on some of the work of the panelists that I have not previously engaged with.  And that will spread within the country and within my teams, as we engage with industry peers as well.

So this greater collaboration between civil society, governments, academia, and industry ‑‑ it's definitely top of mind for me at the end of this conversation.

>> JYOTI PANDAY: Thank you, Varun.  And on that note, we are already five minutes over time.  Thank you so much to my panelists for showing up early in the morning, in the afternoon, in the evening.  Thank you to Juan Carlos Lara, who woke up especially early to help me moderate.  And thank you to the Secretariat for making sure we had access to links and for not cutting me off even though we went five minutes over time.

Thank you.  Enjoy your weekend, everybody and hopefully we will stay in touch.

It's Diwali here in India.  Bye.

>> Bye.

>> Thank you so much.

>> Thank you, everyone.
