IGF 2022 Day 1 Lightning Talk #36 How to localize tech policies and achieve platform accountability – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: We can get started. Thank you for joining us. I want you to raise your hand if you know who Timothy McVeigh is, if you've heard the name before. And secondly, for those of you who have a YouTube or a Facebook account, I want you to raise your hand if you've read either Facebook's community standards or YouTube's community guidelines. Have you read them?

Okay. Nice. So I'm going to confess: I have not read them before. I didn't even know where they were on the website.

Now, I will explain ‑‑ the title of our report is Who is Timothy McVeigh: why localization matters in social media platforms. My colleague will explain why he's central to our story here. Not many of us have read Facebook's or YouTube's content moderation policies. I want you to think about why we haven't read these policies that are there to protect us, especially minorities and more vulnerable groups, and why it's so difficult to find them. These policies are one of the most basic aspects of transparency in content moderation, and platforms rely on end users to report and flag content across the site in the languages they speak. The quality of end users' understanding of the terms of engagement impacts the entire value chain. This is made more difficult in Amharic, Bengali, Hindi and Arabic, the four languages we looked at, because there are not enough content moderators for them, unlike for a language such as German. Per capita, it's disastrous how poorly supported these languages are ‑‑ languages Meta calls minority languages but that are really global majority languages. Platforms go after the global majority, the global south, to get them hooked on these products, but don't provide access to the same platform that English speakers get.

I want to share this with you. You'll see a link ‑‑ a QR code we've put on some seats. It has a link to the full report. You can see it in four different parts, specific to Amharic, Bengali, Hindi and Arabic.

The spoiler alert is that the quality of Facebook's and YouTube's content moderation policies is far below the standard an average reader would consider acceptable, according to the eight of our translators who went through each and every one of them. We'll hear from them in a minute. This not only inhibits, and in places makes impossible, understanding what the policies intend to communicate. It also goes against the principles of transparency, accountability and due process in content moderation, which were written in 2018 and which both Meta and Google have endorsed. What I want to highlight here is that our findings show both companies are going against Section 2, which stipulates that companies need to publish clear and precise rules that end users can easily understand.

The same focus is not given to speakers of these languages. The findings also suggest that the companies are going against Section 3, which stipulates that users should have access to the rules, policies, notices, appeals and reporting mechanisms in the languages and dialects they speak. Clicking on Zulu just takes you to English; it's not translated at all. I want to emphasize that the bigger problem is that we're not offering the rule of law ‑‑ we're not making these platforms accessible at all for the majority of people in the world. So it's not only that women are being kicked offline and minorities are being pushed offline, or that content moderators have conflicting issues they want to tackle. Most users of Facebook and YouTube are not engaging with the platforms in English, so it's critical that they're able to understand the policies, because this is the most basic aspect of content moderation and it impacts all other processes.

Now, this shows that there's a ‑‑ let's go back. We might be missing a slide. We only have 30 minutes to go through the whole thing; it's a 60‑minute presentation. This shows that there's a significant gap in the languages that are offered. When Facebook and Free Basics went after most of the African continent and Southeast Asia, they wanted the platform to be used, but the content moderation policies that show people how to use the platform ‑‑ how to talk to each other, what is allowed and what isn't ‑‑ are vastly different. For example, Facebook supports 112 languages, but only 76 have translated content moderation policies. YouTube supports 71, but only 52 are translated. Twitter supports 48, but only 36 are translated. This used to be a lot worse two years ago; this is actually a step up.

We chose Hindi because it is the most widely spoken language in India, which has the largest user base on Facebook and YouTube, and Arabic because it is one of the most widely spoken languages with one of the largest user bases. In India, there are 350 million monthly active users on Facebook and 467 million monthly active users on YouTube, which makes it the single largest market ‑‑ and it's weird that Meta calls it a minority language. In Egypt alone, there are approximately 46 million users on the platforms. Just to explain, for Amharic and Bengali the translations were extremely poor: machine translation was used, and the Bengali translation did not use the correct dialect. We wanted to assess what the quality was like and what the real‑world impact of this is.

The report also discusses how translation impacts the entire value chain of content moderation and platform governance: from end users' ability to report content, to moderators' ability to detect and remove harmful content, to machine learning, and to regulators' knowledge about what content is on the platform and what's allowed to be on it. We wanted to understand how editorial decisions in the English source language ‑‑ for example, the intended reading level ‑‑ are reflected and understood in the translation. We also wanted to document whether and how the translations may use biased language to explain the policies. Giulia, I'm going to hand this over to you for the second part.

>> GIULIA BALESTRA: Okay. So I get to do the fun part, which is talking about the key findings. As Dragana mentioned, our research found that translations of these moderation policies across the four languages analyzed were below the quality standard that could be considered acceptable by average users. Most translations contained a number of quality issues, and this had an impact on users' ability to understand the content. For most of the languages that we looked at, the translation was actually so poor that users had to refer ‑‑ and we have Atnafu here who can talk about this ‑‑ to the English source text to understand the policy in their own language. As a reminder, these policies are ‑‑ let me take a breath. Sorry. They are intended for users to understand what is acceptable and not acceptable on these platforms. So we can imagine that confusing and potentially misleading translations make it impossible for non‑English‑speaking users to make informed decisions about the content that they post, share and see online. And this has an impact on content moderation and platform governance more broadly.

So we can only assume that providing clear and usable translations of these policies is a crucial task and should be considered a minimum standard. We can look at the key findings one by one. I actually have it here. Sorry.

So the first finding is that a lot of these policies were translated word for word instead of being translated for meaning. Most of the texts showed regular and systematic mistranslations of terms into words not recognizable to users and speakers of those languages, even when the translation of those terms may have been technically accurate. One example of a technically accurate translation that's incoherent is the translation of Facebook's incitement of violence policy. The mistranslated term was "calls", used to describe calls for violence, meaning invoking and inciting violence. The Arabic translation refers to these calls as phone calls, which creates confusion: the translated policy erroneously states that phone calls invoking violence are prohibited, rather than inciting violence online. Of course, this is a serious inaccuracy. First of all, it communicates that only phone calls, instead of incitement to violence more generally, will be restricted online, and this implies other calls to violence may be acceptable. The second point is that in translating calls as phone calls, users might be misled to think that audio calls on Facebook may be monitored, which obviously contradicts other policies stating that calls and chats are encrypted.

The second finding is the lack of contextualization. The policies are exclusively U.S.‑centric, with no adaptation to the specific social, political, cultural or religious context. The initial question and the title of our talk, who is Timothy McVeigh, is a question most of our reviewers had when looking at the policies. It comes from, I think, Facebook's policy on dangerous individuals, which states that supporting and praising dangerous individuals is prohibited and uses the example of Timothy McVeigh, who some of you know is an American terrorist involved in the 1995 Oklahoma City bombing. As one might expect, the example did not resonate with any of our reviewers or speakers of those languages outside the U.S.

So this type of lack of contextualization can be potentially more dangerous. If a policy is something I cannot relate to or don't understand, how can I decide whether the policy applies to me and the content I'm creating, sharing or viewing online? We've actually seen this in the real world. The third finding is that systematic errors in grammar and punctuation were common. I'm not going to cover all of them. Focusing on the Amharic translation, there are mixes of singular and plural forms and a lot of punctuation errors, and this results in text that doesn't reflect the original source text and also lacks clarity. These can seem like minor issues; however, they do contribute to making the policies less readable and less clear, and they lead to some misunderstanding. I forgot to move the slides.

Technical language was often transliterated in these policies. Transliteration means the words are carried from one language into another using similar‑sounding letters or different characters, and in the process the meaning might get completely lost. Transliterated text requires users to have a high understanding of English, especially for very technical terms: in order to understand the words being transliterated, users would have to understand the words in English first, which was not the case. So concepts like scams or content, or the other examples we have there, would require explanation or contextualization, and omitting that creates barriers to comprehension. One more aspect we looked at was bias reflected in the translation choices. Language choices are not neutral, and very often they reflect existing power dynamics, favoring the privileged and more powerful groups over more vulnerable and minority groups. This has the risk of exacerbating inequalities. The choices made around regional or local dialects, the specific expressions or words used ‑‑ they can all be and reflect a political choice, and we should think about who is included and excluded by these choices and how they might affect whether users see these policies as applicable to them or not.

It is important to mention that there are significant differences between the Facebook and YouTube community standards and guidelines in terms of how good the translations are, how inclusive they are, and how usable they are for users. This may also vary from one policy to another within the same platform. I mentioned some of the differences here. As said, we have a full report available; you can find it online. If you are interested in this work, you can talk to us any time.

And moving to almost the end. In conclusion, as an organization that works on localization and making technology more accessible in minority languages, we know translation and localization are difficult and complex. Based on this research and our experience, we have recommendations to improve the quality and usability of these policy translations. The first is creating translation processes that serve and work with end users and use human‑centered, participatory approaches to translate for meaning, so users can understand the content they are presented with and make better choices online. This can look like involving and working closely with communities to translate and localize content. The second is being clear about what the policy is for: is the policy intended to protect users and their rights?

Or is it a policy to protect the platforms?

That shapes the time and effort we put into translating and localizing the content: being clear about the specific audience targeted, the language standard, the dialect choices and what those choices implicitly mean. And a third aspect, which is of course very important to us, is about context and localization ‑‑ closely working with end users and localizing the examples used in the policies so they are understandable and relevant to a specific audience. This can look like hyper‑localizing these policies and their examples, making sure they speak to the users and communities that are potentially affected.

And we might have time for discussion.

>> DRAGANA KAURIN: This one's a lot louder. Maybe we can do a quick intro; we jumped right into it. My name is Dragana Kaurin. I'm the executive director of Localization Lab.

>> GIULIA BALESTRA: And I'm Giulia Balestra, program manager at Localization Lab.

>> I'm Jamie.

>> Hi, everyone. I'm Atnafu Brhane from here.

>> DRAGANA KAURIN: And a contributor with Localization Lab, for a long time now. I can't believe you didn't lead with that. We wanted to invite Jamie and Atnafu today to talk about the impact of not making this accessible for people. Maybe the first question I'll hand over to you, since you work directly on it. More broadly, what were your impressions?

And what kind of an impact does this have on women and LGBTQ folks? What kind of an impact does it have on minorities when we don't have an even playing field on these platforms?

>> ATNAFU BRHANE: First, I didn't know the policies. I didn't know there were policies available in Amharic until I was reviewing them. I am an Amharic speaker, and I had to use the English version to understand the Amharic version. The thing is, I think the platform made that available just to tick a box, to show they are doing something in Africa, where conflict is rampant because of the situation on social media platforms. There is no context in the Amharic translation, and there are examples that we don't understand ‑‑ I don't think we know the guy. I learned something new while reviewing the translation. In general, the policy doesn't help Amharic speakers here in Ethiopia.

>> So I think for us, something that we have seen and also been concerned about is that generally the platforms are not investing in human content moderation in this region, and the effect that has on harm reduction for people at risk. I'm from Kenya, for example, and we've had a political history around elections, so our National Cohesion and Integration Commission came up with a hate speech lexicon that platforms could use to flag content. Weeks before our election, Global Voices did an experiment to test Facebook's preparedness, and all of the test advertisements flew right through. If they are not taking those kinds of things into consideration, it puts people at risk, because you've seen how much incitement to violence has proliferated around those political times in our country, and the effect that has in the short term with the elections and disinformation, but also the long‑term effects on people like women, girls and ethnic minorities in countries like ours. Something else we have been concerned about is the increasing online hate and violence against LGBTQ people in Africa. In Ghana, for example, over the past year since the unveiling of the LGBTQ bill, there has been an increase in hate online against LGBTQ people. If they are not able to access those kinds of policies to flag the content, it really makes harm reduction difficult in the online space, and that kind of harm is transferred offline as well. And when I think of women as well, I look at, for example, the non‑consensual imagery policies. If they are not accessible in other languages, it's really difficult for victims or survivors affected by that to flag that kind of content and prevent it from being spread online.

>> DRAGANA KAURIN: In the report, we have some specific examples in each of the languages, especially on this part. The policies are so poorly translated that it makes it easy to harass women and very easy to abuse people who don't have easy access to the policy mechanisms that would let them say: here's the policy that says you can't put this image of me up here. If it doesn't exist in your local language, why would we expect people to jump over to the English one?

>> GIULIA BALESTRA: They are not easy to navigate. I've read them in English and in other languages I speak, and sometimes I'm still unclear about what I should be doing online. So I think the quality of translation is one step, but thinking about how to make these policies really usable and accessible to people is another step that would help.

>> DRAGANA KAURIN: One final question. These two enormous companies that are extremely wealthy, why are they using machine translation for content moderation?

And how do we get better translations with contextualization? The title came about because two of the translators literally wrote as a comment: who the hell is Timothy McVeigh?

I thought it was a provocative title, and it was interesting that comments like that were in there. But better translations aren't enough ‑‑ what is the overarching issue here?

Who are the content moderation policies for?

Who is the platform really for?

And why is it that we can only push Facebook/Meta to do something better once civil society gets together to point out issues like this? There's no really easy answer. I'd love to hear from both of you on where you see this going. The report goes into some specific examples, but this is a symptom of a disease. How do we tackle the bigger issue?

>> So for a lot of civil society organizations working on this, tackling the bigger issue means looking at what comes after ‑‑ how platforms apply their content moderation policies. With the conflict in Ukraine, we saw more proactive responses from platforms than in other places that have had similar conflicts. So I think we need to look at the fact that, for example, in Africa, companies like Meta are outsourcing their content moderation to third parties, and being able to do that while those third parties overwork and underpay these workers, without being held accountable for it. Take, for example, the current case of Daniel Motaung, who is suing Facebook: Meta is distancing themselves and saying this person is not our employee, he's an employee of Samasource. But their policies apply not only to their workers but to their contracted workers as well. So I think we really have to look at things at the root. When we say we don't want them to rely only on automated content moderation, we have to look at how human content moderators are already being treated. How they are treated and their working conditions impact the ways they are able to flag these kinds of content and respond to them. If that's not being addressed, we'll keep dealing with the same things.

>> ATNAFU BRHANE: To talk about content moderation: Meta now moderates content in three local languages. It's not only the conversation on Facebook that is polarized; the content moderators themselves, in Kenya, are so polarized and favor their own backgrounds. This has always been a problem. There was a content moderator whom I know who does a part‑time job as a news anchor for nationalist media and who broke the news ‑‑ not neutral at all. We have been raising these issues for a long time. It doesn't mean that they don't have a presence here in Ethiopia; they support the work that we are doing here. But that's not enough. They need to scale up their work, their moderation. At the same time, they need to give priority to the country. This is not a minor issue; it's a huge one. It's affecting everyone's lives.

>> DRAGANA KAURIN: That's what they said: we want to put in more effort, and we don't want to be complicit in crimes again. And the same thing happened. If I can add to that: listening to civil society and to vulnerable groups, who are always on the receiving end of abuse when there is a lack of attention like this. I want to thank you for coming. We have some stickers and some pamphlets on the back table here. If you want to get involved, we hope to do more work like this in other languages, especially if you have funding. We have 7,000 contributors who work on translating resources, and we are starting to do more research. We're a grassroots organization and would love to work with you. So there's some information here. Thank you, everyone, for coming.