IGF 2020 WS #342 People vs machines: collaborative content moderation

Time
Monday, 9th November, 2020 (17:20 UTC) - Monday, 9th November, 2020 (18:50 UTC)
Room
Room 3
About this Session
This session explores the complexities of content moderation at scale and the potential implications for trust in the internet. While some platforms deploy technology to moderate information automatically, others enable their users to participate in moderation practices. In this session, speakers will discuss the kinds of support, architecture, norms, and other systems that communities need to do this work effectively and safely.

Organizer 1: Allison Davenport, Wikimedia Foundation
Organizer 2: Anna Mazgal, Wikimedia
Organizer 3: Justus Dreyling, Wikimedia Germany
Organizer 4: Jan Gerlach, Wikimedia Foundation

Speaker 1: Robert Faris, Civil Society, Western European and Others Group (WEOG)
Speaker 2: Mira Milosevic, Civil Society, Western European and Others Group (WEOG)
Speaker 3: Marwa Fatafta, Civil Society, Asia-Pacific Group
Speaker 4: Mercedes Mateo Diaz, Intergovernmental Organization, Latin American and Caribbean Group (GRULAC)

Moderator

Anna Mazgal, Civil Society, Eastern European Group

Online Moderator

Allison Davenport, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Justus Dreyling, Civil Society, Western European and Others Group (WEOG)

Format

Round Table - U-shape - 90 Min

Policy Question(s)

Trust and democracy:
How can policy support participative, collaborative content moderation that creates trust in platforms and the internet?

Freedom of expression and harmful content:
What kinds of architectures promote people's ability to address disinformation, incitement to violence, and other types of content that can harm society?

Safety online:
Where do users need to be supported through tools to address harmful content without being harmed themselves?

Content moderation online is defined by the tension between the need to address societal ills on one side and the imperative to protect freedom of expression on the other. Starting from the premise that internet users should be involved in content moderation, this workshop seeks to address this tension and to identify which aspects of public policy, social norms, and platform architecture are best suited to promote content moderation by online communities, and what kind of support those communities may need.

SDGs

GOAL 9: Industry, Innovation and Infrastructure
GOAL 10: Reduced Inequalities
GOAL 16: Peace, Justice and Strong Institutions

Description:

Large internet platforms for user-generated content increasingly rely on automated systems to curate, promote, remove, or otherwise moderate information. The design features, architecture, and user interfaces that define the nature of a platform also influence how, and to what degree, a user base or community can take editorial control and jointly decide what kind of content to allow in channels, websites, and forums. Certain models of content moderation allow the users of platforms or forums to ensure the quality of content and enforce their social norms, i.e. community standards or rules, in a collaborative manner. This can be very effective: research on harmful content on Wikipedia, for instance, has shown that content moderation by communities can work, but also that there are aspects where platforms need to support them. Different kinds of communities, including those from different regions and backgrounds, may apply different quality standards to the information they want to see in the spaces where they meet online.

At the same time, public policy, such as intermediary liability laws, has a large impact on a platform’s ability to hand over editorial control to its users, i.e. to allow them to upload and moderate content in the first place. This workshop explores the interplay of social, technical, and policy systems that enable a decentralized, collaborative approach to content moderation. A particular focus of the conversation will lie on how online communities can address content that is considered harmful or potentially illegal, e.g. misinformation, incitement to violence, or terrorist content.

The session convenes experts from academia, platforms, civil society, and intergovernmental organizations to discuss how regulation can foster good content moderation practices that respect freedom of expression and democracy while also effectively curbing societal ills online.

Questions that we want to explore with the speakers and participants:
  • What different models of community content moderation are there? What are their pros/cons?
  • Are certain types of architectures better suited to address harmful content?
  • How can regulation support communities to do content moderation well?
  • What constitutes good content moderation? What aspects of it can communities do better than automated systems?

Expected Outcomes

The co-organizers expect the conversation to identify a few key factors that shape content moderation and that lawmakers need to consider as they draft public policy for the internet. In addition, we expect the conversation to yield advice for operators of platforms who want to empower online communities to make decisions about the content they see.

Like its topic, the session is meant to be highly participative. The deep dive into research on content moderation on Wikipedia will illustrate the topic and provide various examples for participants to engage with and ask questions about during the Q&A part.

Relevance to Internet Governance: This workshop is relevant for policymakers and platform hosts alike who create rules that govern how people can engage with content online. Platforms constitute large parts of the content layer of the internet, and the liability rules they are subject to, as well as the terms of service and architecture they develop, directly affect millions of people’s experience online.

Relevance to Theme: This workshop will contribute to the Thematic Track “Trust” by adding to the debate about the moderation of content that is considered harmful or potentially illegal, e.g. misinformation, incitement to violence, or terrorist content. A democratic, trusted internet requires the participation of its users, including in decisions about content moderation and the quality of information.

Online Participation

Usage of IGF Official Tool.
Agenda
  • Intro and framing of the topic
  • Part 1.1: What is collaborative content moderation? The example of Wikimedia EN
  • Part 1.2: How does content moderation affect freedom of expression and other human rights?
  • Part 2.1: How can policy support participative, collaborative content moderation that creates trust in platforms and the internet?
  • Part 2.2: The role of architectures in promoting people's ability to address disinformation, incitement to violence, etc.
  • Part 3: Commentary 
  • Part 4: Q&A with the audience