Diversity of opinions, freedom of speech and exchanges of views form the core of democracy. The Internet has made it easier to exercise these rights, but it has also created new opportunities for abuse. Illegal and harmful content is now easily spread online, sometimes with dire consequences for democracy itself and for human rights.
While multiple jurisdictions have enacted legislation to deal with illegal online content (e.g. terrorist content, child sexual abuse material), the situation is more complex with harmful content (e.g. misinformation and disinformation). To start with, there is no single definition of what harmful content means. Nor is there general agreement among stakeholders on how such content should be dealt with. Whereas for illegal content Internet platforms usually face clear legal obligations to remove it or block access to it, the debate continues over who should deal with harmful content and how.
Let’s take the example of misinformation and disinformation. The spread of such content – in particular in political contexts – can influence electoral processes and undermine trust in democracy. It can also pose threats to individual and public safety (as we have seen with the surge of misinformation during the COVID-19 pandemic). Internet platforms have long been under pressure (from governments and other stakeholders) to take action against misinformation and disinformation. In response, they have implemented a wide range of policies and measures, such as fact-checking and labeling content, enhancing transparency rules (e.g. to make it clear who is publishing certain content), and even suspending users’ accounts over policy breaches.
But such measures have also led to a series of questions about roles and responsibilities. Where does the responsibility of Internet platforms start and where does it end when it comes to tackling harmful content? Should platforms be left to determine on their own what counts as harmful content and how to deal with it? Or should legislators step in and establish limits on what platforms can and cannot do? These and similar questions bring into focus one key issue: the need to ensure a proper balance between guaranteeing freedom of expression and fighting harmful content.
In this session of the IGF 2021 parliamentary track, we will look at examples of actions taken around the world (by platforms, governments, and other actors) to tackle the spread of harmful content online. We will discuss the implications of such actions for freedom of expression. And we will consider whether legislative action is needed to create predictable frameworks for addressing the challenges of dealing with harmful content.
- Guilherme Canela De Souza Godoi, Chief, Freedom of Expression and Safety of Journalists Section, UNESCO
- Richard Lappin, Head of EMEA, Organic Content Policy, Facebook
- Amalia Toledo, Lead Policy Specialist for Latin America and the Caribbean, Wikimedia Foundation
- Kian Vesteinsson, Research Analyst for Technology and Democracy, Freedom House
Moderator: Courtney Radsch, MAG member
This session is dedicated to members of parliament and parliamentary staff. Registration is required and can be done via this online form.