Description: This session will use innovations in the deliberative method to assess the strengths, shortcomings, and effects of three policy instruments that address disinformation and content moderation at scale in the European Union. It seeks to compare these approaches using a methodology that relies on objective ground truths and a series of deliberations conducted prior to IGF. Participants will identify cross-regional points of convergence with regulatory and other actions being undertaken outside Europe, and develop best practices that cut across geographies. Ultimately, the session will help develop informed solutions that maximize the scope for freedom of expression and democratic discourse while mitigating the harmful consequences of disinformation in online spaces.

The conversation will center on France's Law Against the Manipulation of Information (2018), the UK Government's 'Online Harms' White Paper (2019), published by the Department for Digital, Culture, Media & Sport and the Home Office, and the EU Code of Practice on Disinformation (2018). These three policy instruments represent distinct regulatory and self-regulatory approaches to content moderation and to the proliferation of disinformation online and offline.

In the months leading up to the session, two or three small-group deliberations will take place online, using the deliberative method, specially commissioned Balanced Briefing Materials, and an automated smart-moderator tool designed and tested at Stanford. The materials, generated during the preparation phase, lay out the trade-offs between policy options for governing disinformation. These deliberations will include IGF participants as well as other Internet governance stakeholders. Before the deliberation, we will survey our sample of stakeholders with questions related to the policy instruments; those who take part will be re-polled immediately afterwards. Their changes of opinion represent the conclusions the public might reach if it had the opportunity to deliberate through an informed, fact-based process. We expect to demonstrate that debates grounded in shared ground-truth briefing materials provide a basis for informed decision making on content governance and, more broadly, on freedom of expression online.

Building on these online deliberations, the session at IGF will be structured as follows.

(1) Introduction and overview of the deliberative method (10 min.): The research team will open with an overview of the briefing materials, the rules governing the deliberation, and the performance of the automated moderator tool. Members of the research team will also briefly discuss findings and lessons from the deliberation on NetzDG at IGF Deutschland in 2018, and encourage participants to review the briefing materials for that session separately.

(2) Expert discussion of deliberation results (60 min.): The organizers will present a snapshot of the results of the deliberative polls, focusing on changes in participants' positions on the components of each instrument and in their levels of knowledge following the deliberation. Invited experts familiar with the three instruments and their links to policies being developed in other regions will assess the deliberations' contributions to outlining best practices for addressing disinformation.
(3) Debrief and Q&A (20 min.): The organizers will summarize the session, announce next steps, provide a brief preliminary assessment of the applicability of the deliberative method to the global discussion on disinformation, and allow space for any additional questions.

The session builds on numerous successful implementations of the deliberative method, including the Deliberative Poll on the European Union (2009), a pilot deliberation on multi-stakeholder collaboration for extending Internet access to the next billion users (IGF 2015), a deliberation on encryption (IGF 2016), and the recent IGF Deutschland (2018), at which participants debated the German 'NetzDG' law using the same methodology.
Expected Outcomes: The deliberative method is geared toward producing practical outcomes. Past Deliberative Polling exercises provide strong evidence of significant, measurable knowledge gains and changes in opinion among participants. We expect the same for this workshop, as not all participants will be conversant with all three policies at the outset.

The workshop will produce the following outputs: (1) polling results measuring changes in participants' levels of knowledge and preferences, (2) a set of Balanced Briefing Materials with multiple uses beyond the deliberative process (e.g., comparative analysis of policy instruments), and (3) a report on the findings. The workshop will also lay a foundation for further deliberative exercises on the development of the three policies. Its results will form the basis for advisory opinions on these policy instruments, which will play a direct role in defining best practices for future legislation, particularly where legislative proposals have yet to be formulated.

Finally, in the broadest sense, the workshop will showcase the utility of a novel methodology for carrying out an informed discussion and analysis of laws that govern online content. The method helps counter misinformation about existing and proposed policy instruments, guard against cognitive barriers that could marginalize or exclude individuals, and support reasoned decision-making. All three instruments discussed were published in 2018-19, and this session would be their first comparative multi-stakeholder assessment. This distinguishes the exercise both from sessions that tackle disinformation as a broad issue without explicitly addressing policy instruments and from those that analyze a single instrument in isolation. We are happy to collaborate with other workshop organizers in the same field to ensure that our session is complementary and to drive collaboration in this space beyond the IGF.
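For illustration only, the minimal sketch below shows one way output (1) could be tabulated: each participant's pre- and post-deliberation responses are compared and the mean shift per question is reported. The question labels, participant records, and 0-10 agreement scale are hypothetical placeholders, not the actual survey instrument used in the deliberations.

from statistics import mean

# Hypothetical pre- and post-deliberation responses:
# participant id -> {question label: agreement on a 0-10 scale}
pre = {
    "p1": {"support_law_fr": 4, "support_ohwp_uk": 6, "support_cop_eu": 5},
    "p2": {"support_law_fr": 7, "support_ohwp_uk": 3, "support_cop_eu": 6},
}
post = {
    "p1": {"support_law_fr": 6, "support_ohwp_uk": 5, "support_cop_eu": 5},
    "p2": {"support_law_fr": 7, "support_ohwp_uk": 5, "support_cop_eu": 7},
}

def mean_shift(pre, post, question):
    """Average post-minus-pre change for one question, over participants polled both times."""
    shared = [pid for pid in pre if pid in post]
    return mean(post[pid][question] - pre[pid][question] for pid in shared)

for q in ["support_law_fr", "support_ohwp_uk", "support_cop_eu"]:
    print(f"{q}: mean opinion shift = {mean_shift(pre, post, q):+.2f}")

Knowledge gains could be reported the same way, by replacing the agreement scores with the share of factual knowledge questions answered correctly before and after the deliberation.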