IGF 2023 WS #401 How to democratize the internet – new regulatory approaches

Subtheme

Human Rights & Freedoms
Internet Shutdowns
Non-discrimination in the Digital Space
Rights to Access and Information

Organizer 1: Julia Iversen, GIZ
Organizer 2: Nina Bendzko, Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH
Organizer 3: Franziska Jakobs, Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH

Speaker 1: Helani Galpaya, Civil Society, Asia-Pacific Group
Speaker 2: Julie Owono, Civil Society, African Group
Speaker 3: Gemma Galdon Clavell, Technical Community, Western European and Others Group (WEOG)
Speaker 4: Ben Scott, Civil Society, Western European and Others Group (WEOG)

Moderator

Julia Iversen, Technical Community, Western European and Others Group (WEOG)

Online Moderator

Franziska Jakobs, Technical Community, Western European and Others Group (WEOG)

Rapporteur

Nina Bendzko, Technical Community, Intergovernmental Organization

Format

Other - 90 Min
Format description: The session will commence with a panel discussion (40 minutes) presenting different approaches, experiences, and food for thought around innovative and novel approaches to democratic platform and AI regulation. After this debate, participants will be invited into breakout groups to capture, discuss, and evaluate their experiences, viewpoints, and insights. In a final plenary session, the main discussion points will be collected and recommendations and next steps jointly produced.
5 minutes: Welcome by moderator, setting the scene, agenda
40 minutes: Panel
30 minutes: Peer exchange in breakout groups
15 minutes: Summary of findings and presentation of the follow-up process

Policy Question(s)

What could democratic standards and regulatory approaches look like to ensure a safe and secure digital environment for all, and what role should the involved actors play?
Which roles and responsibilities can different stakeholders assume when applying novel regulatory approaches to ensure democratic platforms and inclusive AI?
Which novel approaches to (self-)regulation may work, and what may be their shortcomings?

What will participants gain from attending this session? Participants will gain insights into novel approaches to (self-)regulation, regulatory cooperation, tools, and methodologies to tackle issues such as disinformation, harmful content, and gender-based online violence (GBOV), and to make platforms, social media, and the AI behind them more democratic, pluralistic, and diverse while ensuring a safe and secure digital environment at the same time. The aim is to provide participants with inspiration and ideas for approaches that they can apply hands-on in their work context after IGF 2023, whether they come from academia, international organizations, government, the private sector, or civil society. To do so, participants will also get the chance to exchange and discuss the different roles, responsibilities, and room for action of different stakeholders.

Description:

States and societies worldwide are facing challenges online, such as targeted disinformation, hate speech, state surveillance, or business models that impede freedom of expression. Platforms in particular, including social media, are often (mis)used to influence participation and the formation of public opinion, e.g. by accelerating the spread of disinformation, harmful content, gender-based online violence (GBOV), and political micro-targeting.
How can the Internet, and in particular platforms, social media, and the AI behind them, become – or stay – more democratic, pluralistic, diverse, and safe places for all? How can especially vulnerable groups and individuals, who are often attacked and silenced by online hate, be protected from harm? How can we train AI to be more diverse, pluralistic, and pacifistic, and to avoid bias, the spread of hate, and discrimination? How can we hold the humans and organizations behind platforms or AI accountable?
Helping to foster the IGF’s open and multistakeholder character as a space for dialogue and debate, this session brings together stakeholders from academia, government, the private sector, and civil society across different continents who may not have final answers to these big questions but have developed novel approaches that could form part of a solution. The session will present a variety of innovative examples of (self-)regulation, from Oversight Boards and scientific research on content policy to feminist development policy, algorithmic audits, the use of memes for transparency, accountability, or collective redress, and public litigation – all aimed at protecting human rights and freedoms.
The session will also discuss the need for new policies, cooperation, and approaches, including possibly new and different roles and responsibilities for different stakeholders in establishing a safer and more accountable digital environment and a participatory, rights-centred governance of platforms.

Expected Outcomes

This session will result in action-based policy recommendations to guide development actors working in the field of democracy protection in the virtual space, compiling the examples presented as well as those contributed by session participants.

Hybrid Format: The session will be organized in a fully hybrid manner, with a mix of on-site and online speakers. To ensure high-quality collaboration, virtual collaboration tools will be used in the breakout sessions to document the discussions and present them in the plenary.
The virtual collaboration tool will also serve to make the session more interactive, allowing for small online activities that enable participants to get to know each other in a time-effective way.
The moderators will ensure that all participants, online and on-site, have access to the virtual collaboration tool and can provide input. Some of the breakout groups (depending on the number of participants) will take place on-site only, to enable a different level of interaction between participants (while still recording the main findings on the virtual collaboration tool).
Conceptboard or Miro may be used as virtual collaboration tools; Mentimeter may be used for voting or initial icebreakers.