IGF 2023 WS #425 Governing generative AI-enabled disinformation


Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning

Organizer 1: Brenden Kuerbis, Georgia Tech
Organizer 2: Jyoti Panday,
Organizer 3: Han Seungtae, Georgia Tech

Speaker 1: Brenden Kuerbis, Civil Society, Western European and Others Group (WEOG)
Speaker 2: Santiago Lyon, Private Sector, Western European and Others Group (WEOG)
Speaker 3: Nayana Jayarajan, Intergovernmental Organization, Western European and Others Group (WEOG)


Onsite Moderator

Brenden Kuerbis, Civil Society, Western European and Others Group (WEOG)

Online Moderator

Jyoti Panday, Civil Society, Asia-Pacific Group


Rapporteur

Han Seungtae, Civil Society, Asia-Pacific Group


Panel - 90 Min

Policy Question(s)

How do we balance the need for innovation, expression and the development of new technologies like generative AI with the potential risks they may pose to individuals and society?

What is an appropriate policy framework (e.g., organizational, national, transnational) and what roles should various stakeholders play in governing generative AI-enabled disinformation?

How do we effectively verify the source and authenticity of generative AI-produced content, especially in the context of disinformation campaigns?

Is there a need for further collaboration between government agencies, civil society organizations, and online platforms to develop effective strategies for identifying and mitigating generative AI-enabled disinformation?

What will participants gain from attending this session?

Workshop participants will hear speakers clearly identify, articulate, and evaluate the threat model for generative AI-enabled disinformation activities, and will join an exploration of the incentives, challenges, risks, and benefits of various governance responses.


Recent progress in, and the widespread commercialization of, large language models (LLMs) appears to enable large-scale production of digital content that is seemingly indistinguishable from human-created content. Some fear that generative AI will drastically enhance adversaries' capabilities to create disinformation (false information deliberately spread to deceive people) and encourage a new, more effective type of influence operation. Whether or not this proves true, the expanding capabilities of LLMs and AI-generated content might be exploited, in theory enabling an adversary to influence victims and erode confidence in communication and institutions through social and psychological manipulation.

AI software providers acknowledge concerns over the social impacts of increasingly complex (and presumably more impactful) generative models, including AI-enabled disinformation, and have pursued varying tactics to mitigate perceived or actual risk. These include delaying the release of more advanced models, training models to restrict certain uses of LLMs, and watermarking model outputs to establish origin. Other software providers are developing content provenance systems, which may help address disinformation by enabling verification of the source of information and its authenticity. Meanwhile, online platforms (e.g., social media, news organizations) have years of experience and have developed capabilities to identify and mitigate disinformation. Government bodies have created regulatory obligations and, along with civil society organizations, have expended significant effort to develop codes of practice on disinformation, shed light on disinformation campaigns, and increase digital literacy.
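The provenance idea mentioned above can be illustrated with a minimal sketch. Real provenance standards (e.g., C2PA) use public-key signatures over structured manifests; this toy example instead uses a keyed hash over the content, purely to show the verification principle: a tag issued by the content's originator lets anyone holding the key confirm the content has not been altered. All names here (`SECRET`, `sign_content`, `verify_content`) are hypothetical, not part of any real provenance API.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content issuer (a real system
# would use asymmetric keys so verifiers never hold the signing secret).
SECRET = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag: an HMAC over the content's SHA-256 hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches its provenance tag."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"AI-generated image bytes"
tag = sign_content(original)
print(verify_content(original, tag))    # authentic content verifies
print(verify_content(b"tampered", tag)) # altered content fails
```

The design point is that verification binds the tag to the exact bytes published: any edit to the content breaks the check, which is what makes provenance useful against manipulated or misattributed AI-generated media.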

This workshop brings together key stakeholders concerned with generative AI-enabled disinformation to explore the potential threat as well as possible governance solutions.

Expected Outcomes

A key outcome would be for participants to assess whether an ex ante approach (for instance, focused on model creation or use) or an ex post approach (for instance, focused on verifiable content dissemination or education) is more appropriate for dealing with generative AI-enabled disinformation.

An important outcome, and a follow-up process, would be to establish awareness and ongoing assessment of the incentives, challenges, risks, and benefits of the policy and technical approaches being considered.

The organizers expect to publish an edited transcript of the discussion.

Hybrid Format: IGP ran both a Town Hall and a Workshop in hybrid mode at the 2021 IGF in Poland. In both cases, the onsite moderator and online speakers had no trouble coordinating through the Zoom interface. The sequence of speakers will be worked out in advance, so both onsite and online speakers will know when they are expected to start. During discussion, the onsite moderator will keep track of hands raised in the room, while the online moderator watches for virtual hand raises in the meeting platform. The two moderators will alternate between in-room and online questions.