Session
Classroom
Duration (minutes): 75
Format description: A 75-minute open forum in a "classroom" setting is ideal for starting a conversation on an already adopted and available instrument (the Council of Europe Guidance Note on countering the spread of online mis- and disinformation through fact-checking and platform design solutions in a human rights compliant manner) because it allows for structured yet open discussion. This format encourages focused interaction, providing ample time to introduce the topic, outline key points, and engage the audience. The classroom setting fosters a collaborative atmosphere, while the raise-of-hands method ensures that all participants have the opportunity to contribute and voice their opinions. This approach helps clarify doubts, stimulates deeper engagement, and promotes diverse perspectives, enabling a more comprehensive understanding and progression of the conversation. Seventy-five minutes will be sufficient to analyse topics that require reflection from both a substantive and a technical point of view.
This session will address the challenges arising from the use of Artificial Intelligence tools to generate and spread disinformation, and the distinctive threats these pose to democratic dialogue. The quality of public debate is threatened at various levels, ranging from false content spreading at a scale unlikely to be tackled by human intervention alone, to the propagation of false information by individuals who believe it to be true and share it in good faith.
Starting with a presentation of the Council of Europe Guidance Note on countering the spread of online mis- and disinformation through fact-checking and platform design solutions in a human rights compliant manner, participants will discuss practical measures policymakers and stakeholders can take, such as support for fact-checking, platform-design solutions and user empowerment. The session will also examine the role and responsibilities of digital platforms in both the dissemination of false AI-generated information and the promotion of quality journalism.
Bringing together media professionals, AI experts, policymakers, and other stakeholders, the conversation will highlight AI's dual role: a potential vehicle for producing and distributing disinformation when misused, and a tool for enhancing fact-based information and enabling a safe, inclusive and favourable online environment for participation in public debate.
Panellists will share their experiences, challenges, and strategies for combating AI-driven disinformation, as well as the efforts put in place to maintain trust in news production. The discussion will also address the challenges arising from the growing use of generative Artificial Intelligence systems, including technologies such as deepfakes, highlighting the need for regular updates and careful vigilance in understanding disinformation.
To foster an interactive and inclusive dialogue, audience members — both onsite and online — will be encouraged to ask direct questions and actively engage in the discussion.
The session will conclude with a summary of key takeaways, reinforcing the importance of assessing AI's potential benefits and risks in countering the spread of online mis- and disinformation.
Council of Europe
Speaker 1: David CASWELL, Product developer, Consultant and Researcher of computational and automated forms of journalism
Speaker 2: Chine LABBÉ, Editor-in-Chief and Vice President of Partnerships at NewsGuard
Speaker 3: Iva NENADIC, Assistant Professor in Journalism at the Faculty of Political Science, University of Zagreb, and Research Fellow at the Centre for Media Pluralism and Media Freedom, European University Institute, Croatia
Speaker 4: Olha PETRIV, Artificial intelligence lawyer, Center for Democracy and the Rule of Law (CEDEM), Ukraine
16.10
Targets: Disinformation undermines trust in the media and threatens the reliability of the information that feeds public debate. Combating disinformation and ensuring a healthy, plural and reliable digital environment is crucial for upholding democratic values. Addressing the risks posed by AI, and expanding the benefits its use can bring to transparency, accuracy, and accountability in the information landscape, directly serves the UN goal of ensuring public access to information and protecting fundamental freedoms.
Practical guidance and recommendations to policymakers and stakeholders (including governments, regulators, industry, journalists, civil society, researchers, and users) on countering the dissemination of online disinformation through fact-checking and platform-design solutions, in a human rights compliant manner and with due regard to user empowerment, are an important precondition for addressing the negative impacts of AI on disinformation.