IGF 2023 WS #57 Lights, Camera, Deception? Sides of Generative AI

Time
Tuesday, 10th October, 2023 (08:30 UTC) - Tuesday, 10th October, 2023 (10:00 UTC)
Room
WS 5 – Room B-2
Subtheme

Cybersecurity, Cybercrime & Online Safety
Cyberattacks, Cyberconflicts and International Security
Disinformation
Misinformation
New Technologies and Risks to Online Security
Online Hate Speech and Rights of Vulnerable People

Organizer 1: Man Hei Connie Siu, International Telecommunication Union
Organizer 2: Ananya Singh, USAID Digital Youth Council
Organizer 3: Vallarie Wendy Yiega
Organizer 4: Keolebogile Rantsetse
Organizer 5: Neli Odishvili, CEO of Internet Development Initiative
Organizer 6: Markus Trætli, The Norwegian University of Science and Technology

Speaker 1: Flavia Alves, Private Sector, Western European and Others Group (WEOG)
Speaker 2: Hiroki Habuka, Civil Society, Asia-Pacific Group
Speaker 3: Bernard Mugendi, Civil Society, African Group
Speaker 4: Vallarie Wendy Yiega, Private Sector, African Group
Speaker 5: Olga Kyryliuk, Civil Society, Eastern European Group

Moderator

Man Hei Connie Siu, Civil Society, Asia-Pacific Group

Online Moderator

Neli Odishvili, Civil Society, Eastern European Group

Rapporteur

Wai Hei Siu, Private Sector, Asia-Pacific Group

Format

Round Table - 90 Min

Policy Question(s)

A. How can international collaboration promote ethical guidelines for generative AI technologies and harness their potential for positive applications across fields?
B. How could the prevention, detection, verification and moderation of generative AI content be improved through innovative interdisciplinary approaches and research?    
C. What are the opportunities/impact of generative AI commercialisation on the economy, society, and cybersecurity, including accessibility, affordability, and intellectual property, and what policies and regulations could promote data sovereignty and responsible data use?

What will participants gain from attending this session?
Newcomers to Internet governance will learn what deepfakes and generative AI are, how to detect manipulated content, and how these technologies fuel disinformation, while coming to understand their value and downsides and their effects on public trust, information integrity, and societal stability. Participants with backgrounds or interests in deepfakes, generative AI, the data economy, policy, and disinformation will gain familiarity with the policy landscapes surrounding misinformation, legal frameworks, and ethical considerations, and will uncover the challenges and opportunities arising from generative AI commercialisation. Both online and onsite participants will contribute ideas that enable them to take an active part in policy investigations, create strategies to reduce public risk, increase trust in policy and government, discuss ethical dilemmas, and propose solutions to deepfake-related problems. Participants can also engage with policymakers, tech developers, media, and civil society to explore international collaboration and partnerships, discuss mechanisms to unleash the potential of generative AI, and hold stakeholders accountable for malicious creation and dissemination.

Description:

Advances in generative AI have brought opportunities for new applications and uses, so are there applications that can support political communication? Alongside these advances, however, the rise of privacy abuse and disinformation campaigns put generative AI back in the negative spotlight in 2022. Fake journalist accounts and far-right communities have long used deepfakes for coordinated inauthentic behaviour, but disinformation from generative AI has surged recently, including the circulation of fake videos during India's 2020 elections and the use of AI-generated faces in fraudulent Twitter influence campaigns. International news organisations have warned strongly against the dangers of AI-generated media in cyber and foreign influence operations, used to influence and discredit political information and decisions, especially when targeting women. Yet while legal frameworks and regulations are lacking, as are digital literacy programs to nurture the ability to detect manipulated content and develop holistic communication strategies, is the situation as dire as it sounds?

Some deepfake videos are of poor quality, but technological advances have enabled increasingly convincing forgeries. Despite the substantial manual processing required, the potential rewards from widespread media popularity are significant. More commercial deepfake and generative AI content companies have emerged, creating quality content for entertainment, training, and more. While it is understandable that these services raise moderation and free speech concerns, given the lack of effective policies or partnerships to hold stakeholders in deepfake creation accountable, could these services bring more benefits than the claimed harm, or even help fight disinformation?
Generative AI also holds substantial potential for data value creation globally, offering positive impacts and opportunities across sectors and domains. In agriculture, for instance, generative AI can analyze weather, soil, and crop data to provide insights for optimal planting and harvesting strategies, enhancing productivity. It can likewise process environmental data to model climate change scenarios, helping governments and organizations make informed decisions that reduce environmental impact.

This workshop will tackle the problems, misconceptions, and opportunities arising from the growing commercialisation of deepfake and generative AI content, and investigate policy landscapes to develop strategies that mitigate risks to public trust, information integrity, and societal stability. Generative AI has the potential to revolutionize data-driven insights across numerous sectors, contributing to economic growth, sustainable development, and improved quality of life; its implementation, however, should account for ethical considerations such as data privacy, and for local contexts, to ensure its positive impacts are maximized. The panel will discuss the benefits of developments in AI and the use of data. Ethical dilemmas related to the commercialisation of these technologies and to disinformation will then be explored, including creation, intellectual property rights, liability, and regulation enforcement across borders. We will touch upon the struggle to establish continental protection policies and cross-border data flows, and the missing links in infrastructure and capacities that foster the misuse of these technologies. Innovative approaches that put these technologies to good use and balance freedom of speech with the prevention of deepfake-related harm will be highlighted throughout.

Expected Outcomes

The workshop aims to foster international collaboration, drive technological advancement, and inform policy-making processes for the responsible use of generative AI technologies, while addressing disinformation challenges and safeguarding digital integrity. Stakeholder input will contribute to ethical and policy guidelines that help policymakers, developers, and content creators promote positive applications and prevent misuse. Identifying challenges and opportunities in detecting misleading and generative AI content can lead to innovative research approaches, and discussions on commercialisation can help shape policies and regulations that promote responsible use, protect individuals' rights, and deter malicious uses of deepfakes and generative AI. A survey will be conducted and, together with participants' questions, a report on the workshop's results will be published on the IGF website for participants and interested parties. Reports produced in collaboration with the private sector will be shared with Internet governance and youth communities to raise awareness, share findings, and inspire further action.

Hybrid Format: Both remote and onsite participation in the Q&A session is welcomed and highly encouraged. The onsite and online moderators will work together to ensure a smooth flow of online participation, alternating between onsite and remote contributions so that the online community has opportunities to engage in discussions and raise questions. Online participants can submit questions through the Q&A function of the video conferencing platform, and the online moderator will manage the queue, giving them the opportunity to have their questions answered by our speakers. Online collaboration tools, such as Mural, will be used for interactive exercises and brainstorming sessions; online polling tools will gather instant feedback from both onsite and online participants; and designated hashtags will promote discussion and insight sharing on social media, extending the workshop's reach beyond the event.