IGF 2023 WS #57 Lights, Camera, Deception? Sides of Generative AI

Time
Tuesday, 10th October, 2023 (04:00 UTC) - Tuesday, 10th October, 2023 (05:30 UTC)
Room
WS 3 – Annex Hall 2
Subtheme

Cybersecurity, Cybercrime & Online Safety
Cyberattacks, Cyberconflicts and International Security
Disinformation
Misinformation
New Technologies and Risks to Online Security
Online Hate Speech and Rights of Vulnerable People

Organizer 1: Man Hei Connie Siu, International Telecommunication Union
Organizer 2: Ananya Singh, USAID Digital Youth Council
Organizer 3: Vallarie Wendy Yiega
Organizer 4: Keolebogile Rantsetse
Organizer 5: Neli Odishvili, CEO of Internet Development Initiative
Organizer 6: Markus Trætli, The Norwegian University of Science and Technology

Speaker 1: Deepali Liberhan, Private Sector, Western European and Others Group 
Speaker 2: Hiroki Habuka, Civil Society, Asia-Pacific Group
Speaker 3: Bernard Mugendi, Civil Society, African Group
Speaker 4: Vallarie Wendy Yiega, Private Sector, African Group
Speaker 5: Olga Kyryliuk, Civil Society, Eastern European Group

Moderator

Man Hei Connie Siu, Civil Society, Asia-Pacific Group

Online Moderator

Neli Odishvili, Civil Society, Eastern European Group

Rapporteur

Wai Hei Siu, Private Sector, Asia-Pacific Group

Format

Round Table - 90 Min

Policy Question(s)

A. How can international collaboration promote ethical guidelines for generative AI technologies to harness their potential for positive applications in various fields?
B. How could the prevention, detection, verification and moderation of generative AI content be improved through innovative interdisciplinary approaches and research?    
C. What are the opportunities/impact of generative AI commercialisation on the economy, society, and cybersecurity, including accessibility, affordability, and intellectual property, and what policies and regulations could promote data sovereignty and responsible data use?

What will participants gain from attending this session?
Newcomers to Internet governance will learn about deepfakes and generative AI: how to detect manipulated content, the consequences of such content and its influence on disinformation, and the value and downsides of these technologies, including their effects on public trust, information integrity, and societal stability. Participants with backgrounds and interests in deepfakes, generative AI, the data economy, policy, and disinformation will gain familiarity with the policy landscapes around misinformation, legal frameworks, and ethical considerations, and will uncover challenges and opportunities arising from generative AI commercialisation. Both online and onsite participants will contribute ideas that enable them to take an active part in policy investigations, create strategies that reduce public risk, increase trust in policy and government, discuss ethical dilemmas, and propose solutions to deepfake-related problems. Participants can also engage with policymakers, tech developers, media, and civil society to explore international collaboration and partnerships, discuss mechanisms to unleash the potential of generative AI, and hold stakeholders accountable for malicious creation and dissemination.

Description:

Advancements in generative AI have brought opportunities for new applications and uses, so are there applications that can support political communication? Alongside these advancements, however, the rise of privacy abuse and disinformation campaigns put generative AI back in the negative spotlight in 2022. Fake journalist accounts and far-right communities have long used deepfakes for coordinated inauthentic behaviour, but disinformation produced with generative AI has surged recently, including fake videos circulated during India's 2020 elections and AI-generated faces used in fraudulent Twitter influence campaigns. International news organisations have warned strongly against the dangers of AI-generated media in cyber and foreign influence operations, which influence and discredit political information and decisions and disproportionately target women. Yet while legal frameworks and regulations are lacking, as are digital literacy programmes that nurture the ability to detect manipulated content and develop holistic communication strategies, is the situation as dire as it sounds?
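
One concrete detection technique that literacy programmes sometimes teach, offered here as an illustrative sketch rather than anything specified in this proposal, is perceptual hashing: comparing a circulating image against a known original to flag near-duplicates that may have been edited. The file names below are placeholders, and this approach catches crude image manipulations, not modern deepfakes.

```python
# Hedged sketch: flag a possibly edited image by comparing perceptual hashes.
# File names are placeholders; small hash distances suggest an altered copy.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("official_photo.jpg"))
suspect = imagehash.phash(Image.open("viral_repost.jpg"))

distance = original - suspect  # Hamming distance between 64-bit hashes
if distance == 0:
    print("Identical perceptual hash: likely the same image.")
elif distance <= 10:
    print(f"Near-duplicate with {distance} bits changed: possible edit.")
else:
    print("Images differ substantially.")
```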

Some deepfake videos are of poor quality, but technological advancements have enabled far more convincing forgeries. Although significant manual processing is still required, the potential rewards from widespread media popularity are substantial. More commercial deepfake and generative AI content companies have emerged, creating quality content for entertainment, training, and more. While it is understandable that these services raise moderation and free speech concerns, given that there are insufficient effective policies or partnerships to hold stakeholders in deepfake creation accountable, could these services bring more benefits than the claimed harm, or even help fight disinformation?
Generative AI also holds substantial potential for data value creation globally, offering a range of positive impacts and opportunities across sectors and domains. In agriculture, for instance, generative AI can analyse weather, soil, and crop data to provide insights for optimal planting and harvesting strategies, enhancing productivity. Generative AI can also process environmental data to model climate change scenarios, helping governments and organisations make informed decisions that reduce environmental impact.
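
As a hedged illustration of the agricultural use case above, the sketch below feeds structured weather and soil readings to a general-purpose generative model and asks for planting advice. The model name, the prompt wording, and the get_sensor_readings() helper are assumptions made for demonstration, not a system referenced by the workshop; the client reads an OPENAI_API_KEY from the environment.

```python
# Minimal sketch: structured farm telemetry in, planting guidance out.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_sensor_readings() -> dict:
    """Hypothetical stand-in for a farm telemetry feed."""
    return {"rainfall_mm_last_30d": 42, "soil_ph": 6.3,
            "soil_moisture_pct": 18, "region": "Nakuru County, Kenya"}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model works
    messages=[
        {"role": "system",
         "content": "You are an agronomy assistant. Give cautious, "
                    "locally appropriate planting advice."},
        {"role": "user",
         "content": "Given these readings, what should I plant and when?\n"
                    + json.dumps(get_sensor_readings())},
    ],
)
print(response.choices[0].message.content)
```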

This workshop will tackle problems and misconceptions, alongside opportunities arising from the increasing commercialisation of deepfake and generative AI content, and investigate policy landscapes to develop strategies that mitigate risks to public trust, information integrity, and societal stability. Generative AI has the potential to revolutionise data-driven insights across numerous sectors, contributing to economic growth, sustainable development, and improved quality of life; its implementation, however, must weigh ethical considerations such as data privacy and local contexts to ensure its positive impacts are maximised. The panel will discuss the benefits of developments in AI and the use of data. Ethical dilemmas related to the commercialisation of these technologies and to disinformation will then be explored, including creation, intellectual property rights, liability, and regulation enforcement across borders. We will touch upon the struggle to establish continental protection policies and cross-border data flows, and the missing links in infrastructure and capacity that foster misuse of these technologies. Innovative approaches that put these technologies to good use and balance freedom of speech with deepfake-related harm prevention will be highlighted throughout.

Expected Outcomes

The workshop aims to foster international collaboration, drive technological advancements, and inform policy-making processes for the responsible use of generative AI technologies, while addressing disinformation challenges and safeguarding digital integrity. Stakeholder inputs will contribute to ethical and policy guidelines that help policymakers, developers, and content creators promote positive applications and prevent misuse. Identifying challenges and opportunities in detecting misleading information and generative AI content can lead to innovative research approaches, and discussions on commercialisation can inform policies and regulations that promote responsible use, protect individuals' rights, and deter malicious activities involving deepfakes and generative AI. A survey will be conducted and, together with participants' questions, a report on the workshop results will be published on the IGF website for participants and interested parties. The reports, produced in collaboration with the private sector, will be shared with Internet governance and youth communities to raise awareness, share findings, and inspire further action.

Hybrid Format: Both remote and onsite participation is welcomed and highly encouraged in the Q&A session. The onsite and online moderators will work together to ensure the smooth flow of online participation, alternating between onsite and remote contributions so that the online community has opportunities to engage in discussions and raise questions. Online participants can submit questions through the Q&A function of the video conferencing platform, and the online moderator will manage the queue so that their questions are answered by our speakers. Online collaboration tools, such as Mural, will be used for interactive exercises and brainstorming sessions; online polling tools will gather instant feedback from both onsite and online participants; and designated hashtags will encourage discussion and insight sharing on social media, extending the workshop's reach beyond the event.

Key Takeaways

International Collaboration and Ethical Guidelines: International collaboration is crucial to establishing ethical guidelines and policies for the responsible use of generative AI technologies. By working together, stakeholders can harness the potential of these technologies for positive applications in various fields while addressing challenges related to misinformation and data integrity.

Innovative Approaches for Responsible AI: Innovative interdisciplinary approaches and research are needed to improve the prevention, detection, verification, and moderation of generative AI content. These efforts can help mitigate the risks associated with deepfakes and generative AI, promote responsible data use, and contribute to a more secure and trustworthy digital environment.

Call to Action

We urge governments and international organizations to prioritize the development and implementation of ethical guidelines and policies for the responsible use of generative AI technologies. This includes fostering collaboration between stakeholders from various sectors to promote positive applications, prevent misuse, and protect individuals' rights.

We call upon researchers, academics, and innovators to focus their efforts on advancing interdisciplinary approaches and research to enhance the prevention, detection and verification of generative AI content. By fostering innovation and collaboration across fields, we can develop effective tools and strategies to combat the spread of manipulated content, safeguard digital integrity, and promote trustworthy information in the generative AI age.

Session Report

Lights, Camera, Deception? Sides of Generative AI | IGF 2023 WS #57 REPORT

The discussion about generative AI revealed several important points about how this technology can be harnessed for the benefit of society. The speakers emphasized the need to make generative AI technology more accessible and affordable, especially in rural areas, where internet connectivity can be a challenge and affordable hardware and software platforms are crucial. This accessibility issue is particularly relevant in East Africa and some parts of Asia.

Another key takeaway from the discussion is the importance of designing generative AI solutions with the end users in mind. The speakers provided an example of an agricultural chatbot that failed because it couldn't understand the language used by local farmers. This highlights the need to consider the local context and the preferences of the people who will ultimately use these AI solutions.
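
One minimal sketch of the human-centred fix this example points to, assuming nothing about the actual chatbot discussed, is to detect the user's language before answering and fall back gracefully when it is unsupported. The SUPPORTED set and the answer_in() handler below are hypothetical.

```python
# Sketch: route a message by detected language instead of assuming English.
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make detection deterministic
SUPPORTED = {"en", "sw"}  # languages our hypothetical bot can handle

def answer_in(lang: str, text: str) -> str:
    """Hypothetical downstream handler that replies in the user's language."""
    return f"[{lang}] (reply generated here)"

def route_message(text: str) -> str:
    lang = detect(text)
    if lang not in SUPPORTED:
        # Better to admit the limit than to reply in the wrong language.
        return "Sorry, I don't support that language yet."
    return answer_in(lang, text)

print(route_message("Mvua itanyesha lini msimu huu?"))  # a Swahili farm question
```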

Data sharing was also highlighted as a vital component of generative AI development. The speakers mentioned the work of the Digital Transformation Centre in creating data use cases for various sectors. Sharing data among stakeholders is seen as a way to build trust and promote the development of solutions that can effectively address development challenges. An example of this is the Agricultural Sector Data Gateway in Kenya, which allows private sector access to different datasets.
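
The Agricultural Sector Data Gateway is real, but the sketch below is purely illustrative of what programmatic private-sector access to such a shared-data portal could look like: the endpoint URL, dataset ID, authentication scheme, and response shape are all invented for the example.

```python
# Hypothetical client for a shared-data portal of the kind described above.
import requests

BASE_URL = "https://example-data-gateway.go.ke/api"  # invented URL

def fetch_dataset(dataset_id: str, api_key: str) -> list[dict]:
    resp = requests.get(
        f"{BASE_URL}/datasets/{dataset_id}/records",
        headers={"Authorization": f"Bearer {api_key}"},  # assumed auth scheme
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]  # assumed response shape

# e.g. records a private-sector agritech firm might pull:
# records = fetch_dataset("maize-market-prices", api_key="...")
```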

Public-private partnerships were identified as a crucial element of generative AI development. Both the private and public sectors have their own data, and building trust between these sectors is essential for successful data sharing and AI development. The speakers pointed out that collaboration is essential, with public and private partners working together, as seen in the transport industry where the public sector handles infrastructure while the private sector focuses on product development.

Localized research was also emphasized as necessary to understand regional-specific cultural nuances. It was noted that there is a lack of funding and a shortage of engineers and data scientists in certain regions. Localized research is vital for addressing the specific needs and challenges of those regions.

Transparency in the use of generative AI was highlighted as essential. The speakers used the example of "Best Take Photography," where AI generated multiple pictures that could misrepresent reality. Transparency was presented as crucial to ensuring ethical use and avoiding such misrepresentations.

The need for more engineers and data scientists, as well as funding, in Sub-Saharan Africa was stressed. Developing the capacity for these professionals is crucial for advancing generative AI in the region.

Public awareness sessions were also deemed necessary to discuss the potential negative implications of generative AI. The example of "Best Take Photography" was used again to illustrate the risks of generative AI in creating false realities.

Government-led initiatives and funding for AI innovation, particularly in the startup space, were presented as essential. The Startup Act in Tunisia was cited as an example of a government initiative that encourages innovation and supports young innovators in AI. It was argued that young people have the ideas, potential, and opportunity to solve societal challenges using AI, but they require resources and funding.

Lastly, the discussion highlighted the potential risks of "black box" AI, where algorithms cannot adequately explain their decision-making processes. This lack of transparency can lead to the spread of misinformation or disinformation, underscoring the need for transparency in how AI models make decisions.
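
To make the contrast concrete, here is a minimal, self-contained sketch (not from the session) of an inspectable alternative to a black box: a shallow decision tree whose complete decision rules can be printed and audited. The dataset is a standard toy example; the point is the transparency, not the task.

```python
# Sketch: a model whose decision-making process can be fully inspected,
# unlike an opaque "black box" deep model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction traces back to explicit, human-readable rules:
print(export_text(model, feature_names=load_iris().feature_names))
```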

In summary, the conversation about generative AI underscored the importance of addressing various challenges, including accessibility, affordability, human-centered design, data sharing, public-private partnerships, collaboration, localized research, transparency, capacity development, public awareness, government initiatives, and the risks associated with opaque AI models. These insights provide a roadmap for leveraging generative AI for positive impact while mitigating potential pitfalls.