IGF 2023 WS #11 Responsible Innovation & Ethical Use of Generative AI

Subtheme

Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning

Organizer 1: Joy Wathagi Ndungu, Carl Zeiss AG
Organizer 2: Dominika Janus, NA
Organizer 3: Noha Abdel Baky
Organizer 4: Rebecca Ryakitimbo, Mozilla/ksgen

Speaker 1: Joy Wathagi Ndungu, Private Sector, Western European and Others Group (WEOG)
Speaker 2: Noha Abdel Baky, Civil Society, African Group
Speaker 3: Dominika Janus, Civil Society, Eastern European Group
Speaker 4: Bolutife Oluyinka Adisa, Civil Society, African Group
Speaker 5: Nuttall Mark, Technical Community, Asia-Pacific Group

Moderator

Joy Wathagi Ndungu, Private Sector, Western European and Others Group (WEOG)

Online Moderator

Noha Abdel Baky, Civil Society, African Group

Rapporteur

Dominika Janus, Civil Society, Eastern European Group

Format

Round Table - 60 Min

Policy Question(s)

A. How can we ensure that ChatGPT and other Generative AI technologies are developed ethically while protecting against potential harms to individuals and society?
B. What policies and practices should be put in place to ensure transparency and accountability in the development and deployment of Generative AI technologies, particularly with regard to issues of bias, privacy, and security?
C. How can we promote a more inclusive approach to the development and deployment of Generative AI technologies, particularly with regard to ensuring that these technologies are accessible and beneficial to a wide range of stakeholders, including marginalized communities and developing countries?

What will participants gain from attending this session? Participants in the workshop on "Responsible Innovation & Ethical Use of Generative AI" will gain a deeper understanding of the ethical challenges posed by these technologies and their potential impact on society. They will learn about best practices and potential solutions for addressing these challenges and promoting the responsible use of these technologies. They will also have the opportunity to engage in dialogue and exchange perspectives with experts and stakeholders from a range of sectors, gaining valuable insight into the diverse viewpoints and experiences related to these issues. Participants will leave with new knowledge, tools, and strategies that they can apply in their own work to promote the ethical development and deployment of AI and emerging technologies.

Description:

Large Language Models (LLMs) have transformed natural language processing, making it possible to build effective AI tools for a wide variety of uses. However, their impact raises urgent issues that demand attention and coordinated effort. This workshop will give experts, researchers, policymakers, and industry representatives a forum for roundtable discussion of the following key issues:
1. Fact-checking and Accountability: The growth of LLMs has led to an increasing volume of fabricated information, which significantly harms the credibility of information sources, public figures, and governments. This session will examine the repercussions and dangers of spreading incorrect information, emphasizing the value of fact-checking and the creation of accountability mechanisms.
2. Intellectual Property: LLMs such as GPT-4 may infringe intellectual property rights, since their training corpora can include copyrighted material. This session will cover the ethical and legal ramifications of using copyrighted information as training data for language models.
3. Reducing Bias in AI-Generated Content: Bias is a problem that extends beyond biased training data and algorithms. This session will examine how AI tools can reinforce the biases, culture, and values of the broader social environment in which they are used. Participants will discuss the value of auditing, evaluation, and transparency in identifying and correcting biases, and will explore strategies for promoting inclusion and fairness in AI-generated material.
4. Privacy & Cybersecurity: The widespread use of LLMs raises significant privacy and cybersecurity concerns. We will discuss the potential dangers and vulnerabilities of LLMs, including user consent, data privacy, and the possibility of malicious use. Discussion will center on protecting private data and putting effective cybersecurity measures in place to reduce the risks LLMs pose.

Expected Outcomes

1. A summary report of the workshop proceedings and key insights, which can be shared with a wider audience and used to inform future policy and practice in this area.
2. A set of best practices and potential solutions for promoting the responsible development and deployment of ChatGPT and other generative AI technologies, which can be shared with relevant stakeholders and used to guide future policy and practice in this area.
3. Potential follow-up events or processes, such as additional workshops, webinars, or policy briefings, to continue the dialogue and collaboration on these critical issues.
4. We hope this workshop will spark a broader public debate on these issues, additional educational and research initiatives, and stronger legislative action, underpinned by multidisciplinary and multi-stakeholder engagement. It is time for AI providers to take up their responsibility.

Hybrid Format: 1. We will have both an onsite and an online moderator, who will oversee all aspects of moderation. They will be in sync and in contact at all times.
2. We will use polling, chats, and Q&A time-blocks, and the overall roundtable format will ensure that everyone can participate.
3. Tools: Miro Board, Slido