Time
    Thursday, 12 October 2023, 04:30–06:00 UTC
    Room
    WS 1 – Annex Hall 1
    Subtheme

    Artificial Intelligence (AI) & Emerging Technologies
    ChatGPT, Generative AI, and Machine Learning

    Organizer 1: Takeshi Komoto, Google Japan
    Organizer 2: Jim Prendergast
    Organizer 3: Samantha Dickinson

    Speaker 1: Christian von Essen, Private Sector, Western European and Others Group (WEOG)
    Speaker 2: Jenna Manhau Fung, Technical Community, Asia-Pacific Group
    Speaker 3: Neema Iyer, Private Sector, African Group
    Speaker 4: Luciana Benotti, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 5: Lucia Russo, Intergovernmental Organization

    Moderator

    Takeshi Komoto, Private Sector, Asia-Pacific Group

    Online Moderator

    Jim Prendergast, Private Sector, Western European and Others Group (WEOG)

    Rapporteur

    Samantha Dickinson, Technical Community, Western European and Others Group (WEOG)

    Format

    Round Table - 90 Min

    Policy Question(s)

    How will AI be trained to recognize and desexualize content, and what safeguards might be put in place to ensure that algorithms are not biased or discriminatory? What measures are needed to protect freedom of speech and prevent censorship? How will search engines ensure that the desexualization of content does not inadvertently perpetuate harmful gender stereotypes or reinforce existing power imbalances?

    What will participants gain from attending this session?

    Participants can expect to learn about the latest advancements in AI and machine learning, including how AI bias is being identified and mitigated, as presented by experts from industry, academia, and international organizations. They will also learn about AI and machine learning techniques that help mitigate bias in search results. International norms and standards and academic research on inclusive AI will be discussed, as will the importance of transparency in search algorithms and how AI and machine learning can help increase transparency and provide more reliable results. Participants will have the opportunity to ask questions and engage in discussions that, we hope, will lead to a deeper understanding of how AI and machine learning are revolutionizing search engines and of the impact this can have on businesses and consumers alike.

    Description:

    AI algorithms are trained using data that may reflect various kinds of bias, including gender bias and other forms of discrimination. Algorithms may replicate those biases in their outputs and recommendations, leading to AI systems that perpetuate harmful gender stereotypes and discriminate against women, people of color, and other marginalized groups. Such biases can include the oversexualization of search results, which can expose children to inappropriate content. Addressing these concerns is of great importance. Generative AI systems can be trained to identify and correct such biases by diversifying the training data, incorporating ethical principles, continuously updating and refining the algorithm, and providing transparency (a minimal illustration follows the agenda below).

    Proposed Agenda

    Welcome and Session Goals: Charles Bradley (Adapt)

    Opening Remarks:
    Bobina Zulfa (Pollicy) on her research into Women and AI
    Lucia Russo (OECD AI Observatory) on the OECD AI Principles and ways to ensure AI plays a positive role in closing the gender gap
    Christian von Essen and Emma Higham (Google) on how Google engineers identified biases and reversed a trend of showing oversexualized search results

    Interactive Discussion: Moderator Charles Bradley (Adapt); panelists, including Jenna Fung (Asia Pacific Youth IGF), and the audience explore potential uses for this type of AI training and the key policy questions

    Conclusion: Quick summaries and a discussion of potential paths forward
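    To make the "diversifying the training data" step concrete, here is a minimal Python sketch, not material from the session itself: it audits how groups are represented in a labeled dataset and derives per-example weights. The records and the "group" field are hypothetical.

        from collections import Counter

        # Hypothetical labeled examples; "group" is an illustrative
        # demographic attribute, not a real dataset schema.
        examples = [
            {"text": "query a", "label": 1, "group": "women"},
            {"text": "query b", "label": 0, "group": "women"},
            {"text": "query c", "label": 0, "group": "men"},
            {"text": "query d", "label": 0, "group": "men"},
            {"text": "query e", "label": 0, "group": "men"},
            {"text": "query f", "label": 1, "group": "nonbinary"},
        ]

        # Audit: per-group counts and positive-label rates can reveal skew,
        # e.g. one group disproportionately tagged with the harmful label.
        counts = Counter(ex["group"] for ex in examples)
        positives = Counter(ex["group"] for ex in examples if ex["label"] == 1)
        for group, n in counts.items():
            print(f"{group}: {n} examples, positive rate {positives[group] / n:.2f}")

        # Mitigate: weight each example inversely to its group's frequency so
        # under-represented groups carry equal total weight during training.
        n_groups = len(counts)
        weights = [len(examples) / (n_groups * counts[ex["group"]]) for ex in examples]

    Inverse-frequency weighting is only one option; collecting more representative data is usually preferable when feasible.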

    Expected Outcomes

    We place great importance on making this an interactive session and will follow the presentations with an audience dialogue to gather insights on the effectiveness of leveraging AI to counter bias, including gender bias. We hope this discussion will identify potential avenues to explore in follow-up work.

    Hybrid Format: All participants, whether onsite or online, will be required to log into Zoom so we can manage the queue in a neutral manner. Our onsite and online moderators will work closely together to ensure that questions and comments from both groups are addressed. We recognize the unique challenges that remote participants may face, such as time zone differences, technical limitations, and differences in communication styles. To address these challenges, we will encourage our speakers to use clear and concise language, avoid technical jargon, and provide contextual information during the session. Furthermore, we will explore the use of polling tools such as Mentimeter or Poll Everywhere to gather feedback and questions from both onsite and online participants in real time. By taking these steps, we aim to create an inclusive and engaging environment that caters to the needs of all participants, regardless of their location or mode of participation.

    Key Takeaways

    There are many ways that AI is already being used to increase gender equality.

    The meeting provided insights into the practical applications of AI in promoting gender inclusivity, while also raising important questions about ethics, biases, and the impact of AI on diverse communities.

    Call to Action

    Multiple efforts have developed valuable sets of principles around AI, and many touch on gender. It's time to move to implementation.

    Session Report

    The discussion highlighted the importance of continuously evaluating AI systems to ensure they behave fairly across different identity groups, and noted that even small-scale data can be sufficient to identify biases and take corrective action.
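    The report does not say how these evaluations were carried out; as a hedged sketch of the kind of disaggregated audit it alludes to, the Python below computes a per-group false positive rate from a handful of labeled predictions. The groups, labels, and predictions are all hypothetical.

        from collections import defaultdict

        # Hypothetical (group, true_label, predicted_label) records from a
        # small audit set; even tiny samples can surface large gaps.
        records = [
            ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
            ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
        ]

        by_group = defaultdict(list)
        for group, y_true, y_pred in records:
            by_group[group].append((y_true, y_pred))

        # Per-group false positive rate: does the model wrongly flag content
        # associated with one group more often than with another?
        for group, pairs in by_group.items():
            negatives = [pred for true, pred in pairs if true == 0]
            fpr = sum(negatives) / len(negatives) if negatives else 0.0
            print(f"{group}: false positive rate {fpr:.2f} ({len(pairs)} examples)")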

    Emma Higham and Christian von Essen from Google spoke about how AI is being used to make search results safer and more inclusive. They discussed Google's mission to organize the world's information and make it universally accessible and useful, and emphasized the need to understand user intent and prevent users from encountering explicit or graphic content when it is not relevant to their queries. They explained how AI models such as BERT and MUM are used to improve search results and address biases in AI systems, and also mentioned using AI to identify users in crisis and connect them with assistance.
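    The speakers did not share implementation details. As a hedged sketch of the general technique (a BERT-style classifier gating explicit results on query intent), the snippet below uses the Hugging Face transformers pipeline API; the checkpoint name, label set, and threshold are hypothetical placeholders, not Google's production system.

        from transformers import pipeline

        # Hypothetical checkpoint: a classifier fine-tuned to decide whether
        # a query explicitly seeks adult content. Not a real model name.
        classifier = pipeline("text-classification", model="example-org/query-intent")

        def filter_results(query: str, results: list[dict]) -> list[dict]:
            """Drop explicit results unless the query itself shows explicit intent."""
            intent = classifier(query)[0]  # e.g. {"label": "EXPLICIT", "score": 0.93}
            if intent["label"] == "EXPLICIT" and intent["score"] > 0.9:
                return results  # intent is unambiguous; leave results unfiltered
            return [r for r in results if not r.get("explicit", False)]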

    Bobina Zulfa, a researcher with Pollicy, discussed the intersection of gender and AI in the African context. She highlighted the challenges African women face in accessing and using AI technologies and the need to redefine the notion of "benefit" in technology development. She emphasized the importance of fostering liberatory AI that empowers communities and questioned the impact of AI technologies on privacy and consent.

    Lucia Russo from the OECD discussed the OECD AI Principles, non-binding guidelines for trustworthy AI development adopted in 2019. They combine values-based principles, covering sustainable development, human-centered values, transparency, safety, and accountability throughout the AI life cycle, with recommendations to governments. The first two principles promote inclusion and the reduction of inequalities.

    Lucia highlighted various policies and initiatives that countries have implemented to promote gender equality and inclusivity in AI. In the United States, for instance, efforts are under way to improve data quality and increase the participation of underrepresented communities in AI and machine learning. The UK promotes inclusivity and equity in AI development through programs like Women in AI and Data Science. The Netherlands and Finland have developed guidelines for non-discriminatory AI systems, especially in the public sector. The OECD has also launched a catalog of tools for trustworthy AI, including tools to reduce bias and discrimination.

    Jenna Manhau Fung, from the Youth IGF, shared insights from the youth perspective. She noted that younger generations are generally positive about AI's potential to address gender bias and promote inclusivity. Jenna emphasized the importance of engaging coders in policy-making and the need for international standards to guide AI development. She also described a personal experience in which a small-scale writer's content did not appear in Google search results because of certain policies, highlighting the need for inclusivity for all content creators.

    In response to a question about fine-tuning and diverse training data, Christian von Essen and Emma Higham explained that addressing biases in AI models involves both fine-tuning and improvements to the initial training data. The process is cyclical, and feedback from users plays a crucial role in making the models more inclusive.
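    The speakers described this cycle only at a high level; the self-contained Python sketch below gives one possible shape for it, with every function a hypothetical stand-in rather than Google's actual pipeline.

        def fine_tune(model: dict, data: list[str]) -> dict:
            # Stand-in for a fine-tuning step: record how much data shaped the model.
            return {**model, "seen": model.get("seen", 0) + len(data)}

        def collect_feedback(model: dict) -> list[str]:
            # Stand-in for gathering user reports of biased or unsafe outputs.
            return ["user-flagged example"]

        def mitigation_cycle(model: dict, data: list[str], rounds: int = 3) -> dict:
            # Fine-tune, gather user feedback, fold it back into the data, repeat.
            for _ in range(rounds):
                model = fine_tune(model, data)
                data = data + collect_feedback(model)
            return model

        print(mitigation_cycle({}, ["seed example"]))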

    Overall, the conversation addressed both the challenges and opportunities of using AI to promote gender inclusivity, and the importance of policies, principles, and independent audits in this context.