
IGF 2025 Lightning Talk #245: Advancing Equality and Inclusion in AI

    Organiser
    Council of Europe
    • Sara Haapalainen, Council of Europe, Hate Speech, Hate Crime and Artificial Intelligence
    • David Reichel, EU Fundamental Rights Agency
    Speakers
    • Ivana Bartoletti, Vice President and Global Chief Privacy and AI Governance Officer at Wipro
    • David Reichel, Head of Data and Digital Sector, EU Fundamental Rights Agency
    • Bjørn Berge, Deputy Secretary General of the Council of Europe
    Onsite Moderator
    Sara Haapalainen, Senior Project Officer, Council of Europe, Hate Speech, Hate Crime and Artificial Intelligence
    Rapporteur
    David Reichel, EU Fundamental Rights Agency
    SDGs

    Goal 5: Gender Equality
    Targets 10.3, 16.3, 16.6 and 16.b


    Targets: The digital world is progressively expanding, connecting societies and individuals, engaging more of their time and responding to more of their needs. Ensuring respect for human rights when combating discrimination, and assessing the potential role of AI in relation to these phenomena, is crucial. Combined, these efforts contribute to the emergence of a culture of peace and cooperation, conducive to social and economic development. Protecting human rights in the use of AI systems has a definite impact on ensuring gender equality, eradicating poverty, providing quality education, reducing inequalities, building sustainable cities and communities, ensuring durable peace, effective justice and strong institutions, and harnessing partnerships for the SDGs.

    Format

    The set-up enables all participants to hear the views of speakers working with the groups most often affected by discrimination and bias, including women affected by AI technologies such as biased algorithms that reinforce gender inequalities or deepfakes, and to raise questions. A few interactive questions (e.g. using Mentimeter) will motivate participants to reflect on the risks of AI discrimination and on how to redress it from the perspective of the groups affected.

    Duration (minutes)
    20
    Description

    The session will present measures that can be taken to operationalise safeguards and remedies against discrimination in the use of AI systems, engage with the groups most at risk, and equip human rights supervisory bodies.

    The Study on the impact of artificial intelligence systems, their potential for promoting equality, including gender equality, and the risks they may cause in relation to non-discrimination, adopted in 2023 by the Council of Europe's Gender Equality Commission (GEC) and Steering Committee on Anti-discrimination, Diversity and Inclusion (CDADI), and numerous other studies have highlighted the risks that AI systems pose to equality, including gender equality, and non-discrimination, online and offline, in a variety of sectors. These range from employment, through the online targeted distribution of job adverts, to the provision of goods and services in both the public and private sectors, such as online loan applications, public security policies and the fight against fraud. For example, the report Bias in Algorithms from the EU Agency for Fundamental Rights (FRA) shows how easily speech detection algorithms can become biased against certain groups.
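
    The kind of experiment the FRA report describes can be illustrated with a common bias-testing technique: feeding a speech detection model pairs of otherwise identical sentences that differ only in an identity term, and comparing the scores. The sketch below is a minimal illustration of that technique, not the FRA's actual methodology; toxicity_score is a hypothetical stand-in for whatever model is under review.

    # Minimal sketch of identity-term substitution testing for a speech
    # detection model. toxicity_score is a hypothetical stand-in: in a
    # real audit it would wrap the model under review.

    TEMPLATES = [
        "I am a {} person.",
        "My neighbours are {}.",
    ]

    IDENTITY_TERMS = ["Christian", "Muslim", "Jewish", "gay", "straight"]

    def toxicity_score(text: str) -> float:
        """Placeholder scorer, used only so the sketch runs end to end."""
        # A naive keyword rule that mimics a model which has learned to
        # associate certain identity terms with offensiveness.
        flagged = {"muslim", "jewish", "gay"}
        return 0.9 if any(word in text.lower() for word in flagged) else 0.1

    # Identical sentences should receive near-identical scores; large gaps
    # between identity terms indicate bias against the flagged groups.
    for template in TEMPLATES:
        print(template)
        for term in IDENTITY_TERMS:
            print(f"  {term:<10} -> {toxicity_score(template.format(term)):.2f}")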

    The groups most affected by bias in AI systems are very often the same groups and individuals at risk of discrimination in society. These groups, as well as women, also experience structural inequality and struggle to participate meaningfully in the forums that develop, deploy and regulate new digital technologies and promote inclusion in AI.

    The Council of Europe (CoE) and the European Union (EU) are jointly building the capacity of equality bodies and of representatives of the groups most affected by discrimination, including by biases in AI systems. The EU and CoE want to provide a platform at the IGF to share ideas on how to ensure sufficient safeguards against discrimination and access to effective remedies.

Session Report

    The session presented measures that can be taken to operationalise safeguards and remedies against discrimination in the use of AI systems, engage with the groups most at risk, and equip human rights supervisory bodies.

    The use of AI and algorithms may perpetuate, reinforce and even create inequality and discrimination. This can happen for a variety of reasons, such as biased or unrepresentative training data leading to racial profiling in policing or to higher error rates in face recognition technologies for certain groups.
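
    One way such bias becomes measurable is by disaggregating a system's error rates by group, for example comparing false positive rates, since errors concentrated on one group are exactly the pattern described above. A minimal sketch follows, assuming simple labelled predictions; the records are invented for illustration.

    from collections import defaultdict

    # Minimal sketch: disaggregating false positive rates by group.
    # The records are invented; a real evaluation would use the system's
    # actual predictions and ground-truth labels.
    records = [
        # (group, ground_truth_positive, predicted_positive)
        ("group_a", False, False), ("group_a", False, False),
        ("group_a", False, True),  ("group_a", True, True),
        ("group_b", False, True),  ("group_b", False, True),
        ("group_b", False, False), ("group_b", True, True),
    ]

    def false_positive_rates(rows):
        flagged = defaultdict(int)    # ground-truth negatives wrongly flagged
        negatives = defaultdict(int)  # total ground-truth negatives per group
        for group, truth, predicted in rows:
            if not truth:
                negatives[group] += 1
                if predicted:
                    flagged[group] += 1
        return {group: flagged[group] / negatives[group] for group in negatives}

    # A large gap between groups (here 0.33 vs 0.67) signals that errors
    # fall disproportionately on one group.
    print(false_positive_rates(records))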

    To avoid discrimination while allowing AI to increase efficiency and automate tasks, equality needs to be promoted in and through the use of AI, informed by the views of those impacted. To make this possible, there needs to be regulation, guidance on how to apply that regulation in practice, and strong oversight.

    Key points:

    • AI systems pose risks to equality, including gender equality, and to non-discrimination, online and offline, across public and private sectors.
    • European legal instruments, such as the Council of Europe and European Union laws on AI, digital services, data protection and anti-discrimination, provide a good roadmap for making AI systems consistent with human rights, including non-discrimination and the promotion of equality.
    • National human rights bodies and civil society organisations are key partners in addressing discrimination, advancing equality and supporting victims of algorithmic discrimination.

    Call to action:

    1. Use the tools in existing European regulations to assess the impact of AI on human rights and equality, ensuring better and more sustainable AI systems that ultimately build more just societies.
    2. If prevention fails and discrimination by AI systems occurs, ensure access to remedies to restore rights and provide justice for those discriminated against. Human rights institutions, equality bodies and CSOs can play an important role here by informing victims of their rights and by requesting testing of AI systems, which requires that they have access to the systems' documentation.
    3. Provide practical guidance on how to carry out human and fundamental rights impact assessments of AI systems before they are deployed.
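
    As one hypothetical illustration of what such practical guidance could operationalise, a pre-deployment assessment can be recorded as structured data that oversight bodies are able to review. The field names below are assumptions drawn from the themes of this session (affected groups, disaggregated testing, documentation access, remedies), not an official Council of Europe or EU template.

    from dataclasses import dataclass

    # Hypothetical sketch of a pre-deployment rights impact assessment
    # record; the fields are illustrative assumptions, not an official
    # Council of Europe or EU template.
    @dataclass
    class RightsImpactAssessment:
        system_name: str
        intended_purpose: str
        affected_groups: list[str]
        training_data_representative: bool
        error_rates_disaggregated: bool  # tested per affected group?
        documentation_available: bool    # accessible to oversight bodies?
        remedy_channel: str              # how victims can seek redress

        def ready_for_deployment(self) -> bool:
            # A deliberately simple gate: every safeguard must be in place.
            return (self.training_data_representative
                    and self.error_rates_disaggregated
                    and self.documentation_available
                    and bool(self.remedy_channel))

    assessment = RightsImpactAssessment(
        system_name="loan-screening-model",
        intended_purpose="online loan application triage",
        affected_groups=["women", "ethnic minorities"],
        training_data_representative=False,
        error_rates_disaggregated=True,
        documentation_available=True,
        remedy_channel="national equality body complaint procedure",
    )
    print(assessment.ready_for_deployment())  # False: the data gap must be fixed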