Time
    Tuesday, 10th October, 2023 (00:45 UTC) - Tuesday, 10th October, 2023 (02:15 UTC)
    Room
    WS 5 – Room B-2

    Organizer 1: Timea Suto, International Chamber of Commerce
    Organizer 2: Rose Payne, International Chamber of Commerce
    Organizer 3: Meni ANASTASIADOU, International Chamber of Commerce

    Speaker 1: Prateek Sibal, Intergovernmental Organization, Intergovernmental Organization
    Speaker 2: Owen Larter, Private Sector, Western European and Others Group (WEOG)
    Speaker 3: Thomas Schneider, Government, Western European and Others Group (WEOG)
    Speaker 4: Clara Neppel, Technical Community, Western European and Others Group (WEOG)
    Speaker 5: Maria Paz Canales, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 6: Nobuhisa NISHIGATA, Government, Asia-Pacific Group
    Speaker 7: Suzanne Akkabaoui, Government, African Group
    Speaker 8: Karine Perset, Intergovernmental Organization, Intergovernmental Organization

    Moderator

    Timea Suto, Private Sector, Eastern European Group

    Online Moderator

    Rose Payne, Private Sector, Intergovernmental Organization

    Rapporteur

    Meni ANASTASIADOU, Private Sector, Western European and Others Group (WEOG)

    Format

    Round Table - 90 Min

    Policy Question(s)

    1. To what extent are existing policy frameworks fit to address recent developments in AI? What is the role of different stakeholders in moving existing AI principles into practice?
    2. Data disparities compound the challenges of data access and availability that underpin a multitude of socioeconomic processes, including aspects crucial to trusted AI adoption. What policy frameworks are needed to address the data divide and foster human-centric, ethical, fair and trustworthy AI solutions?
    3. How can AI reshape development and mitigate the risks that impede innovation and inclusive growth? What building blocks and elements are needed to harness this potential?

    What will participants gain from attending this session? The workshop will offer a comprehensive overview of existing policy and regulatory frameworks intended to guide the development and implementation of ethical, fair and trustworthy AI, and will foster a discussion on how such efforts are addressing the continuous development of AI technologies. Discussions will bring together different perspectives on the cross-cutting considerations to take into account when moving from principles to action in implementing guidelines for AI governance. Participants will also gain an understanding of the role of international collaboration in facilitating the responsible development and deployment of AI and harnessing its full benefits for inclusive and sustainable growth. Finally, the workshop will allow for an exchange of best practices, identify commonalities across existing approaches and explore opportunities for future collaboration.

    Speakers

    1. Ms Suzanne Akkabaoui, Advisor to the Minister on Data Governance, Ministry of Communication and Information Technology, Government of Egypt
    2. Ms Maria Paz Canales, Head of Legal, Policy and Research at Global Partners Digital, Civil Society
    3. Dr Seth Center, Deputy Envoy for Critical and Emerging Technologies, U.S. State Department
    4. Ms Gallia Daor, Policy Analyst, Organisation for Economic Co-operation and Development (OECD)
    5. Mr Owen Larter, Director, Public Policy, Responsible AI, Microsoft
    6. Dr Clara Neppel, Senior Director, IEEE European Business Operations, Institute of Electrical and Electronics Engineers (IEEE)
    7. Mr Nobu Nishigata, Director, Computer and Data Communications Division, Telecommunications Bureau, Ministry of Internal Affairs and Communications (MIC), Japan
    8. Mr Thomas Schneider, Ambassador and Director of International Relations, Federal Office of Communications (OFCOM), Switzerland, and Chair of the Council of Europe Committee on Artificial Intelligence (CAI)
    9. Mr Prateek Sibal, Programme Specialist, Digital Innovation and Transformation, United Nations Educational, Scientific and Cultural Organization (UNESCO)

    Description:

    AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions across sectors as varied as healthcare, transportation, education and agriculture, addressing a wide range of development needs. However, its design, development and deployment pose challenges, often surrounding the role of humans, transparency and inclusivity. These risks, if left unaddressed, can impede innovation and progress, undermining the benefits of AI deployment and the trust necessary for the adoption and use of AI technologies. Recent advances in, and the overwhelming popularity of, user-friendly generative AI have dramatically amplified its power to spur both beneficial and harmful change.

    As AI continues to evolve, it is essential to strike a balance between realising its full potential for socioeconomic development and ensuring that it aligns with globally shared values and principles that foster equality, transparency, accountability, fairness, reliability, privacy and a human-centric approach. International multistakeholder and multilateral cooperation is necessary to ensure the effective uptake and implementation of such principles. To this end, many organizations have developed governance frameworks to guide the development and implementation of ethical, fair and trustworthy AI, including the OECD’s principles on AI, UNESCO’s recommendations on AI ethics, G7 and G20 declarations, the EU’s AI Act, ongoing work at the Council of Europe to draft a convention on AI, human rights, democracy and the rule of law, and the African Union’s efforts to draft an AI continental strategy for Africa, in addition to numerous principles and guidelines developed by various stakeholders.

    This workshop will bring together a diverse panel of speakers to discuss whether existing principles are fit to address recent developments in AI, and to uncover the barriers in moving from the adoption of AI principles and guidelines to their implementation, with a focus on encouraging more widespread adoption of existing recommendations globally.

    Expected Outcomes

    The session will aim to identify the tangible contributions needed to move from principles to practice in global AI governance. Speakers will explore the actions required to establish collaboration for the deployment and stewardship of trustworthy AI, building on existing principles and recommendations.

    Hybrid Format

    Prior to the session: to ensure that speakers and attendees get the most out of the session, regardless of their chosen mode of participation, the organizers will use the session’s page on the IGF website and social media channels to share preparatory material and kick-start a dialogue. A preparation call will be organised for all speakers, moderators and co-organisers so that everyone has the chance to meet and prepare for the session.

    During the session: the moderators are experienced in animating multistakeholder discussions and will complement each other in integrating onsite and online speakers and attendees. Onsite participants will be encouraged to connect to the online platform to stay informed and engage with discussions in the chat.

    Following the session: the moderators will encourage participants to use the IGF website and social media channels to share further comments and contribute to the session’s report.

    Key Takeaways
    The session discussed existing AI guidelines, principles and policies. Speakers shared lessons learned from their development, adoption and implementation. They stressed the need for comprehensive, inclusive, interoperable and enabling policies that help harness AI’s developmental and socio-economic benefits, operationalize globally shared values and remain flexible enough to be adapted to local specificities and cultural contexts.
    Call to Action
    Set comprehensive, inclusive and interoperable AI policies by meaningfully involving all stakeholders across all levels of the AI policy ecosystem: responsible development, governance, regulation and capacity building.
    Session Report

    Introduction and key takeaways

    AI, as a general-purpose technology, carries the potential to enhance productivity and foster innovative solutions across a wide spectrum of sectors, including healthcare, transportation, education and agriculture. However, its design, development and deployment introduce challenges, especially regarding the role of humans, transparency and inclusivity. Left unaddressed, these risks can hamper innovation and progress, jeopardising the benefits of AI deployment and undermining the crucial trust required for the widespread adoption of AI technologies.

    Against this backdrop, the session convened a diverse panel of speakers who explored the current state of play in developing AI governance frameworks. The speakers recognised the progress of international efforts to guide the ethical and trustworthy development and deployment of AI. Notable examples referenced included the OECD’s AI Principles, UNESCO’s recommendations on AI ethics, declarations from the G7 and G20, the EU’s AI Act, the NIST AI Risk Management Framework, ongoing efforts at the Council of Europe to draft a convention on AI with a focus on human rights, democracy, and the rule of law, the African Union’s endeavours to draft an AI continental strategy for Africa, and a plethora of principles and guidelines advanced by various stakeholders.

    As AI continues to evolve, panellists suggested the need to harness its full potential for socioeconomic development, while ensuring alignment with globally shared values and principles that prioritise equality, transparency, accountability, fairness, reliability, privacy, and a human-centric approach. The panellists agreed that achieving this equilibrium will necessitate international cooperation on a multistakeholder and multilateral level. A key takeaway was the necessity for capacity building to enhance policymakers' awareness and understanding of how AI works and how it impacts society.

    The session recognised, among other things, the merits of self-regulatory initiatives and voluntary commitments from industry, applauding their agility and effectiveness in advancing responsible AI development. The discussions advocated for interoperability of governance approaches and suggested that any policy and regulatory framework must be adaptable and grounded in universally shared principles. This approach was seen as vital to navigate the ever-evolving technology landscape and to accommodate the unique demands of various local contexts and socio-cultural nuances.

    Overall, comprehensive, inclusive, and interoperable AI policies were recommended, involving all stakeholders across the AI policy ecosystem to promote responsible development, governance, regulation, and capacity building.

    Call to action

    There was a resounding call for comprehensive, inclusive, and interoperable AI policies. Such policies, drawing upon the collective expertise of all stakeholders within the AI policy ecosystem, can foster responsible development and effective governance of AI, as these technologies continue to evolve. This holistic approach would pave the way for a more responsible and sustainable AI landscape.