IGF 2023 WS #349 Searching for Standards: The Global Competition to Govern AI

Time
Tuesday, 10th October, 2023 (04:30 UTC) - Tuesday, 10th October, 2023 (06:00 UTC)
Room
WS 1 – Annex Hall 1
Subtheme

Artificial Intelligence (AI) & Emerging Technologies
ChatGPT, Generative AI, and Machine Learning
Future & Sustainable Work in the World of Generative AI

Organizer 1: Michael Karanicolas
Organizer 2: Natalie Roisman, Georgetown Institute for Technology, Law & Policy
Organizer 3: Chinmayi Arun, Information Society Project at Yale Law School

Speaker 1: Kyoko Yoshinaga, Civil Society, Asia-Pacific Group
Speaker 2: Tomiwa Ilori, Civil Society, African Group
Speaker 3: Simon Chesterman, Government, Asia-Pacific Group
Speaker 4: Carlos Affonso Souza, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 5: Gabriela Ramos, Intergovernmental Organization
Speaker 6: Courtney Radsch, Civil Society, Western European and Others Group (WEOG)

Moderator

Michael Karanicolas, Civil Society, Western European and Others Group (WEOG)

Online Moderator

Natalie Roisman, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Chinmayi Arun, Civil Society, Asia-Pacific Group

Format

Round Table - 90 Min

Policy Question(s)

1. How are different centers of power, including countries, approaching AI governance, and how are these models of governance influencing each other through processes of regulatory diffusion and convergence?
2. What role does industry play, and what role should it play, in shaping the rules governing AI? What role should governments play? How do these roles interact?
3. What values and structures should guide emerging international approaches to AI governance, and how can we ensure that these approaches serve the best interests of the public at large, especially disempowered communities?

What will participants gain from attending this session?

We intend to use this session, first, to introduce participants to the contours of global AI governance debates, including by informing them about new rule-making trends across the major regulatory blocs and their external impacts. Second, we hope to use the session to foster a conversation about the impacts of AI beyond the privileged minority, allowing participants to share their experiences and network with potential collaborators on these issues. Ultimately, we hope that this event will serve as a baseline for mobilizing global action in support of a broader, multistakeholder approach to AI governance, and will strengthen existing international platforms and mechanisms for collaboration.

Description:

The expansion of AI has led to various models of governance emerging for these new technologies, including national and subnational legislation, agency-based regulation, and industry standards. While no single state exercises global authority over the development and deployment of AI, powerful centers of governance have begun to emerge, particularly through American, EU, and Chinese efforts to influence the development of transnational standards, as well as through initiatives such as UNESCO's Recommendation on the Ethics of AI and the OECD's Artificial Intelligence Principles. However, the dominance of advanced economies in early standard-setting means that harms from AI are predominantly viewed through the lens of stakeholders in rich-world contexts, at the expense of impacts that tend to be emphasized elsewhere. The purpose of this session is to foster a conversation about the future of AI governance, and about how to ensure that standards development appropriately reflects the needs and concerns of all of the world's people. The discussion will focus on the interplay between the Global North and Global South in AI development, regulation, and distribution, including through procurement relationships, mechanisms of transparency and accountability for transnational harms, and the cross-border impacts of legislative development and standard-setting initiatives. It will also be an opportunity for participants to share information about regional efforts in this space and to connect with one another.

Expected Outcomes

This session will take place as part of a series of programming and projects at Yale, Georgetown, and UCLA, along with partner networks, focused on supporting the development of global AI governance frameworks. These include Yale ISP's Majority World Initiative and AI Governance Series, and UCLA ITLP's Generating Governance series. The discussions will also be featured at the upcoming UNESCO Global Observatory on AI Ethics and disseminated through UNESCO's networks. Throughout the session, a collaborative document will gather questions, comments, observations, and other remarks made during and after the workshop, so that they can be integrated into follow-up reporting. We hope to draw on the IGF's convening power to add new voices and perspectives to this debate. To this end, this session will also engage in network-building, enabling the participants and organizers to discover new voices and include them in future policy conversations.

Hybrid Format

The structure of this round table is intended to foster an inclusive conversation and promote constructive exchanges between onsite and online participants. All of the sponsors have extensive experience managing hybrid events with diverse, globally distributed audiences, and we are confident that we can provide an inclusive and engaging environment. Prior to the event, preparatory documents will be circulated to speakers, and at least one coordination call will be held to ensure that each speaker is prepared and confident in their interventions. Planned interventions will be time-capped in order to permit fruitful exchanges with other attendees. Following these opening interventions, we intend to open the floor for discussion and Q&A, to allow for as many perspectives and as much commentary as possible.

Key Takeaways

Different jurisdictions and organizations are taking diverse approaches to AI governance. These regulatory processes are critically important insofar as they will likely establish our framework for engaging with AI’s risks and harms for the coming generation. There is a pressing need to move expeditiously, as well as to be careful and thoughtful in how new legal frameworks are set.

Learning from previous internet governance experiences is crucial. While discussions around how to prevent AI from inflicting harm are important, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market.

Call to Action

A global governance mechanism that coordinates and ensures compatibility and interoperability between different layers of regulation will be needed. But successful regulation at a national level is indispensable. National governments will be responsible for setting up the institutions and laws needed for AI governance. Regional organizations, state-level regulation, and industry associations are all influential components of this ecosystem.

While industry standards are important, public-oriented regulation and a wider set of policy interventions are needed. As for self-assessment and risk-assessment mechanisms, while they may become critical components of some AI regulatory structures, they may not succeed without sufficient enforcement to ensure that they are treated as more than just a box-checking exercise.

Session Report

Searching for Standards: The Global Competition to Govern AI

IGF Session 349, Workshop Room 1

A global competition to govern AI is underway as different jurisdictions and organizations are pursuing diverse approaches ranging from principles-based, soft law to formal regulations and hard law. While Global North governments have dominated the early debate around standards, the importance of inclusive governance necessitates that the Global Majority also assumes a role at the center of the discussion.

A global survey reveals diverse approaches. The European Union's AI Act is the most prominent process, but it is far from the only model available. Singapore is amending existing laws and deploying tools to help companies police themselves, while Japan is combining soft-law mechanisms with some hard-law initiatives in specific sectors based on a risk-assessment approach. The US is considering a similar approach as it begins to create the frameworks for a future AI governance structure. In the Global South, several countries in Latin America and Africa are actively engaging in the AI discussion, with a growing interest in a hard-law approach in the latter.

These regulatory processes are critically important insofar as they will likely establish our framework for engaging with AI’s risks and harms for the coming generation. There is a pressing need to move expeditiously, as well as to be careful and thoughtful in how new legal frameworks are set.

Different layers of regulation and governance strategies will be critical for creating a framework that can address AI's risks and harms. First, because AI operates across borders, a global governance mechanism will be needed to coordinate and ensure compatibility and interoperability between different layers of regulation. While this global layer could take the form of soft law (a declaration or recommendation), a more binding instrument, such as a convention, could also be an effective way to coordinate AI regulation globally. From UNESCO's perspective, a UN-led effort is critical, not only because AI requires a global multilateral forum for governance but also because unregulated AI could undermine other priorities like sustainable development and gender equality.

Despite the need for global governance, successful regulation at the national level is essential. Ultimately, national governments are responsible for setting up the institutions, and enacting and enforcing the laws and regulations, needed for AI governance. Regional organizations, state-level regulation, and industry associations are all influential components of this ecosystem.

At the same time, industry standards may be the most common form of regulatory intervention in practice. In this context, industry should consider developing responsible AI as part of its corporate social responsibility or environmental, social, and governance (ESG) practices, including through guidelines or principles on AI's uses and development, codes of conduct, or R&D guidelines, since the way in which companies develop and use AI will have an enormous impact on society as a whole.

However, while it is important to raise industry standards and to involve companies in the regulatory process, we need to understand the incentives that drive these companies to work with AI: primarily, to monetize human attention and to replace human labor. For that reason, industry standards should be complemented by public-oriented regulation and a wider set of policy interventions.

As for self-assessment and risk-assessment mechanisms, while they may become critical components of some AI regulatory structures, they may not succeed without sufficient enforcement to ensure that they are treated as more than just a box-checking exercise. It is also important to keep in mind that different approaches may only be relevant to specific subsets of AI, such as generative or decision-making AI systems.

Small countries will face unique challenges in implementing effective AI governance. Small nations that regulate too quickly could end up pushing innovation elsewhere, but these countries can establish a role in AI governance if they strategize and align with like-minded initiatives or systems. While the design and deployment of AI are concentrated in the largest countries, AI will also be heavily used in other parts of the world. Focusing on regulating not only the creation but also the use of AI applications will be key to the success of AI regulatory and governance efforts in small countries. Over the past decades, machine learning research and application have moved from public to private hands. This may be a problem, especially for small countries, as it compresses the timeline from idea to deployed application while limiting the ability of governments to restrict potentially harmful behavior.

Learning from previous Internet governance experiences is crucial to AI governance. While we usually talk about AI as if it were brand new, we need to break down what exactly we mean by AI into its components, including infrastructure, data, cloud computing, computational power, and decision-making.

We also need to consider the impact of market power on AI governance, given that AI tends toward monopoly: it depends on large datasets, enormous computational power, advanced chips, and more. While discussions around how to prevent AI from inflicting harm are important, and efforts to prevent exploitation are necessary, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market and various parts of the AI tech stack. AI governance should break down the various components of AI, such as data, computational power, cloud services, and applications, in order to redress monopolization and crack down on anti-competitive practices. This includes confronting consolidation in the cloud market and exploring public options. Regulators could also examine the possibility of forcing the handful of big tech firms that provide the leading AI models to divest their cloud businesses, or of eliminating the conflict of interest that incentivizes them to self-preference their own AI models over those of rivals.

Another valuable lesson comes from the early regulation of the Internet with respect to copyright and freedom of expression. We need to consider to what extent the modeling of personal data protection laws and the current debate on platform liability should influence the debate on regulating AI's potential harms. The first generation of Internet regulation left us with much stricter enforcement of intellectual property rights than of privacy rights, a legacy of the harms that were deemed most urgent decades ago, and one that persists to this day. This should be instructive about the need to be deliberate and careful in how harms are understood and prioritized in the current phase of AI regulation, as these technologies continue to proliferate.