IGF 2023 – Day 2 – AI that We Want – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR:  What we have been seeing is that, for maximizing the positive aspects of artificial intelligence in society, there is a fundamental need to agree on responsible and ethical principles for its development.  What I bring today as a proposition is that part of the discussion should also be mindful of and grounded in what comes from the international framework of human rights, as an essential element for guiding this task of thinking about technical standards, legislation at large, and other kinds of voluntary guidance that can be developed for the governance of artificial intelligence.

In that sense, I have been working with my organisation in proposing a number of principles that are linked with how we can make the international human rights framework applicable to the conversation on artificial intelligence, and we have come up with five principles.  The first one is that any kind of governance discussion on AI should be grounded in the approaches that have been developed for the promotion and protection of human rights law, as we have been discussing for the rounds of new and emerging technologies that have preceded artificial intelligence.

The second one is to develop a risk‑based approach to the design, development and deployment of artificial intelligence.  I am sure that part of the discussion with the panelists today will be unpacking what we mean by risk and what we mean by those assessments.

The third one is free, open and inclusive design, deployment and use of artificial intelligence technology.  And then we also invite you to think about how we need to ensure transparency in the design, development and deployment of AI, and to hold the designers and deployers of artificial intelligence accountable for risks and harms.

Without more from me on this proposition, we want to hear from every one of the panelists.  We will start the first round of comments, talking particularly about the two policy questions that have been proposed by the organizers of the session in the MAG, which invite us to think, in this first round of our conversation, about how the global processes that have been discussed around the governance of artificial intelligence connect at the international level but also at the local level, with a view to regulating or guiding governance for the greater good.

My first invitation to intervene will be for Ms. Alisa Ema, and I want to ask you how we can move from ethical principles and guidance to operational artificial intelligence coordination that is effective in policy across jurisdictions.

>> ALISA EMA: Thank you, Maria, for the kind introduction.  I'm honored to be on this panel.

And for the question, you are right, it's very important, especially when we consider how the technology actually develops: it is not only designed, but also developed, deployed and used across borders.  For example, here in Japan, the normal case is that we may use the core AI model from, say, the United States, the deployer is a Japanese startup company, the vendor is yet another company, and the user doesn't know who created this AI model or who is in the long supply chain.

In that sense, it is really important to have transparency when we actually look at this AI lifecycle, and not only transparency, but it needs to be interoperable.  This framework interoperability is mentioned in the G7 communiqué in 2023, but somehow this is kind of a tricky word.

What do we mean by framework interoperability?  What is the difference from technical standards?

What I interpret by that word is that we need to know that each country, each organisation, or maybe even each company has its own policy and its own way of assessing its AI systems, evaluating the risks and making impact assessments.  However, the legal system is different from country to country, so each country's discipline should be respected; otherwise this global discussion won't work.

Also, each country has its own context.  For example, in Japan, we actually have guidelines towards AI utilization and AI development, and not so much binding ones.  So we actually go with non‑binding guidelines.  Also, most Japanese companies actually look to their public reputation, and that kind of soft discipline really works, but that might be the Japanese case.

Another country or another organisation may have a different approach.  So it's really important to know which companies or which countries have their own risk management or risk assessment frameworks, and with that transparency, exchanging actual cases is really important.  And I really appreciate that Maria raised the discussion on risk‑based assessment.

What do we mean by risk?  We can discuss high level risk or low level risk.  For example, when we consider a facial recognition system used at the airport or at the entrance of a building, the usage is totally different, but it may be the same facial recognition system.

So we need to look into the context, and we need to take into account who is actually using it, who is benefiting from it, and who bears the risk from it.  So exchanging cases is really important, and in that way I think we can turn all of these abstract principles into a more living discussion by making good practices.  I will stop here.

>> MARIA PAZ CANALES LOEBEL: Thank you very much, Alisa.  I want to continue that line of conversation by inviting Clara to jump in with her experience about how technical standards can account for these challenges, the ethical challenges and the principles that have been posed, but also international human rights standards.

>> CLARA NEPPEL: Thank you for having me here.

So IEEE is an old organisation.  We were founded about 140 years ago, co‑founded by Edison.  So why would an inventor like Edison, who invented electricity, engage with others?  He could have done it alone.  I think it was the realization that in order to be accepted by society, you have to manage risks, and one risk at that time was clearly safety.  We started actually by dealing with safety, and since then we have been dealing with safety and security, but now with AI, we see that we actually need to redefine risk.

We have to move away from the more traditional dimensions of risk like safety and security and incorporate human rights, as you just mentioned.  The question is how to do that.  We started very early on, and it is a bottom‑up approach.  We are the largest technical organisation in the world with more than 400,000 members worldwide, and this issue started to come up at an individual level early on: the issues around what you just mentioned, bias and so on.  The question was how to deal with them.  So we started an initiative called Ethically Aligned Design, which identified the issues and tried to manage them with standards, but also by engaging with regulators.

Now, when it comes to standards, we moved to so‑called socio‑technical standards.  What are they?  They range from value‑based design to common terminology.  Value‑based design, what does it mean?  It means taking the values of the stakeholders in that context you just mentioned into account, and those will be different values.

Of course, human rights are always important, but you have different ways of dealing with them.  And how you prioritize these values and actually translate them into system requirements, giving that step‑by‑step methodology to developers, proved to be a very effective standard.

Common terminology: what do we mean if we say transparency?  It can be a completely different thing to a developer than to a user.  One of the standards is about defining different levels of transparency.  Bias, the same thing: we all want to eliminate bias from systems, but we actually need bias, for instance, in healthcare.  We need to take into account the differences in symptoms for men and women, because they react differently, for instance, when they have a heart attack.

So context is very important.  We also complemented these standards with a certification, an ethical certification system, and we tried it out with public and private actors.  What is very important, as I think was mentioned before, is to start building up capacity in terms of training, because we need this combination of technical expertise and expertise in social and legal matters and so on.

So, as part of the certification process, we have a competency framework which defines the skills necessary for certifiers, and we have started working also with certification bodies, to build up this ecosystem which needs to be there in order to make this happen.

This bottom‑up approach needs to be complemented by a top‑down approach, the regulatory frameworks, and we engaged with the Council of Europe, the European Union and the OECD very early on, from the principles but also on how to operationalize this regulation.

One example is now with the AI Act, which basically mandates certain standards, where we also engage with the European Commission to see how we can map, let's say, the regulatory requirements to standards.  There is a report from the Joint Research Centre that you can download.  Thank you.

>> MARIA PAZ CANALES LOEBEL: Thank you very much, Clara.  We will move now to hear a little bit from James, who represents the private sector perspective and experience.  Particularly following the flow of this conversation, Clara mentioned the values, the definitions of the values, but also the definitions of the terms of the frameworks that we will be using.

So in that sense my provocation question is: aside from the Government efforts, the multilateral and technical standards efforts we have been hearing about, what are the current efforts that the private sector is conducting to reflect some of these challenges of finding ways to address responsible AI governance, and how do those align with the conversation we are having here around ethical principles, but also human rights protection?  So let us know what your take is on that.

>> JAMES HAIRSTON: I think one of the places we began is to listen and sort of understand, as the tools that we build are used in novel ways and as we explore the new capabilities, learning from expert communities and academics and standards bodies, Governments around the world who are evaluating and testing: what are the new harms we haven't anticipated?  We know that we won't know them all ahead of time, and we try to take an iterative approach and really explain what we are building and how we are building, through tools like system cards and inviting open red teaming and evaluation of our tools.

But really understanding what is it that we don't know.  Where are the places, in which languages, are our tools not performing well?  Where are the places where definitions that have been discussed need stronger, concrete backing, so that we know that as we are building these international conversations we are speaking the same language and able to cut through, whether it's marketing by the private sector or areas that have yet to be fully defined, so that we are building from a common understanding.

I think another important role for the private sector that we really take seriously at OpenAI is just capacity building, and building capacity for research teams of all types across civil society, human rights organisations and Governments to be involved in this testing, to tell us what's working, what's not, capabilities that they would like to see or ones that are not working.

And so this is something that's going to be iterative.  We are clear when we do our disclosures at the release of new tools about all of the areas we are trying to solve for.  There are important research questions about the future of things like hallucinations, and understanding how to solve for watermarking questions across text or different types of video or different types of outputs across LLMs.

So our contributions I think begin with admitting what we don't know, admitting the places where there is a lot of work to do, trying to help with capacity building to work on safety and evaluation of these systems, and really supporting work around the world by the public sector, private sector, civil society and academia to get the future of these tools right and ensure the conversations we are having around the world turn into concrete action that ensures the long-term safety of artificial intelligence.  Thank you.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  I turn to Dr. Center, who represents the U.S. Government in this conversation, and I am curious, particularly given the pressure that is coming from the broader public on Governments to turn to action in relation to harnessing the power of artificial intelligence for good.  What is right now the perspective and the take of the U.S. Government on the most pressing challenges in the global governance of AI, and how do those relate to the actions and the collaborative work that you at the domestic level are taking, also with the private sector and with other Governments, to effectively address the challenges that you identify as the most pressing ones?

>> SETH CENTER: Pressure is an interesting word to characterize the situation we are all in, not just Governments.

I think part of the reason why all of us are here and excited about AI, and somewhat scared as well, is because there is a sense we are in a transformative era.  Given that the IEEE was founded by Thomas Edison, I will start with a quote; I wasn't planning on it, but it's my favorite Thomas Edison quote.  He was asked at the turn of the century, about 20 years after the light bulb was developed, what the effect of electricity was going to be on the world, and he said electricity holds the secrets that are going to reorganize the entire life of the world.  You could apply that to artificial intelligence.  The problem with that analogy is that, at least in the United States, it took several decades to get to a regulatory framework for electricity, and I think no one here thinks we can wait several decades to get to a governance framework that includes regulation for AI, because of the pressure.

So with that being said, first of all, I commend an organisation like the IGF for bringing together a diverse group of multi‑stakeholders like this to have a conversation about how to accelerate the pace of governance.  I thank Japan in particular for hosting us and leading the G7 Hiroshima process, and we saw the effort and pressure and the way in which speed can create results through that process.  From that basis, let me make four points, and I have about 30 seconds to make each of the points.

Point one is perspective, for all of us, on AI governance.  I think we have a solid foundation based in a multi‑stakeholder approach to developing the principles for AI: the OECD principles from 2019, and the G20 principles as well.  Within the United States, in the past couple of years, we have developed two frameworks that are extremely important, and they touch on the human rights and value components of this as well, both of which were developed with extensive consultation across the multi‑stakeholder community.  One is the AI Bill of Rights and the other is the National Institute of Standards and Technology's risk management framework.  That had over 240 consultations over 18 months with the multi‑stakeholder community to develop a framework for how to apply safety and security to developing AI.

That's the kind of perspective we have to take to the challenges we have.  Why then, if we have such a rock solid foundation, are we having this conversation today?  The obvious answer is that GPT has created a new socio‑cultural and political phenomenon, a new moment.  In part it is the Sputnik moment all of us were waiting for, when we were talking about AI several years ago, to grip all of us into action.

But in part it's because it's raised all kinds of profound questions about safety, security, risk.  And so we have to take it on in a new and substantial way.  And that moves us into two problems or challenges.  One is it intensifies and accelerates all of our fears that emerge from the digital era.

It intensifies all of our hopes and the opportunities that come from a technological revolution.  We need to get the balance right.  I think all of us accept that, and that requires moving quickly.  For the United States, speed meant we had to balance moving toward a regulatory framework eventually with getting governance action now.

Our choice in the interim was to move towards what were called voluntary commitments, which touch on a framework of safety, security and trust, and which hold companies accountable for a whole series of efforts: to welcome more transparency, to protect security, and to ensure that systems work as intended.  That's basically our overarching architecture for how we are approaching this era.

We need clarity and speed, and we have to act in this era of pressure.

>> MARIA PAZ CANALES LOEBEL: Thank you very much, Dr. Center.  I will move in this part of the conversation to Thobekile Matimbe.  Particularly, a reaction from your side: where is the Global South perspective in all of these conversations?  We have heard of alternative paths for dealing with artificial intelligence governance that are usually led by Global North Governments or Global North organisations from the private sector, from academia, from industry.  So what are the fundamental challenges and opportunities to build effective artificial intelligence governance that works for your own institutional and socio‑political context, coming from a country of the Global South, and how do you experience these trends coming from abroad, from the different sectors, the regulatory ones but also the ones related to the different frameworks to address the issues of governance?  Thank you.

>> THOBEKILE MATIMBE: Thank you so much.  It is a pleasure to be here and part of this panel as well.

I will highlight that, from a perspective of the Global South, and I will maybe narrow it down to the African continent, in terms of regulatory frameworks we are at a place where we are trying to come up with artificial intelligence strategies, and what we have are data protection laws that are just a drop in the ocean when it comes to automated decision making or algorithmic decisions.  Looking at that kind of context, we are facing a situation where we are trying to catch up with regards to how we can ensure the protection of human rights when we are looking at artificial intelligence, the design processes as well as the use.

Because of that, you will find that it's important that that context is well understood and well centred.  When we are looking at artificial intelligence design and usage, we have to appreciate that there are definitely centers of power.  What I mean by centers of power is, when we are looking at who has the knowledge of the technology, who has the technical design, sort of like ownership.

You would see that within the Global South we are lagging behind, and because of that there is a need for inclusivity of voices from the Global South in whichever processes are there, even at a global stage, and there is a need for inclusivity not just of civil society, but also when we are looking at representation of Member States and their participation.

I think the Internet Governance Forum presents a good opportunity for a multi‑stakeholder discussion around AI and any other global framework that can come out of the global scene, and that is something that can be leveraged.

Looking at the regional level, maybe just taking it a level down from the global scene, you would find that from a regional perspective we have the African Commission on Human and Peoples' Rights, which came up with Resolution 473 in 2021 urging states within the African continent to develop strategies, mechanisms and legislative provisions that ensure that rights are protected when looking at the use of AI in the context of human rights.

And to date, since 2021, I would say, as I highlighted in my earlier remarks, that we have not actually had African states keeping abreast with developing policies and laws that ensure that rights are safeguarded, but the real lived realities remain within the Global South, where we find that there is a lack of trust in the use of AI because of inadequate policies.

We see that surveillance targeting human rights defenders remains a major concern.  We do see that discriminatory practices that come with the use of AI are still a lived reality on the continent, so it's something that needs to be addressed from a global perspective, and understanding that context, I will emphasize again, is something that is really important.

>> MARIA PAZ CANALES LOEBEL: Thank you so much.  Now we have finished our first round of comments and answers from the panelists in this session.  So I open the floor for questions from the audience in the room, but I also look to my colleague to know if there are any questions posted online.

>> MODERATOR: Yes, the chat is exploding, but only through my comments.  People are very shy still, so, beautiful crowd out there, use this opportunity to ask all of those questions you don't dare to ask on AI and governance; these are the right people to answer your questions.

There is one question though, and it is really interesting because it's posed by a target group you often forget.  It's a 7‑year‑old boy, Faru, from Bangladesh, and basically he is asking: how can we ensure that AI regulation and governance at the multilateral level is inclusive and child‑centred so that children and young people can benefit from AI while being protected from its potential harms?

>> MARIA PAZ CANALES LOEBEL: Thank you.  Are some of the panelists particularly motivated to take on that question?  I think that question puts at the centre the issue of a specific vulnerable community.  So when we design policy and governance around artificial intelligence, this is an example of how children can be considered, but there are also other specific communities.

So how will we design for being inclusive, for accommodating the particular needs of vulnerable groups in a considerate way that effectively provides governance that works for all of these different cases?  Clara, go ahead.

>> CLARA NEPPEL: I think these are things which can be addressed on a voluntary level.  We see examples like Lego implementing quite a lot of measures to ensure that in their online presence and upcoming virtual environments children are protected.  But here especially, I think it is important to complement these voluntary efforts with regulatory requirements, and one example is actually the U.K. Children's Code.  We all agree that the human rights of children need to be protected, but it is another question how that is implemented online.  The U.K. code is an example of a regulatory framework setting up, let's say, the requirements, but when it comes to operationalizing it, it was one of our standards, on age‑appropriate design, which gives clear guidance to implementers on what it means to implement this code.

So both regulation and standards already exist, and this is just one example; it is also being discussed in other countries.  So that is one example of how, let's say, standards and regulation can interact to protect children online, and other human rights as a matter of fact.

>> MARIA PAZ CANALES LOEBEL: I don't know if the other panelists have a reaction.  If not, we move to the next one.

>> JAMES HAIRSTON: The only thing I would add is just to base a lot of the work on top of the research that's being done by child safety experts around the world.  There are just so many great institutions, you mentioned the Lego example, but also academics and organisations that are looking at usage patterns and understanding how children and any number of vulnerable groups interact with these technologies, the harms, or their expectations and how they diverge.  Prior to working at OpenAI, I worked in virtual and augmented reality, and, again, in safe settings, whether it's doctors or research teams, you can really go deeper, and we don't base the work on our understanding as adults.  And this is, again, whether we are talking about children using the tools, elderly populations, or vulnerable communities who may have less access, so that it's research‑based and evidence‑based, and I think in these settings it's possible for organisations to really work with the community we are trying to build the safety tools and systems around.

So I don't think there is anything revolutionary about that idea, but these organisations really do such important work.  I think supporting them and advancing their work and putting their research front and centre in the development of policy is essential.

>> MARIA PAZ CANALES LOEBEL: Definitely.  Thank you very much for that answer.  Khristen, do we have another question?  We can take one from here.

>> AUDIENCE: Metropolitan university in Canada.  While AI systems are technological in nature, as many of us know they involve a lot of human input, and we have seen media reports that the kind of labor involved in creating AI tools in the Global South is quite a bit different from the kind of labor involved in creating AI tools in the western world.  So, in governing the creation of AI, how do we think about international labor and work standards regulations?

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  Would anyone from the panelists like to react to that?

>> JAMES HAIRSTON: I'm happy to begin.  The importance of protecting the labor that's involved in the production of these tools is essential, and the work that's been done over the years in advancing the rights of workers in other sectors has to be applied in artificial intelligence: making sure people are compensated properly, and that when there are abuses or harms they are addressed.  So, again, this is just an area where everyone is going to have to continue to be vigilant, whether companies inside the private sector or monitoring groups, and just making sure that we are listening and understanding the production, understanding where voices aren't being heard, or where actors at any level of the labor and employment chain in the production and development of tools are acting improperly.

And so I think if there are places where existing law and policy can't address those harms and we certainly should be vigilant for places where there are gaps, we have to talk about them openly and constructively and sort of move quickly to make sure there aren't communities and types of work that's going on that is abusive or harmful.

>> MODERATOR: There are three questions.  One colleague, a Professor from Kabul University in Afghanistan, is asking: could we use generative AI in Developing Countries like Afghanistan in the education system?  I would say why not, but maybe you have a brighter answer.  And there are two other questions, one asking about the accountability aspect: given that AI is not fully understood, and to balance AI values and risks, how should we deal with the accountability of AI?

One last question refers to ethics: for the moment AI is providing outputs based on human input data, but in time it may be processing its own data.  Is it ethically acceptable for machines to decide on human matters based on non‑human input, and what is there to ensure that things will not slip out of our control?  So a question on the education system and a wide question on ethics.

>> MARIA PAZ CANALES LOEBEL: It will be a challenge, but maybe I can ask Dr. Center to take the question of accountability: how can we build effective accountability mechanisms?

>> SETH CENTER: Every question comes down to accountability.  Skepticism around governance frameworks that are voluntary comes back to the question of accountability.  I think even a hard law framework comes down to accountability, if the challenge is figuring out what to measure in order to apply a hard law.  From our approach, as we think about accountability in the context of a voluntary framework, at least as a bridge to something harder, I think it comes back to what you were talking about in part, which is that there is a reputational cost that comes along with signing up to voluntary commitments and, James, I think you will probably have some views from OpenAI's point of view as well on what accountability means for a so‑called voluntary commitment.

I think insofar as voluntarism and accountability are linked to technical action, you can talk about accountability in meaningful ways because it can be measured, and I think the measurement question is extremely important to dive down below the abstract level of principles, where I think there is an increasing amount of skepticism that principles can achieve accountability.

>> MARIA PAZ CANALES LOEBEL: Thank you.  We have one last question, but now I'm going to close the queue because we need to move to the next segment.

>> AUDIENCE: Hello, everybody, my name is Anna.  I am Chair of Youth IGF Nepal.  While IGF 2023 is being bombarded with topics on AI, we are still struggling to connect people; 40% of the population in Nepal and the APAC region is still unconnected.  And if we look at those who are connected, they are the new entrepreneurs of the Internet.

My question is: while developed nations are developing AI and these technologies, nations like Nepal are fighting to counter the disinformation and misinformation that are being fueled by Generative AI, which became so popular in 2022 with the use of, you can name it, ChatGPT.  So in this scenario, how are developing economies like these kinds of nations helped in entering the digital era?

Another thing is we discuss this in multi‑stakeholder platforms, but these platforms are not capable enough to actually shape the policies, because when it comes to policies, the multilateral system influences policies across the world.  So how do developing economies co‑create a digital ecosystem that is inclusive for all?  Thank you.

>> MARIA PAZ CANALES LOEBEL: I think it's a complex question to answer in a few minutes, and we will need answers from different panelists.  I don't know if, James, you have a take with regard to the jurisdictional challenges posed by the idea of implementing these governance mechanisms for companies that offer services in different contexts?

>> JAMES HAIRSTON: I will start with two projects that I think begin to get at solving for this but, again, are just the beginning.  We recently launched a grant program for democratic inputs to AI, to give communities, nations and different domains the possibility of trying to surface, you know, what are the unique values and the types of outputs, responsive to local context, that a community expects from AI systems, acknowledging that those may diverge, and beginning to figure out what a process that is locally and regionally community‑driven looks like and how we can build on that.

I think that's going to be one important stepping stone.  Another is that we announced what's called our red teaming network, and the security and safety testing that is very specific to Nepal and to nations and communities around the world, again encouraging safety and security testing and submitting evaluations.  You mentioned mis- and disinformation: if there are types of linguistic failures or ways that large language model tools are attacked or vulnerable, we want to know.  We want to really hear where we are falling short, or where perhaps a gap in understanding or a particular type of action is producing results that are especially harmful.

I think that that practice, building that community of practice, and submitting those evaluations and growing the community that is doing that in different countries, in different regions across sectors is going to be important.

>> MARIA PAZ CANALES LOEBEL: With that intervention, we will move to the next segment of the conversation, which is particularly linked to the role of the IGF.  We are all sitting in this room and participating in this event on Internet Governance, and there is a particular value to the conversation that happens in this space and has been happening for 18 years, shaping digital technologies and shaping the form and use of the Internet.  So on that note, what we want to question during this part of the conversation, with the intervention of the speakers, is the role of the IGF as a convener and facilitator of artificial intelligence governance action.

And for that conversation I will turn first to Clara, and I will ask about the experience of IEEE working on developing voluntary guidance.  What is your perspective on the opportunities and limitations of self‑regulatory efforts to ensure responsible AI governance, and what could the IEEE experience contribute to the role of the IGF in facilitating this international AI governance discussion?

>> CLARA NEPPEL: So we see our standards being adopted; once a standard is out, we as a standard‑setting organisation do not necessarily know who has adopted it.  We just had a meetup last week, and I was surprised to see how many people actually said they know the standards and have implemented them in different projects, both private as well as public actors.

So one example I would like to bring here, speaking of children, is a UNICEF project which really used the value‑based design approach to change the initial design of a system to find talent in Africa, from, let's say, a closed system that was not transparent to something that the young people actually have agency over.  So this is proof of concept that, by having certain methodologies and taking the value expectations of the community into account, you end up with a different system.

I wanted to discuss the incentives of the voluntary engagements.  What are the incentives of adopting a standard?  One, and we have the City of Vienna, which is one of the pilot projects for the certification: if you are discussing with public authorities, one of their incentives is trust.  They want the citizens to trust their services.

And you probably also have a lot of private actors who have the same incentive.  If we are talking about C‑level people, of course, there is also the discussion of: what is in it for me?

And we know from any business course that one way of making money, well, two ways: one is to minimize cost, and the other is to differentiate or focus.  And we actually saw at the meetup investors who are interested in this standard, because one outcome of doing value‑based design is that you end up with a better value proposition.

I think this is an important way of moving away from only the risk‑based approach, to thinking about what kind of measures of success we want to have in the future.  Do we want to have performance, which is, of course, important for us from the technical community, or profit, which is, of course, important for the private sector?  How do we incorporate the people and the planet dimensions?

And I think that this is something we have to discuss collectively.  And the other incentive is, of course, to satisfy regulatory requirements; we see that now with the AI Act a lot of people are interested in the standards because they anticipate that this will be required.  Here is also something where I want to very much stress that there is a limit to voluntary measures.  So we as a technical organisation, and I think private actors too: the business of private actors is not to maintain human rights and the rule of law.

Of course, they should, we all should be part of it and we should comply with it, but I think that there are certain red lines which have to be decided in a democratic process.

And the only way to get to, let's say, a common approach is this kind of feedback mechanism.  If we want to have something like global governance, we need to establish lines of communication, to have a standardized way of reporting incidents, to have benchmarking and testing facilities, and, here in Kyoto, to have something like the Intergovernmental Panel on Climate Change, which has an advisory role to Governments, to say where it is that we actually need to do something, and to see if new regulation is needed or if regulation needs to be adapted.

As a matter of fact, we are just doing this with the Council of Europe on one of the applications of artificial intelligence, immersive realities; we are working with them to see what the possible impacts of these new technologies are on human rights.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  I think you bring up a super relevant point about the role of incentives.  I will be eager to hear the take of the other speakers when they intervene on that.  It is a challenge for everyone to identify and align with those incentives in order to move the process in the right direction.

But for now, I will turn to Alisa Ema and ask you: in your experience as a social science researcher, whose activities include facilitating dialogue with various stakeholders, what are some of the challenges in facilitating multi‑stakeholder engagement with AI governance that you can share with us, and how can this learning be effectively integrated into the role that the IGF needs to play as facilitator of these discussions?

>> ALISA EMA: Thank you very much.  I think the role of the IGF is really important, and I wanted to share just one episode.  In the previous session, which I organized, I invited a friend who is in a wheelchair; however, they could not come.  So I actually brought a robot that they could operate remotely from their home.  That kind of thing is really important, and to be more inclusive, I think we need to involve as many people as we can, and this also connects to the 7‑year‑old boy's question.

It is really important to be connected to all of the other stakeholders and other people facing challenges, and those technologies actually empower people so that they can virtually come to these places, make their presentations and interact with others.  But on the other side, what we discussed in our session is that although we have that kind of system, it is vulnerable, because if a crisis happens or the power goes down, those kinds of technology are not actually available.

So I think it's really important, when we are discussing AI governance, that we also put humans into these kinds of systems, because humans are the most flexible, or maybe the most resilient, able to adapt to all of those crisis situations and, how to say, to be more creative and more active.

So what I expect of the IGF forum is that we can talk about AI governance, but we need to include humans; human‑centred is the very key word, and I guess this kind of topic really needs to repeatedly come up in this kind of discussion, like democracy, the rule of law, or human rights.

So with this kind of topic shared with people, it will then be connected or lead to collaboration.

The last thing I would like to expect of the IGF is that all of the interesting and important things are being discussed in this panel session; however, maybe the next step and action are discussed outside this room, over lunch, in in‑person discussion, or maybe just while having tea.  So that kind of forum is really important, and because the IGF forum is open to everybody, we can talk with the person just next to us.

So it's really important.  What I expect of the IGF is to be inclusive, and also this kind of in‑person and informal communication is really important, and I really appreciate that many people came to Kyoto and are also enjoying Kyoto.

>> MARIA PAZ CANALES LOEBEL: Definitely we are enjoying Kyoto.  Thank you very much for hosting us.  I think that this conversation about inclusiveness has so many dimensions.  It has the dimension of the different stakeholders, it has the dimension of the particular situation of vulnerable groups, or groups in vulnerable conditions, which is more appropriate, but it also has a geopolitical and geographic dimension.

On that, I will invite Thobekile Matimbe to react: in that sense, what is generally missing in terms of diversity in the conversation on AI governance, and how can the IGF continue contributing to address that challenge?

>> THOBEKILE MATIMBE: Thank you so much.  I will start by highlighting that I know that a number of colleagues were not able to be here because of visa issues.  When we are talking about inclusion, I think it's something that we need to proactively think about in terms of how we can make sure that we have inclusive processes, but also accessible platforms for those from the Global South specifically.

And just going beyond that, I will highlight that I think within the Internet Governance Forum there is a need for continued engagement with critical stakeholders and a victim‑centred approach to the kind of conversations that happen here, in the sense of having everybody, including vulnerable groups, well represented in the conversations that happen, especially when we are looking at AI.

I'll highlight that an understanding of the global asymmetries is something that it is important to continue to highlight, because we do realize, when we are looking at Global North versus Global South, the different contexts.  I think it's something that I highlighted earlier, the importance of context, and I think my colleague here as well highlighted the aspect of understanding the different contexts that are represented within the Internet Governance Forum.

I think this is something that will continue to shape processes even better, and to ensure that we come up with AI‑focused solutions or resolutions that ensure that no one is left behind when we are looking at fundamental rights and freedoms particularly.  And just to emphasize, I think this is definitely a forum that we continue to leverage with regard to advancing the promotion and protection of fundamental rights and freedoms, but we also need to continue to engage in terms of remediation for victims who are likely to suffer adverse impacts of the design of technology.

And that is something that cannot be overstated.  I'll just round off by highlighting that I think it's critical that we continue to highlight that there is a need to break down the walls.  Earlier I talked about the centers of power when looking at AI, and I think the IGF is a good opportunity to break down the walls that stand between those centers of power and real stakeholder engagement, where all voices are heard and no one is left behind.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  In this same line, Dr. Center, I invite you to react to this very same issue of how to deal with this diversity of realities and the diversity of processes that are ongoing for dealing with it at the national level, at the regional level in some cases, like the European Union that Clara brought up before, but also in the global governance systems and the propositions coming from the UN on creating new bodies for overseeing the governance of artificial intelligence.  How can that be approached from the perspective of a Government that is conducting its own efforts at the domestic level to find the most appropriate way to address the governance of artificial intelligence and to be inclusive in this?  And how can those efforts, and the experience the Government is acquiring in the process, also be shared and contributed in this forum of the IGF, to keep these global artificial intelligence governance discussions connected and interoperable?  Thank you.

>> SETH CENTER: Is the answer yes?

>> MARIA PAZ CANALES LOEBEL: The how?

>> SETH CENTER: Let me rewind just a minute to the question of accessible platforms and walk into how I think the IGF can play a role.  I think if you get to the end of the governance story and you get it all right, you are still left with the question of why we care about AI.  The answer that we believe in the United States, and that I think most people in this audience believe, is that you should employ the most powerful technologies to address the most important problems in the world.

And how do you get powerful AI developers, whether they are companies or Governments, although it's usually companies, to devote time and attention to governing AI responsibly and then to direct it towards addressing society's greatest challenges?

The answer is the multi‑stakeholder community directing them, through conversation and delicate pressure, into thinking about those problems in meaningful ways.  A few weeks ago at the UN General Assembly's high level week, there were a series of events that brought together different parts of the multi‑stakeholder community, the multilateral community and countries to talk about these issues.  The Secretary of State of the United States convened one with a whole series of diverse countries and companies, including OpenAI, and we asked these companies what they were doing to address society's greatest challenges, defined however they wanted to within the context of the SDGs.  And if you open up those conversations and you have them at the UN, you have them in the General Assembly and you have them at the IGF, if you ask questions about impact on labor, if you ask questions about what we are doing to protect children's safety in the AI era, if we ask about inclusive access, it naturally changes the entire conversation.  So, for the young gentleman who asked a question about whether or not the multi‑stakeholder community could make policy, and I think there was a sense of skepticism, I actually am far more optimistic.

Policy is made, at least in democracies, including ours, the United States, by listening to the inputs of everyone.  Our entire architecture in the United States for our AI governance framework was built on listening to the multi‑stakeholder community in a domestic context.  The entire architecture for thinking about the voluntary commitments, the most recent one, included extensive multi‑stakeholder conversations.

This is the way in which Governments and democracies actually formulate policy.  No Government has the hubris to believe, at least the ones I have talked to, that they understand foundation models and Generative AI.  They need the technical community and the standard‑setting bodies to help them.  They need companies and the experts in companies to help them.

They need civil society and Human Rights organisations to help them, and out of the input comes an output, and the output is policy.  And then you need Governments to actually enforce the policies.  That, I think, is actually where we probably have a bigger challenge.  If you take a step back and say how do we ensure accessibility, how do we ensure collaboration, we should encourage the energy in all of the forums, whether it's the U.K. Safety Summit, the G7 Hiroshima process, or the UN's High-Level Advisory Body, because we are at the early stages of the next era of AI and we need all of those conversations at this point in time.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  I turn a somewhat similar question now to the private sector, represented here by James.  You don't have jurisdictional borders in the offering of your services.  You are bound by different regulatory frameworks in different jurisdictions, but you need to deal with the question of artificial intelligence in a way in which you can operate as a company and offer your products and services beyond borders.  So what are the challenges from that perspective, in terms of how you are dealing with the discussions of artificial intelligence governance at these different local, domestic, regional and global levels, and how would bringing some of those challenges to the discussions here at the IGF be useful in addressing them from the perspective of the industry?

>> JAMES HAIRSTON: I will start with the first challenge that comes to mind, which is sort of size.  We are trying to make sure we are in as many of the conversations as we can be in, and in all regions of the world, in every country, cities, states, geographies, there are important discussions.  It's impossible to be in every room, but coming off the recent listening tour we did around the world, we have a great respect for the variance in the needs for these tools and the different constraints that are going to be placed on areas where hard and soft law will differ.  So it is about making sure we are in the right places, that we are listening fully, and that we are providing the right sorts of research and technical assistance.

I think that is one of the threshold challenges: just making sure we are participating in the right way, hearing and learning in the right venues.  Then, from there, I think sometimes there is a discussion about the spectrum: you have these really important short and medium‑term risks, as well as some of the longer term work of ensuring safety for humanity on the road to artificial intelligence, which is seen as a spectrum and sometimes talked about as if you have to make a binary choice of either addressing short to medium‑term harms or looking further out into the future and being focused on building the international and domestic systems to solve for those.

We don't think that's a choice; we have to work on both, and we as the private sector, as a research lab, have to be contributing to those discussions as countries formulate laws, but also on the other side.

That other side is the regulatory conversation, as countries and societies decide how they want to use these tools for good.

So being in enough rooms, contributing the core research and technical understanding, making sure that the transparency and the work that we are doing around our tools is aiding those conversations in as many geographies and for as many communities as possible: it's a challenge, but it's a responsibility.

So, again, we just welcome being in as many of those rooms and as many conversations as we can be.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  I now open the floor again for reactions and comments from the audience inside but also online.

>> MODERATOR: There is a lot going on, Maria.  Let me start with one question from Moka Beri from the Islamic Republic of Iran: could shaping a UN framework on artificial intelligence help to manage its risks?  Do geopolitical conflicts allow this at all?  And what could be the role of the IGF in this regard?  If I may, I would like to seize the opportunity to enlarge the question a little to you, James, because I have the great opportunity of sitting next to you and you being a newbie at the IGF, not you as an individual, but representing OpenAI.  What do you think could be the added value of the IGF when it comes to the discussions right now on regulation of AI and governance, and do you get the impression OpenAI could contribute in future times as well?

So two questions in one.

>> JAMES HAIRSTON: Absolutely.  I think of one of the comments earlier on benchmarking, defining what good looks like.  I think that's going to be important, as much from a technical perspective as in policy development around the world.  So I think there is a really important role for the IGF and international institutions to really harmonize those discussions and say, these are the benchmarks, this is how we are going to be grading our progress.

And that's probably where I would start.  Similarly, to address the first part of the question of where we can build on existing work, I think for a lot of these technologies, and for where we are heading next, it's important to build on the important Conventions and treaties and areas of law that we have in place.

And that's not to say that there won't be new approaches or new gaps, as we have been talking about today, but we don't necessarily need to reinvent the wheel, so we should take the hard work that's been done in areas like human rights and draw on that as we figure out the places where we want to set new standards going forward.

>> MARIA PAZ CANALES LOEBEL: Thank you, James.  I don't know if there are any questions from the room.  I don't see anyone.  There, I'm sorry.

>> AUDIENCE: Hello, everybody.  This is José from the governance lab, for the record.  Thank you for bringing up this crucial issue of whether or how the IGF can help to deal with AI, specifically the governance and regulatory issues.

I think, as this is not the first IGF, we may learn from the past.  You know better than me that we have discussed many times, for more than one decade, the governance and regulation of big data, digital privacy and data governance, and in the end we were not able to reach a framework to deal with those; even on the two sides of the Atlantic Ocean we were not able to reach the same regulatory frameworks and laws.

You can compare the DSA and DMA in Europe with the way the U.S. is dealing with its companies.  So my big question, to add spice to your interesting topic, is: as far as we have not been able to reach a framework to deal with big data, how can we be optimistic about reaching a global framework to deal with AI?  And you know AI is rooted in big data as well.  And the last thought: I have a yes/no question for Mr. James, who is representing the private sector today.

Right now, is there any emergency shutdown procedure in your company?  Like, in case you find that there is an urgent danger coming from your products and your AI models, is there any procedure in place right now for an emergency shutdown or not?  Thank you.

>> JAMES HAIRSTON: I can take that last one.  So, you know, we have harm reporting and we take security reports, and we can turn our tools off by geography.  I think there are probably many layers to that question beyond just on/off access.  But I am happy to follow up and understand the types of shutoffs that you have in mind.

>> MARIA PAZ CANALES LOEBEL: Do you want to react to that?

>> SETH CENTER: Maybe because I have never come to the IGF before, I'm not as downbeat as you.  I think there is a tremendous amount of consensus on AI governance.  I think obviously the challenge of enforcement, and what the regimes look like, may be a bridge too far at a global level, but I don't think that's an existential threat to the value of these conversations or to pursuing the AI governance conversation.

So, for instance, if we were to ask ourselves, moving into a future in which foundation models and Generative AI will likely subsume narrow AI, what are the kinds of safeguards you would want in place in a governance structure?  I think everybody would basically agree you want some kind of internal and external red teaming.

I think you would generally agree that you want information sharing among those who are developing these models.  I think you generally agree that for finished models which are potentially profoundly powerful, you would want some sort of cybersecurity to protect model weights.  I think you generally agree you can't solely trust those developing them to be accountable, and so you would want third party discovery and auditability in some way, shape or form.

I think you would basically want developers to agree on public reporting on capabilities.  I think you would basically agree they should prioritize research on safety risks, including issues like bias and discrimination, and my sense is, if you get to the end of this, you would agree they should employ these technologies to address society's greatest challenges.

So at that level I'm sort of fairly optimistic we are at least going in the right direction.

>> MARIA PAZ CANALES LOEBEL: Thank you.  I see two more speakers lined up here.  I don't know if we have some online, too.  Can you read the two so we can have time for the other speakers?

>> MODERATOR: I have.

>> MARIA PAZ CANALES LOEBEL: We will pose all of them to the panelists.

>> MODERATOR: Two questions, actually one from my side to Seth because it's really pressing and I'm hacking the system now, but you said before we have to get all stakeholders involved.  I would be interested in your opinion on the idea, uttered somewhere by the UN steering group, drawing the analogy that we need something like the International Atomic Energy Agency for AI, an idea which may sound crude.

Is that an adequate idea or not?  And maybe I will pose the online question already?  That is from a colleague, actually a Member of Parliament from South Africa, William Faber, who was asking: considering that AI technology was developed by humans, could we not explore the possibility of leveraging AI to establish Government regulatory systems instead of relying solely on human efforts to find solutions?  A technical thing.

>> MARIA PAZ CANALES LOEBEL: So AI regulating AI, that's the proposition?  So maybe we can turn to James for that one and hear his take, and to Seth for the other one.

>> JAMES HAIRSTON: One area of, I think, long-term research, and this goes back to a question raised earlier: I think it's important to have humans in the loop in the development of systems and then their testing.  We have talked about red teaming and auditability, and there are a lot of research possibilities around the use of, say, synthetic data in the future.  We have been talking about bias and what the future avenues for addressing it might be, and there is one area of work around the world that I think needs a lot more exploration: how we might create high quality data sets that are derivative of research work by domain to generate the ability to perform all sorts of new tasks.

And in that way, you would have information not based on the current corpus of the Internet or people's information, which of course involves a lot of human training to get to, but information that is derivative and used to build new capabilities.  I think that's going to show up in some form in a lot of domains.  There are pieces of that that are going to require a lot of monitoring and evaluation, but there are other ways in which synthetic data sets help solve some of the problems we have been talking about.  It's not a panacea, of course, but synthetic data that you could use in deconstructing and reconstructing information could help resolve gaps in, say, the languages represented in the information available to us today, or the over‑ or under‑representation of certain regions or genders, and that synthetic data could be used and applied to create personal tutors, or to improve genomics research, or to improve our understanding of climate.

So that's one area of research.  There is a lot to do there, but I think as we talk about machine‑created data, again with a lot of humans, standards bodies and security testers in the loop, there are interesting possibilities there, but that doesn't mean we can step away and let that happen.

>> MARIA PAZ CANALES LOEBEL: Do you want to react to that, Clara?

>> CLARA NEPPEL: We have a Working Group on defining the quality of synthetic data, because again we are coming back to defining what is good, what is ethical synthetic data, and I agree with you that it is actually one of the ways of allowing, let's say, private data to be used for research, and if you are using it in that way, I think it's okay.

Coming back to why it is important to think about the global regulation or global governance of AI, and coming back to the analogy of electricity, I think that now we have this moment where it is out in the open, and it's being used in different ways and in different geographies.

So now, coming to Japan, we use a different socket, so we need to have transparency: what is being used there?  Where is it that we need to adapt?  We need to have, as I mentioned before, transparency in the sense of basic information about how these AI models have been used and what is important for that context.

And I think it is laudable that, of course, we have these private efforts to make AI as trustworthy as possible, but it is still something which is closed.  I mean, some of the things are made open, but it is, again, voluntary.

So we need to have a certain common ground to understand what we are talking about: what are the incidents, what are the data sets?  Where is synthetic data being used?  What kind of quality of synthetic data is being used?  I think once it is everywhere, there is pressure as well to have this standardized way of understanding the impact of AI.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  After that I will take one more question from the audience, and I will ask all of the speakers to do a round of final remarks so we can start to close.  Thank you.

>> SETH CENTER: So I certainly think the IAEA is an imperfect analogy for the current technology and the situation we face, for multiple reasons, one being the predominance of private sector developers of AI versus state‑based questions about nuclear control.  A second being questions of the ease and facilitation of verification, and what you are trying to verify and track, I think, is quite different, at least in the era in which the IAEA was developed versus what we are talking about in the AI era.

I think there is one instructive lesson that comes out of the IAEA, and that is that between 1945 and 1957, when the IAEA was established, was 12 years, so as we pound the table and demand action to institutionalize global governance around AI, we should be a little more patient with how this evolves.  I will say, look, we need scientific networks that span countries and are convened to take on these problems, if for no other reason than to build shared assessments of risk and to agree on shared standards for evaluating capabilities, which I think we will need shared international approaches to.

So I think we should continue to look for the right kinds of models for international cooperation, even if that's not the right one.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  Please, your question.

>> AUDIENCE: Thank you very much.  Christen Mujimba from Uganda, but I will be speaking as a mother in this regard, really advocating for the 7‑year‑old boy, I think it was from Bangladesh, who asked a question on children, and there have been follow‑up discussions on whether children have a place in influencing policy.  Being from a technical background and many other backgrounds, sometimes I find that we get lost in the high‑tech definitions and all of that, and we lose the low hanging fruit of common denominators, such as the fact that we have all been children before, and even in the session before, when we were talking about cybercrime, it came out clearly that we need to protect future generations.

So for me, my ask to experts and panels like you, as you have your elevator pitches wherever you are, is to have the low hanging fruits come out.  If you all agree that you have been children, and we can find the child in us, let's at least get there in addressing the AI that we want, and maybe from there we will learn these other things, to have the inclusive designs we are talking about, whether to address bias or not.  So for me it was really that plea: let's find spaces, even in harmonization, for addressing common denominators such as preserving future generations.  Thank you.

>> MARIA PAZ CANALES LOEBEL: I think that was a question, but a comment also.  So I would invite you to react in a final round, considering this last question, with your remarks in 1.5 minutes or less.  I invite James to start.

>> JAMES HAIRSTON: So just final remarks here?

>> MARIA PAZ CANALES LOEBEL: If you want to address some of the last question, and if not, your final remarks, yes.

>> JAMES HAIRSTON: Again, I think the public and private collaboration on the safety of these tools will be an important part of the work ahead: ensuring, both on the design side and in the reporting and research we do, that we understand how children and other communities are using these tools, how to protect them, and how to make sure that even where tools like ours are not for use by anyone under 13, we understand how young people and vulnerable communities come to these tools and how they interact with them.  Being responsive to the new research that comes out of the academic community and civil society, and being able to act on reports of crime or misuse, is going to be key.

In terms of closing remarks, I think we are at this important moment, and it's just going to be essential that we build on the momentum that has been put together, whether the work on voluntary commitments, which we see as our responsibility to continue to act on, or contributing to the international regulatory conversation and the promotion of long‑term safety; that we continue to get more and more concrete about where we are heading and about the international tools we want to apply to these new technologies; and that we build the capacity both for identifying harms, reporting those harms, and understanding what new capabilities are working or are putting communities and people at risk, but also what the really unique opportunities are here for these types of tools.

Those will be different.  They will be adopted at different rates, and the analogy to electricity, I think, is instructive because there will be different decisions made in education sectors or health sectors, finance and other areas.

Really getting concrete about how we can take some of these tools and apply them to problems for people while also trying to solve for long‑term harms and risks will be important.  So I'm really glad to be here and participate in this discussion.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.  Happy to have you.

>> CLARA NEPPEL: Thank you.  I think that especially when it comes to Generative AI, what will be important is to be as agile as possible, and this will be important from the organizational level to the national to the global level, and I think that for all of these levels we need feedback mechanisms that work.

Also at the organizational level, we have to make sure that this feedback is taken into account for the further development of these foundation models.  I agree with you that, of course, it has to take risk into account and it has to be differentiated, but I think that for certain high‑risk applications we have to have conformity assessments, and these have to be done through independent organisations, because there is, again, a different incentive in self‑certification than in being compliant.

I think as well that maybe the International Atomic Energy Agency analogy is difficult because we have so many uses of artificial intelligence.  I would like to bring back, again, the idea of more of an independent panel, an independent model panel, which should be implemented also for these important technologies, which are acting basically as an infrastructure right now.

So if it's a public infrastructure, we need to have a multi‑stakeholder, let's say, governance for that.  Thank you.

>> MARIA PAZ CANALES LOEBEL: Maybe similar to CERN, just another idea.  So I will move to Thobekile Matimbe.

>> THOBEKILE MATIMBE: I think what is clear from the conversation is that as human beings we cannot cede our rights to technology, and we need to continue to emphasize the importance of us retaining agency over fundamental rights and freedoms.  In that way we will ensure that children's rights are promoted in the use of AI, that we centre conversations around environmental rights, et cetera, and I think it's a critical conversation that we need to continue to engage in.  Looking at basic concepts such as participatory democracy and bringing them into the realm of Internet Governance, I think we need to also emphasize that there is a need for the participation of everyone, marginalized groups, vulnerable groups, but also to ensure that the processes we have are actually very inclusive and that we have a truthful and meaningful multi‑stakeholder approach.

>> MARIA PAZ CANALES LOEBEL: Thank you very much.

>> ALISA EMA: Thank you.  I think that the AI governance discussion is important and challenging.  AI itself changes and evolves, and also the situation changes, the environment changes, and in that sense the group of people we need to involve will expand and never shrink, so more people should be involved in this kind of discussion.  In that sense, in my first remarks, I mentioned that we need some concrete cases to discuss what the risks will be, what we mean by transparency, and how to take accountability.  However, as many people as we are going to include, we need some kind of philosophy or shared concept around which we can be united, so that we can at least collaborate with the same context, the common understanding or the common concept that we share.

So in that sense, I think these couple of days of discussion really have come up with various important concepts, principles and goals, and I really enjoyed this discussion.  The last thing I would like to mention is that this is not the end, but just a starting point.  This never ends, but I think we can enjoy the process of these kinds of exchanges and discussions, and we need to be aware of that and involve as many people as we can.

>> MARIA PAZ CANALES LOEBEL: Thank you.  And the final word.

>> SETH CENTER: You did a great job moderating us and keeping us on time.

Thank you.  I will sum up my take and theme about AI governance using a quote from a famous basketball coach: be quick, but don't hurry.

>> MARIA PAZ CANALES LOEBEL: Thank you very much for that.  So we are running out of time.  I am supposed to summarize this rich discussion, but I will only provide the highlights of the takeaways rather than the full takeaway.  I think the main takeaway we have heard from different perspectives is the value of this multi‑stakeholder conversation and the value of continually making it as inclusive as possible, enjoying the participation of the people that are already in this room but welcoming the people that are still outside of the room, and thinking about this as a necessary step in what Dr. Center was inviting us to do: be quick but don't hurry, and take the time to listen to different perspectives and to evaluate different options to address different challenges.

We talk about artificial intelligence governance because we think that it is a broader concept than just regulation or just voluntary guidance or just ethics, and this is the value of the Internet Governance Forum: that we can reach different aspects of the discussion, bring different levels of expertise, and also be mindful of all the levels of inclusivity and diversity, those that refer to vulnerable groups, those that refer to different levels of expertise, and those that refer to different geopolitical realities.  So as Alisa was mentioning, this is not the end.  This is the start.  Thank you very much for staying connected with the progress, and thank you to all of my speakers.

(Applause).