IGF 2023 – Day 2 – Open Forum #78 AI Regulation and Governance at the Multilateral Level

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> You can also move to the front, if you like. There are empty seats here. We want this to be an interactive session. We have Q&A from 2:00 p.m. onwards. Feel free to fill up the front row.

Since we're not fully complete yet, we'll give it a few more minutes before we start.


>> FILIPPO PIEROZZI: All right. Welcome to everyone, both here in the room and everybody participating online. This is the open forum on AI regulation and governance at the multilateral level. My name is Filippo. I'm part of the Office of the Secretary-General's Envoy on Technology. Let me quickly walk you through the agenda for today. We'll start with some panel remarks by the esteemed guests here, and then have a big Q&A session in which we want to engage with you, the audience. We'll begin with remarks by Amandeep Singh Gill. Next, we'll hear from Peggy Hicks, who will moderate the panel. After that we go over to the Q&A session. So, without further ado, I'll hand it over to Amandeep to introduce the topic.

>> AMANDEEP SINGH GILL: Thank you very much. Welcome to the event. How we approach governance is a very important dimension for human beings and for all of human rights. To set a little bit of context, I'll talk about the Secretary-General's proposal in the policy brief that he launched on June 5th this year: a multistakeholder, high-level advisory body on artificial intelligence, which would meet regularly to review AI governance arrangements and offer recommendations on how they can be aligned with human rights, the rule of law, and the common good. This proposal, which he reiterated in his remarks to the first Security Council debate on artificial intelligence in July, is currently being put into practice. So this advisory body is being formed as we speak, after a process for nominations that ran along two tracks. One was member states being invited to nominate experts. The other was an open call for nominations. We got about 1,800 nominations from around the world, with different areas of expertise, backgrounds, and geographies. It is very satisfying to see that degree of interest and excitement about this proposal. We seem to have hit the right spot with it. Now, what is the advisory body supposed to do when it comes together? The Secretary-General has tasked it to provide an interim report by the end of the year. And there's a context to this timing. The discussions on the Global Digital Compact restart early next year and move into the negotiation phase. The interim report would help those who are putting together the GDC to flesh out one of its more important dimensions. There are eight high-level dimensions, along with the cross-cutting themes of gender and sustainability, that have surfaced through the consultation. The report will bring more substance and expert-level insight into that discussion.
So after that, there's time for the advisory body to consult more widely, including with ongoing initiatives. You heard the Japanese Prime Minister speak about the G7 Hiroshima process. There's the UK AI Summit. There's work that was done earlier in the G7 and G20 on AI principles. There's long-standing work in the UN context, and I'm happy to be joined by some of my colleagues here: the work in UNESCO, adopted by all member states; the work in the International Telecommunication Union on some of the standards that underpin digital technologies, and also the AI for Good meetings. Then, most importantly from the perspective of this session's topic, there's the work being done on how to make sure that the existing commitments governments and member states have taken under international human rights instruments are implemented in the digital domain. I just want to conclude by saying that this body, which will start meeting soon, will help us bring together multidisciplinary AI expertise from around the world to provide a credible and independent assessment of AI risks and make recommendations on options for global AI governance in the interest of all humanity. The conversations that are happening today are very important; they are essential building blocks. If this is an issue that concerns all humanity, then all humanity needs to be engaged on it through the universal forum that is the United Nations. The risk discussion can often be political, or it can be motivated by economic interests. We want a discussion in which there's an independent, neutral assessment of that risk, and the communication of that to the global community at large. At the same time, we also need to make sure that the opportunities, and the enablers that are required for AI to play a role in accelerating progress on the Sustainable Development Goals, are assessed and presented in a sober manner to the international community.
Looking at the risks and the opportunities in this manner allows us to put the right governance responses in place, whether they are at the international level, at the national or regional regulatory level, or at the level of industry, where there may be self-regulation and co-regulation schemes, including the kind of initiatives that the Japanese minister of ‑‑ shared yesterday. I'll stop there and hand it to Peggy to moderate the panel. Thank you, Peggy.

>> PEGGY HICKS: Great. Thank you so much. We're fortunate to have Amandeep with us. I'm Peggy Hicks with the Office of the High Commissioner for Human Rights. I'll set the course by making four introductory remarks. One is that when we're looking at the issues of AI governance, we need to be able to have a complex conversation. We tend to throw out the term AI and think we all know what we are talking about. We tend to talk about existential risk, near-term risk, short-term and mid-term risk, with no real definitions on the table. We need to break the conversation down and realise there are areas where AI is already being used in sensitive and critical contexts, like law enforcement, where we don't have any question about what needs to be done. We just need to implement the things that we know. Recommendations have already been made about the guardrails that should be in place, for example on mass surveillance technologies, to protect privacy and in other areas. We need to move forward on that. We don't have to wait. We also have the issues that have rushed to the surface around generative AI. There's a real need to look at the new challenges presented, and within that area some are immediate: for example, the impact of deep fakes, and the watermarking and provenance measures to be put in place as quickly as possible, with transparency around them. And we have to look forward. What are the risks, and what are the governance mechanisms and approaches that will allow us to make sure we're tackling not just what we already know, but what we foresee for the future? The next point I want to emphasise is that this is a global challenge. As much as we appreciate all of the different efforts at the national and regional level, we need to be able to come together in a global way to address these issues. We need to be able to learn from each other.
We need to recognise that solutions won't work if they are only adopted and applied in one place. For that global engagement to work, we need to create a level playing field. That means there needs to be much greater investment, resourcing, and engagement with the global majority, who may have more difficulty being part of these policy-making conversations going forward. The third piece, one that of course comes up in the IGF context all the time, is what we mean by multistakeholder and how that has to be part of the governance approach we undertake for AI. I want to emphasise that when we talk about multistakeholderism, we're talking both about the business side of things and the civil society side of things, and in fact what we need on each of those pieces is quite different. With regard to business, there's a tendency to look at how we engage with, and to some extent mitigate, the enormous influence that a small number of companies have in this space. But at the same time we need to create a race to the top, where those companies may be the ones best prepared to put in place some of the guardrails that we need. We also need to protect against the way other businesses will come into the sector, and are coming in, perhaps with less incentive to put those same guardrails in place. On the civil society side, we all know that's an area where there's a lot of commitment to general participation, but perhaps not as much to effective engagement. We need a different pathway. We need to draw on the expertise, and we need to make sure that civil society is present. They are the ones who will help us make sure that no one is left behind. Finally, and you won't be surprised to hear me say this, I want to make a pitch for human rights and the human rights framework as a crucial tool to allow us to move forward effectively in all of these areas.
We've heard in many of the sessions I've been in already at the IGF how we have to build on what exists and not create everything fresh. The human rights framework is a framework that's been agreed across continents and contexts, and it is celebrating its 75th anniversary this year. We need to find a way to leverage it in this space. That also requires support for us to be able to do that more effectively. It requires all of us to move from the talking point of "yes, we're grounded in human rights" to making it actionable in a variety of ways in the policy-making context. Those are the introductory remarks from my side. I'm very much looking forward to hearing from the contributors today. I'm very pleased that we're going to turn first, I guess, to a member of the IGF leadership panel. Over to you.

>> GBENGA SESAN: Thank you, Peggy. And I thank Amandeep for the earlier comments. I think it is important to start with the three areas identified by the Secretary-General, human rights, the rule of law, and the common good, to help frame the ongoing conversation. Let me start with a story. At the opening ceremony, someone sat behind me, yes, behind me; I shouldn't confuse behind me and beside me. He leaned over after the session and said: look at the stage. There's no diversity. During the AI panel. Then we had a conversation, and the conversation we had wasn't just about diversity; it was about many things. Peggy, you are right. For civil society, AI is not new. AI is the official theme of the 2023 IGF. I'm sure if you got $1 for every time AI is mentioned, you'd have a billion dollars already. But it is not understood by everyone. We are not all at the same level. Even before conversations about AI, we already had a divide, contributed to by some of the problems that civil society is trying to address. Three very quick things from me. Number one: in all of the conversation, of course, we've talked about the need for human rights, for the rule of law, and for the common good. I think the common good will only be served if we have a conversation that's based on ethics. I say this because if you look at the race, literally the AI race we've had over the last few months, and I'm sure we'll hear more from the representative on this, at some point it had to be called out: let's stop. The reason for that was it became a race literally without rules. Everybody was trying to be the first to do it, for many reasons: first-mover advantage and all of that. The conversation must be grounded in ethics. Thankfully we already have many frameworks around human rights that can guide us on this. We're not creating new principles. We're not saying the ethics should be based on new inventions.
We have principles for that. The second point is data protection. I say this particularly because we've had many conversations about the need for privacy and protection. We do an initiative and report every year on the Internet and digital rights across the African continent, and one of the big challenges is that there are many countries that don't have data protection frameworks at all. Not only are they now talking about, you know, just selective data; they are also talking about AI, about massive data projects. That's important. Ethics is also about protection. Then I'll come back to the first point I made, about diversity, and not just diversity in terms of conversation. It is great to have a panel at times; at times I think with talking you can solve the problem. But we need to go beyond the talking. The importance is not just in the conversation, but also in the modeling. I always give the example of my very first experience with an AI demo, somewhere not too far from here. I stood in front of the machine where everyone was standing; they were testing it. It would tell you where in the world you are from and something about yourself. I faced the machine. I said hi. Hello. A few words. The machine said I was from the wrong continent and said I was very angry. Wait a second, what is going on here? By the way, that project was already being used by a country to determine who to arrest based on prank calls. I sound like this all the time. I'm Nigerian. I'm from a country of 200 million people. When I speak, I need to raise my voice to be heard. When I speak, I raise my voice because I'm Nigerian; I have to raise my voice. So it is absolutely important, not just in conversations but in modeling and research. AI by nature is global. But global does not mean it happens in one place. Global means it has applications across the entire world.
If you accept that, then it means diversity must be a fundamental factor in what we do. Otherwise we're going to keep having many of the problems that we have on social media, where platforms misread something that's understood within one context but means something else entirely in another. So: ethics, data protection, and diversity. I rest.

>> PEGGY HICKS: Thank you very much. Words to live by. We'll come back to each of the three points. I understand that Gabriela Ramos is with us online. She's the Assistant Director-General at UNESCO.

>> GABRIELA RAMOS: Thank you, Peggy. I'm sorry, I got the wrong link; I was with the very technical experts. A very interesting session, but it was not mine. Great to be here with you. Thank you. Great to share this panel with you and with Amandeep. I could not agree more with what the previous speaker mentioned. I think that ethics is a good guide, because it is not only about the challenges that we are confronting now, but also the challenges that may be close to us with these very fast-moving technologies. We're now, rightly, questioning all of the issues brought by generative AI. But AI is not new, we know. For how many years have these systems been used to make decisions that are substantial and relevant for all of us? We know the application of these technologies in the distribution of benefits; we know how much facial recognition has been used, and it is now being debated how much we can rely on it to take decisions in the public sector. The public and private sectors have been taking decisions based on AI for many years. We tend to forget. We know that having a vaccine to fight the COVID pandemic was made possible by the analytical capacities that these technologies could put together. So it is not new. But the questions that we ask are, of course, much more pressing given the pervasiveness of these technologies and the speed at which the developments are advancing. It is very important that we have the right frameworks. If these major technologies are just deployed in the markets for political reasons, commercial reasons, and profit-making reasons, it is not going to work. That's why we at UNESCO are very pleased to be contributing to framing these technologies in the right manner. Two years ago, 193 member states adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence, and I recognise that Amandeep was one of the major contributors.
He was part of the multidisciplinary group that we put together to develop the recommendation. The approach was pretty straightforward, and I feel it was also the right frame, because the question was not to go into a technological debate about how we fix the technologies, or how we build the technologies in certain ways to deliver what we want to have in the world. The question was actually: what are the values that we are pursuing? Then we built everything around that. It is a societal debate, not a technological debate. Fairness, inclusiveness, the protection of privacy: these values need to be served by certain principles and goals. You know them: accountability, transparency, the rule of law, the principles that are part of the equation and have been advanced by many, many players in the ecosystem. These principles need to be translated, from our perspective, into policies, because policies are what will make the difference. Yes, the technologies are being developed mainly by the private sector, but this is no different from many other sectors of the economy where governments need to provide the right framework for them to develop according to the law. In the end, it is not that governments are going to go into every single AI lab to check that we have diverse teams, that the quality is there, and that the training of the algorithm has been designed not to be tainted by bias and prejudice. In the end, when you have the norm and all of the systems to advance this kind of outcome, you get it right. This is where we are now in the conversation. When the member states adopted the recommendation, it was not left to the goodwill of anybody who wanted to advance in building the frameworks. They also asked UNESCO to help them advance specific tools for implementation, because this is also about the regulatory capacities and systems that can be put together.
Therefore we developed tools to understand where member states are regarding the recommendation. The readiness assessment methodology is not only about the technology discussion again; it is about the capacities of countries to shape and have the legal framework that is necessary for them to deliver. Then we also developed the ethical impact assessment. I feel that we are now converging with many other institutions and organisations that are advancing better frameworks. Just last Friday we were with the Dutch digital authority. This is also an institutional debate. For us, this is for governments. Governments need to upgrade their capacities and the way they handle these technologies. Because, as I said, I'm a policy person, and the reality is this is about shaping an economic sector, an economic sector that is changing the way all of the other sectors work. In the end it is an economic sector. The way the technologies are produced can be shaped and determined by technical standards, and it can also be determined by the rule of law. It is not as difficult as it might seem, at least in terms of having these guardrails. When we say, for example, that we need to ensure human determination, what the recommendation established is that we cannot give AI developments legal personality. I feel this is just the very basic way to ensure that whenever something goes wrong, there's going to be a person, somebody who is in charge, who can be held legally liable. We also need to have systems for redress, mechanisms to ensure that the rule of law is really upheld online. I'm proud that we have the framework. It is now being deployed by 40 countries around the world, and there will be more. Next week we're going to be in Latin America launching the Latin American council for the implementation of the recommendation.
We're partnering with many institutions to ensure we work with member states on how we can build the capacities to understand the technologies and deliver better frameworks. We also always talk about skills: skills to understand, frame, and advance the better deployment of the technologies. I feel it is also very important that we have those skills in the public sector. These are such fast-moving technologies that we need to be able to anticipate the impacts they can have in many fields that have not been tested. But if you ask me for the bottom line, the bottom line, I think, is that the way generative AI and ChatGPT arrived in the market is not the way it should be done. You need to have an ethical impact assessment, a human rights impact assessment, of major developments in artificial intelligence before they reach the markets. I think this is just the right due diligence, and it is not what is happening in many of the developments as we see them. Therefore, I think it is the moment to put the conversation in the right framework to ensure that these technologies deliver for good. We are seeing many movements. We just saw the bill that was put together in the U.S. Congress. We know what the European Union is doing. We know how many countries are advancing this. We're doing it with the private sector too. We cannot put all of the private sector in one basket. We're working with Microsoft, and this needs to be a multistakeholder approach, also gathering civil society and the many, many groups that need to be represented, because the ethics of artificial intelligence concerns us all. I'm so glad that I had this minute with you to share these thoughts. I'm looking forward to the exchanges. Thank you so much.

>> PEGGY HICKS: Thank you, Gabriela. It is wonderful to hear your comments based on UNESCO's experience with the ethics of AI, its development and application, and, as you said, the work being done globally to move forward on these issues. The point you make around human rights impact assessments, and the need for them to be done before things reach the market, is one we'll come back to as well. I would like to turn to the final panelist now. We're fortunate to have with us Owen Lashingner.

>> OWEN: Thank you, Peggy. It is a pleasure to be here. I'm Owen from Microsoft. We're very enthusiastic about the opportunity of AI. We're excited to see the way in which customers are already using the Microsoft Copilots to get more out of their productivity tools. We talk about copilots rather than autopilots: the vision is very much about keeping human dignity and agency at the center of things. More broadly, we see AI as a hugely capable tool that's going to offer humanity an immense amount of opportunity, really to understand, manage, and address major challenges like climate change and health care, and a lot of what is covered in the SDGs. So a lot of opportunity. But I think it is clear there's risk, as we've heard well articulated across the panel, and we need to think about governance. As we turn to the governance of AI, we need to think about governance globally. AI is an international technology. It is the product of collaboration, and we need to allow people to be able to continue to use AI. It is also clear that the risks AI presents are international: they transcend boundaries. AI in one part of the world can cause harm in another part of the world, either intentionally or by accident. As we think about global governance, it is worth taking a step back and understanding where we are. While an enormous amount of work is still needed, we've made a huge amount of progress. We're coming up to a milestone: we're just a few weeks shy of the one-year anniversary of ChatGPT being launched, and we can see the way it has changed the conversation around the world. The UN has done what it is good at doing, which is catalyzing the global conversation on these issues, and we're excited about the high-level advisory body. That's going to be productive work. We're also really delighted to be working with UNESCO to take forward the Recommendation on the Ethics of Artificial Intelligence. That's an important piece of work.
It is really exciting to see concrete safety frameworks being developed and implemented around the world. People might be familiar with the National Institute of Standards and Technology in the U.S. It published its AI Risk Management Framework at the start of this year: a global best-practice framework that any organisation can use to develop its own internal responsible AI programme. So I think we've moved to the place where we have the building blocks of a global governance framework. Now it behooves us to step back and chart a path forward, and there are probably a couple of things worth bearing in mind. First, we should have a bit more conversation about where we want to get to: what do we want a global governance regime to be able to achieve? Secondly, what can we learn from the many attempts at, and successes of, global governance regimes? I'll offer a few thoughts in closing. As we move forward, we want to get to a place where we are setting global standards, developed in a representative and global way, that can then be implemented by national governments around the world. I think there are great lessons to draw from organisations like ICAO, which does a good job of developing safety and security standards globally. The other thing we need a global regime to do is to help us develop more of a consensus on the risks of AI, which is a really important part of thinking about how we address them. I think of organisations like the Intergovernmental Panel on Climate Change: they've developed an evidence-based consensus and done a really effective job of taking it out and driving a public conversation, which can lay the groundwork for policy as well. My final suggestion is that we need to invest in infrastructure as we chart a way forward. That means the technical infrastructure, so we're able to study these systems in a holistic and broad way. It is very resource-intensive to develop and use these systems. We need to provide publicly available compute, data, and models so that researchers around the world can better understand the systems and develop the much-needed evaluations going forward. The other bit that's just as important, if not more so, is the social infrastructure.
How do we have a sustained global conversation on these issues that is properly representative and brings in views from everywhere around the world? Conversations like this, and the work that the IGF is doing, are a great start on that front, but there's more that can be done. One small contribution we're making is setting up a global responsible AI fellowship. We have a number of fellows around the world: we're bringing together some of the best and brightest minds working on responsible AI right across the Global South to help shape more of the global conversation and inform the way that we at Microsoft are thinking about AI. There's more opportunity to do this kind of thing. I'll pause there for now.

>> PEGGY HICKS: Great, thanks, Owen. It is helpful to hear your comments on what the global AI governance challenge looks like and some of the next steps we need to take. Just to pull together some of the thoughts before we turn it over to questions and answers: we heard similar themes from this diverse panel, though probably not as diverse as we need to be here either, Gbenga. We all recognise the need for global diversity; how we achieve it is where we still have a lot of work to do. We can commit to it in principle, but in practice it requires a lot more effort and resources to make it a reality. We also heard the importance of putting in place guardrails based on what we already know in this space and moving forward on them. The governance conversation with regard to best practices is there, but we also need to recognise that we do have some red lines, and those red lines ought to be part of the global standard-setting process as well. Finally, we need greater transparency and a greater ability for a global conversation to happen. That means making sure that forums like this one are accessible to a much broader audience, and that we have the social infrastructure that's needed. That will require investment and commitment as well. So with that, I will close this first segment of the panel discussion and turn over to Maurice, who will guide us through the question and answer. Over to you.

>> Thank you. We will now take time for an extensive question and answer session. You have the possibility to ask any question you might have. Unfortunately, Amandeep had to leave the session already; our colleague Quinton is filling in. I understand that Gbenga has to leave in 20 minutes as well, so we might prioritise questions to him in the process, seeing that we have the co-facilitators here. Let us know if you want to participate in the discussion. You can line up behind the microphone, first come, first served. We'll collect the first three questions and then have them answered by the panel. Feel free to ask anything regarding the session topic.

>> Okay. That's a nice clarification. Hello, everyone. I'm Alice from Brazil. I'm also a consultant with the Global Index on Responsible AI. I have a question that I think relates to everything that has been said so far, because I've been listening to all of the panels on AI say that AI must be regulated through a global lens, right? It can't just be national frameworks. We've also been hearing that it must happen now, that it is urgent. But we know that global regulations are not the fastest regulations that we have. My question is: how do we balance both of these needs? Thank you.


>> Hi. I'm an attorney at law from Sri Lanka. Last year I did a course with CIDP in Washington, and I've been studying AI policy. I was just wondering: the biggest threat is that the technology is running far ahead of the law. Since we're speaking of global AI governance, is there any possibility of punitive measures against the tech companies that go ahead without the human and ethical implications being examined? If they put out the tech regardless, the only way is to penalise them. The GDPR brought huge fines. Is there any conversation about that going on? I just wanted to know.


>> My specialty is in a working group at UNESCO. I think we all agree about the ethical values. There are a certain number of ethical values recognized by the UNESCO recommendation, by the EU recommendation, and by the OECD. The ethical values are well known: dignity, autonomy, diversity, security, well-being, and so on. So the problem is not the ethical values. I think Gabriela was right: the problem with ethics is not the problem of designing the ethical values. The problem is to what extent the ethical values are met in the concrete situation. And that is still a problem; that is where the difficulties lie. That's why I think we definitely need legislation imposing what I call an ethical assessment. I think it is very important to have the ethical assessment. At the company level, the assessment absolutely needs to include what we call stakeholders: not only the company and the customers, perhaps the clients, but the wider stakeholders around the table, and a really multidisciplinary assessment, to clearly mitigate the risks and definitively try to avoid them. I think if we have that assessment, that's the most important thing. At the global level we need to have the discussion, a discussion about very important issues that join together. We definitely must have reflection on a number of AI systems, especially as we have the problem of the management of people in all of these decisions. So my question is: what is your position about this reflection?

>> Yes.  Thank you.  Just one suggestion: for the next round of questions you could also say who on the panel you are addressing the question to.  Then we can have it a bit more targeted.  So three questions.  The first one is on how to balance the need for quick action against global processes that can take longer.  The second question is on enforcement: how do we make sure that the rules we agree on are actually applied?  And the third is on the need for ethical assessments, and how to mitigate risks and enforce the rules.  Who would like to go ahead?

>> I can chip in.

>> Perfect.  We'll start with Gabriela and over to Benga.

>> The technology needs to be recognized.  We are not only referring to the technical systems and the data flows across countries; we are talking about interoperability of the legal systems.  Because, in the end, the kind of definitions that you have in one jurisdiction is going to determine the kind of outcomes when you go into international cooperation for law enforcement.  But in the end, the very basic tenet of all of this construction is to have the enforcement of the rule of law regarding these technologies at the national level.  This is the emphasis that we are putting on the implementation of the recommendation on the ethics of AI in the many different countries where we are working.  They need, first, to understand the technology; second, to anticipate what kind of impact it can have on the many rights they need to protect; and then to have commensurate measures whenever there's harm.  This is another bottom line: whenever there's harm there should be compensation mechanisms.  These are the areas where they need to upgrade their capacity.  We need international cooperation; in the end it will not work if you have fragmentation at the national level.  It is important that we have these kinds of changes, with a multistakeholder approach, to ensure that we learn from each other and that we can also share what we know about the front‑running developments in terms of the big frameworks and about those that are lagging behind.  I feel again that the role of governments is really important in trying to ensure that the rule of law is respected.  But that's their task.  That's why they are paid for it.


>> Thank you, Gabriela.

>> Thank you.  I think it is right to look at these issues through a global lens.  I don't think it necessarily means that every single national regulation needs to look the same; it is all about interoperability.  I think a big part of this will be developing some global standards, for example in relation to how you evaluate these systems, which the different countries can then implement in a way that's sensible for them.  In terms of how to apply the law and where the law might apply, there's a large amount of existing domestic law that should be being applied right now.  If you are in a country where you have a law against discriminating against someone in providing a loan, it shouldn't matter whether you are using AI or not.  It shouldn't be a defence to say: yes, I discriminated against the person, I gave loans at unfavorable terms, but I was using AI, so don't penalise me.  That's not going to hold.  Existing law should be applied across the various jurisdictions whilst we put in place the frameworks that address some of the issues specific to AI as well.  In relation to impact assessments, it is a great thought, and we are very enthusiastic.  We have an impact assessment as a core part of our responsible AI programme at Microsoft.  For any high‑risk system that's being developed, the product team has to go through an impact assessment: making sure the system is not performing poorly and addressing issues of bias.  We think that's a fundamental, structured process to go through.  We have now started publishing the templates that we've used, and we've published the guide that we use to help our colleagues navigate the impact assessment process, so that others can scrutinise it, build on it, and improve it.  I welcome thoughts on the approach we are using at Microsoft.

>> Thank you.  Just to build on the earlier point about the urgency of now.  I can understand why that is the conversation that is happening; it's a natural reaction to some of what we've seen in the last year.  First of all, and it is really important to say this, regulation is about creating standards, not imposing control.  This happened with data protection regulation in many countries: it became an opportunity for certain governments to seek control over areas where they were supposed to set standards that they themselves would also abide by.  I can understand why "global" always gives the idea of being slow, because there's negotiation, and I think there are some countries that just want to be contrary; they want to take the microphone and speak.  But global processes do work.  I like the example that you gave of the International Civil Aviation Organisation.  There are many examples that we can look at.  We can talk about some of the conversations at ICANN and now at the IGF and build on those processes.  On the second question: there exist laws that can be applied.  It also touches on the tension between innovation and regulation, or policy.  I think that innovation will always be ahead of regulation.  What is important is for policymakers to seek to understand the innovation.  We've seen many instances otherwise.  I know a country where we are working where cryptocurrency was banned; what you are banning is a new form of money and its movement.  It is important to experiment with ideas within specific frameworks, and if something goes wrong, of course, they have to abide by the rules.  It is absolutely important that in the name of caution, and of not allowing people to go haywire, we are not stifling innovation.  We've seen that happen, where regulation doesn't understand the innovation and wants to jump ahead of it.

>> Thanks.  I'll pop in as well.  The first question is a really important one.  On the idea that we can't come up with a global framework: I've said that a million times myself.  Making a treaty isn't going to get us there.  It will take us too long, and by the time we got it, it would be outdated.  Benga, Owen, and Gabriela have named some of the pieces that we have.  We need to build piece by piece.  One thing we desperately need, and we talked about it in a conversation earlier today, is authoritative monitoring and an observatory to give us greater understanding of what the risks are and what's happening, and to be able to report on incidents or problems in a credible way.  This is the type of thing, and you've seen all of the different analogies, where finding ways to better understand what's happening at the global level, which then informs the national level, is a key piece.  The second thing I would say is that we can now develop authoritative guidance on these issues.  Then we need the ability to advise and support the capacity to implement it at the national level.  Those are all things that we can do with the framework as it currently exists.  We can't mandate some of that to happen yet, but the more we have the guidance, the closer we'll get to a process where states pick up the good practices and move forward with them.  The last point that I would make, in terms of the global dimension and the urgency issue, is that there are areas where we don't have to wait to solve the overall AI picture.  I mentioned earlier the issues around watermarking and provenance for AI‑generated images.  That's something that we all ought to be concerned about and acting on now.  The companies have made voluntary commitments; we would like to see those turned into something that's enforceable, and enforced across the industry in different ways: you have to give transparency around things that are generated through AI.
That would make a crucial difference.  We can also chunk the problem into pieces, take up some of the most urgent issues, and address them in a concrete way.  Just to stay on the remedy and accountability issue: I do think it is a major issue, and I agree that we have to use the laws that already exist to do some of this.  I would also say that part of the problem, for example with discrimination by AI systems, is that we don't have enough transparency to know how to make the lawsuits and legal remedies happen.  For me one of the starting points is to build more transparency into the systems.

>> Thank you.  I think we have time for one more round.  We can take two more questions.  We have another five minutes.  Is there anyone who is still curious?  Please.  Feel free to take the microphone.

>> Hello.  I'm Tom Baraclef.  I have a think tank called the Brain Box Institute, and I also manage something called the Action Coalition on Meaningful Transparency.  I have two questions, but I don't want you to have to take both of them; I'll leave it up to you as to how you respond.  The first question: you are talking about transparency, and having enough of it that we can test the way that AI systems are working and hold developers accountable to the standards that we set.  I wonder if any of you have any comment on the way that frameworks should also require companies and governments to make adequate resourcing available to the groups who we are expecting to hold these systems to account.  It is one thing to say here is a lot of information; it is a whole other thing to be able to meaningfully use it to hold powerful institutions to account.  The other question, and I suppose it is a tough one, relates to human rights as a framework for AI.  In my work, I quite commonly say we should use human rights as a framework for understanding what we expect of people using technology and developing technology.  I find it very useful.  In a recent experience, when I was making this argument, the response came back to me that globally we're seeing a decline in states backing human rights and liberal democratic frameworks.  Do you have any comment on, if the human rights framework is not landing from an advocacy perspective, where we might turn in order to give effect to some of the principles that underlie it, in terms of human dignity and other factors like that?  Thanks.


>> I'm Quinton.  I'm with the Tech Envoy office.  I can respond by taking a step back and thinking about the earlier question around speed and inclusion.  From the political perspective, there's a global paradox.  Everyone is talking about universal standards, and they are talking about moving fast.  The private sector is interested in interoperability and wants governments to move quickly.  But slow may be fast, in a sense.  To get a global agreement you have to move from 20 countries, to 50 countries, to 193 countries, and all of those countries have to want this.  What we've noticed on the Global Digital Compact is that certain countries emphasise certain things.  As an example, we had a lot of submissions.  We did a word count of how many times human rights was mentioned and compared it to the digital divide.  In submissions from groups in the global north, human rights may have been mentioned many times and the digital divide only a few; in others, the opposite.  When we think about it holistically, we have the individual civil and political rights and the economic, social, and cultural kinds of rights.  We have the Universal Declaration of Human Rights.  These are all human rights.  These all need to be protected and governed for.  These are human rights the whole world can get behind, including the right to work, employment, favorable pay, a standard of living, education, and protection.  So how can the world think about this topic of governance of AI from a holistic perspective and bring along the countries who have more urgent pressing needs on the economic side, on the development side, and take a holistic approach, not just geographically across 193 countries but also holistically from a governance perspective?  If you allow me one more interpretation here: we're talking about regulation and legislation in the panel, but governance can involve other types of policies, not just legal regulation, and not even just ethical standards.
It can also involve other kinds of policies that shape incentives: taxation, trade policy, intellectual property policy, which, by the way, is also one of the social, economic, and cultural rights.  So how can the conversation be shaped in a way that governance is thought of holistically across the different parts of the UN's work?  Not just what is commonly thought of as human rights, the civil and political rights, but also the economic, social, and cultural rights and the Sustainable Development Goals.  How can all of these other countries, who when they hear "human rights" think it doesn't matter because the economic side isn't the focus, actually embrace the concept of governance?  We don't hear a lot about AI accelerating development.  How is that going to happen?  We can talk about productivity tools in 365; that's great for a lot of office workers in the west, but how does it put bread on the table?  How do we get the climate solutions that people keep talking about?  Does it involve other forms of policy, like prizes, subsidies, or incentive‑creating policies, as in the COVID challenge trials where the vaccine was developed in a matter of weeks instead of the normal years?  How does that happen, to really get material impact?  To get the 193 countries of the world agreeing, they have to see an interest in it.  To see an interest in it, we have to think of human rights holistically, to include the whole universe of human rights.  To get to that, we need a holistic approach that embraces other kinds of policy.  That's why, when the Secretary‑General put together the advisory body on artificial intelligence, which will look at governance, there was a choice to make it multidisciplinary and include voices from all regions and genders, also including the digital economy, to look not only at individual impacts on human rights but also at the societal impacts on social, economic, and cultural rights.  Thank you.

>> Thank you very much.  Dear audience and dear panel, I would hand it back to Peggy for wrapping it up very quickly.

>> PEGGY HICKS: Thanks.  Quinton helped me out with an assist on the human rights side.  I think it is a crucial point, and one that we need to think about.  When we use the words human rights, we don't mean only civil and political rights: the digital divide, and what it means for people who are suffering from the lack of technology, is also a human rights issue and falls into the basket of economic, social, and cultural rights, as Quinton has described.  We have to get away from the terminology debate and move forward on the issues that we have discussed today.  I see the facilitator as well; there's a lot of work to be done in building the global framework.  It does need to be done across the sectors and across the rights, and also across communities, countries, and people.  That means finding ways to bring in all of those who are going to be affected by the choices in a much more effective way.  That goes to the second part of the question that you asked: how do we make sure that the resources are available to do that?  I think that's a fundamental piece here, that we need investment in the global public good.  That does mean, and Owen brought up the need for the social infrastructure to be built, public compute resources that will allow researchers to do the research that we know we need them to do.  It is really about looking at those questions and finding a way to make sure that those who are making the profits out of this are also helping us to invest, so that the opportunity side of artificial intelligence is there for all of us.  Thank you all so much for joining us.  Thanks to the wonderful panel that we've had with us today.  I hope everybody enjoys the rest of the IGF.  Thank you.