IGF 2023 – Day 2 – DC-DAIG Can (generative) AI be compatible with Data Protection?

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> We are starting in less than five minutes.  We are waiting for the last panelist to arrive, everyone to take a seat, and then we'll start.


All right.  We are starting in two minutes.  If anyone is still arriving, there are three spots here in the front.

All right.  We are almost ready to go.  It is almost 5 past 5:00.  Should I give you a heads up to start?  Can we start already online?  Okay.  Fantastic.  Good afternoon to everyone.  My name is Luca Belli.  I'm a professor at FGV Law School, where I direct the center.  Together with a group of friends, many of whom are here with us today, we have decided to create this group, this coalition within the IGF, called the Data and AI Governance Coalition, where, as you might imagine, we discuss data and AI governance issues, with a particular focus on the global south perspective.  The idea to create the group was born some months ago during the capacity building programme that we have at FGV Law School.  It is an internet governance school which is an academic spinoff of this conference; you might know the European one or the Latin American one.  After three days of intense discussion in March and at the end of April, we figured out it was good to maintain the very good interactions that we had and even try to expand them to bring in new voices.  One of the main critiques that emerged is that discussions about data governance and AI frequently feature an overrepresentation of global north ideas and solutions, if we can say so, and a severe underrepresentation of global south ideas, concerns, and sometimes even solutions.  So the idea was precisely to discuss how to solve this, and as many of us have a research background or are interested in doing research, we decided to draft this book, which we managed to organise and print in record time.  But I also have to disclaim that this is a preliminary version.  We welcome feedback on how to improve it, and anyone interested in proposing some additional, very relevant perspective we might have missed is welcome.  We noted that the only region that is still poorly covered in the book is Africa; the others are very well covered.  And we also created a form.
If you type bit.ly/DAIG into your browser, you'll arrive directly at the form.  You can also download the book for free.  If you are allergic to Google Forms, which is something that may absolutely happen, you can even use another URL, bit.ly/DAIG 2023, and you won't have to fill out a form.  The book deals with AI sovereignty, AI transparency, and AI accountability.  I'm not going to delve into the transparency and accountability parts; we have speakers that will elaborate on those from very different perspectives.  I just want to say two words about AI sovereignty, which is an application and implementation of what I've been working on with some colleagues in the CyberBRICS project with regard to digital sovereignty over the last few years.  The fundamental teachings of the past years have been of two types.  First, there are a lot of different perspectives on digital sovereignty.  A lot of people see this as control or protectionism, but there are a lot of other perspectives, including seeing it as self-determination: the fact that states, local communities, and individuals have the right to understand how technology works, to develop it, to regulate it, and eventually to sell it.  It is a right of all peoples in the world according to Article 1 not only of the Charter of the United Nations but also of the International Covenant on Economic, Social and Cultural Rights: the fundamental right of everyone to be the master of your own destiny, if you want, in social life, in governance, and also in technology.  The fundamental reflection of the first part of the book is about how you achieve this.  In my chapter I identify eight key elements that form a stack, an AI sovereignty stack.  They go from data: you have to understand how data are produced, harvested, and regulated.
You have algorithms, you have compute, you have connectivity, you have cybersecurity, you have electrical power, because, something that many people don't understand, if you don't have power, you cannot have AI at all.  You have to have capacity building, which is sort of transversal.  And you have to have an AI governance framework, which is the main thing that we're trying to regulate.  I think if we only regulate AI through risks, we only look at the tree and we miss the forest.  There are a lot of other elements that interact and are interrelated.  That's, in a nutshell, the first chapter.  I was very honoured to have Melody and the former director of the South African regulator draft a reply on this framework with regard to South Africa.  There are a lot of other very interesting issues analysed by our distinguished speakers today.  So, without wasting any more time, I would like to pass the floor to the first speakers.  In the first slot of speakers we have the more general perspectives; then we delve into generative AI, and then we zoom out again into transparency, accountability, and more general issues.  I'm not going to introduce all of the speakers now; I'll present them one by one, as there are a lot of them.  We have the director of digital transformation.  Armando, the floor is yours.

>> ARMANDO MANZUETA: Thank you, Luca, for the presentation.  I'm more than thrilled to be here talking with you about how governments are trying to use generative AI in public services.  Well, how do I begin with this?  Few technologies have taken the world by storm the way that AI has over the last few years.  That's a reality.  Not even the industrial revolution had as much impact on the world as AI has had, and it has become the topic of technical discussion everywhere.  Today everyone is discussing how to implement AI in one way or another.  Generative AI has the potential to transform our society as we know it for good and to generate public and private value in the coming years.  Well, the value of AI is not limited to advances in industry and retail alone.  When implemented in a responsible way, where the technology is governed and privacy is protected, AI has the power to usher in a new era of public services.  It can help restore trust in public entities by improving workforce efficiency and reducing costs in the public sector.  AI likely has the potential to supercharge the move from mainframe applications, which are one of the main issues most governments have, to more flexible cloud-based applications.  Despite the many potential advantages, many are still grasping how to implement AI, generative AI in particular.  In many cases, public institutions around the globe face a choice: they can embrace AI to help improve the lives of the citizens they serve, or stay within old guidelines and risk missing out on AI's ability to help agencies meet their objectives.  Leveraging these solutions can bring the public sector real benefits: in the tax collection system, in using automation to improve the efficiency of the food supply and production chain, or in better detecting diseases before they occur and preventing outbreaks such as the pandemic that we had before.
All of the successful deployments have reached citizens directly, including virtual assistants that provide information to citizens across apps and messaging tools.  However, this requires an approach focused on three key areas.  The first one is workforce transformation.  Across government, from national entities to local government, public employees must be ready.  That can mean hiring new talent; it can also mean providing existing workers with the training they need to manage AI-related projects.  The goal is to free up time for public employees to engage in high-value, meaningful thinking and work.  The second focus must be citizen engagement.  For AI to truly benefit, the public sector needs to put people first when creating new services and modernizing existing ones.  There's potential for the future, whether it is providing information in realtime, personalising services based on particular needs, or speeding up processes that have a reputation for being slow.  For example, has anyone here ever had to fill out paperwork, stand in an impossible line just to receive a confirmation, and then repeat the whole process just to receive the same service they need?  The thing is, most governments don't have interoperability or any sort of service to exchange information freely.  With AI, we could solve this very quickly.  The third one is government modernization.  Governments are held back by legacy systems tightly coupled with rules that require substantial effort to change.  For example, public sector agencies can make better use of data by moving it to the cloud and infusing it with AI.  Also, AI-powered tools hold the potential to help with decision-making over large stores of data and even to help write applications.  This way, instead of seeking hard-to-find skills, agencies can reduce their skills gap and tap into existing talent.  Last but not least, no AI deployment in the public sector is complete without a responsible life cycle of design, development, and use.
That is something most governments have promoted for years, to put it simply.  Governments, along with organisations in the financial sector, must strive to be seen as the most trusted institutions: they hold most of the citizens' data.  If the citizens don't trust the government, how can they trust all of the other institutions that exist in the nation?  That means humans should be at the heart of the services, while monitoring for responsible deployment and relying on key aspects: explainability, fairness, robustness, transparency, and last but not least, privacy.  When we talk about explainability, it means that the AI system must be able to provide explanations for its outputs to the public in a way that does not hide behind technical jargon.  In government there are many trends regarding transparency: to reveal what's in the black box, for anyone to see how a system works and how it was built, so that we understand how it was developed, how it is deployed, and how it functions.  Fairness means treating groups equitably regardless of characteristics such as gender, race, age, and other status.  Transparency is the system's ability to share how it was designed and developed, something closely related to what I previously mentioned.  Robustness means the system must be able to effectively handle exceptional conditions and deliver consistent outputs.  Last, privacy: basically the ability to prioritise and safeguard people's privacy and data rights in storage, access, and disclosure.  Which is why it is important that, besides implementing AI, we should be improving and modernizing the frameworks that encompass everything related to data protection.  If we don't have those rules in place, there's the possibility that many people, not just in the private sector but also in government, could use the data that's stored in government databases to do harm, to use it as a political weapon, and many other things.  It is important that we have strong data protection rules in place.
That way the data isn't used against the same citizens that the government is there to protect and to serve.  Just to conclude, if AI is implemented ‑‑

>> Quickly.

>> Okay.  Okay.  Just a quick conclusion.  If we implement AI including all of the traits I just mentioned, it can help government and citizens alike.  We can generate public value, but in a way that allows all citizens to benefit from it and to build a future that we all want to live in.  Thank you.

>> Thank you very much, Armando.  Thank you for giving us the initial input on what governments should strive for when they have to automate their systems.  Now I would like to give the floor to Gbenga.  He might have a more critical perspective, and it is good to have it as we try to synthesise our own opinions.

>> GBENGA SESAN: It is like you framed my conversation.  I'm glad we're having a lot of these conversations; thankfully this one is more focused on generative AI and data protection.  One of the advantages of having such conversations over and over is that you get to put out all of the points and ask the questions.  What I want to do, very quickly so that you don't have to tell me I should conclude, is make two points: one in terms of policy and the other in terms of people.  You know, if I have more of the six minutes, I'll conclude on practice.  By policy, I mean that in many cases we already have data protection rules, although there are countries that still don't have a data protection regulation.  For them, this presents an opportunity to have the conversation within the context of massive processing of data for AI.  But for those who have one, this is also a chance to have a review.  As an African, I am excited that the convention has finally been ratified, so it is in force, but also concerned that it happened so late that the text of the convention is, to say the least, outdated, and there are, of course, calls for review.  There are countries that are literally just ignoring the fact that they have, you know, more recent policies on the subject.  I think in terms of policy, we need to have a conversation about how to make sure that existing data protection policies are useful as we have this conversation about massive processing of people's data.  That takes me to my second point, people.  You know, I work in civil society, which means that most of my work is centered on people.  When we have all of these conversations ‑‑ over the last year, I mean: November 30th is just a month away, and November 30th is the birthday of ChatGPT; it's been one year ‑‑ at the center of all of this is people.  I'll give a simple example.
The reason people engage with social media or new platforms the way we do is that many people think it is literally magic.  You put in where you are going; the map tells you how to get there; the map tells you if there's going to be traffic.  It is almost like magic, many times, because people don't understand that when they put in the data, that's the input, and the output is what you get.  We need to have that conversation around AI.  I'm glad we're having all of these conversations over the last two or three days, for me to understand that when I put in data, I am training the system, and when I ask questions, the response reflects what input has been given.  Of course, that goes to the need for diversity, which we talked about earlier today.  In modeling AI we need to make sure there is diversity.  This is not about tokenism; this is about diversity.  If we build systems that don't understand the context, we're going to cause more problems than we solve.  Finally, on practice.  This is where the data protection commissions come in.  These are independent bodies that already understand the need to have conversations with various stakeholders.  The practice question is what happens if something goes wrong when you use any platform or system.  Someone shared an article with me a while ago.  I read the article and I was confused, because at the beginning it was accurate.  Then it gave me a master's degree I don't have from a school I haven't attended.  Then it said I was on a UN high-level panel.  It is close, and that is what makes it tricky.  This is, by the way, one area of criticism from me: what happens when I use this and something goes wrong?  Who do I talk to?  This is one place where people and institutions that answer questions can come in.  I'll just close here and say that it is really important that we center this on people; apart from saying that, there's a need to review policy, with people at the center of it.
When it comes to practice, what do you do when something goes wrong?  Who do you talk to?  We need to break down the black box.

>> DJ: Most people would not even be aware of their rights.  All right.  Keeping that initial energy and optimism, the floor is yours.

Please, Melody.

>> MELODY MUSONI: Good afternoon, everyone.  Thank you, Luca.  I'm happy that you've been bringing up the issue of data protection and how it can help with regulating AI.  I've been following a couple of discussions around AI policy and regulation, and I keep wondering: what do we want to regulate here?  When we look at law, it is quite vast; there are different areas of law.  Are we looking at the data protection perspective, at civil liability and criminal liability, or at intellectual property issues?  There's a myriad of issues, and I think when we have discussions around AI policy and regulation we need to keep at the back of our minds what exactly we want to regulate.  Are we regulating the industries?  Are we regulating the types of partnerships that we may end up having?  Or is it going to be specifically data protection?  I'm sure some of the speakers will speak on the limitations that we have with data protection.  Coming to my section of the chapter that we wrote on South Africa: what we did is look at the framework that Luca spoke about earlier, examining how its key elements can apply within the South African context.  Hopefully that can also be replicated in other African countries.  I'm going to touch on four important key findings from the research we conducted on South Africa.  The message that we are getting throughout is that there's a need for AI in Africa to solve African problems.  When you go through the frameworks at the African Union level, the digital transformation strategy and the data policy framework, that's the message that we keep getting: there's an urgency for Africa to look at AI and innovation and develop African solutions, homegrown solutions, to deal with African problems.  The second key point that I want to emphasise in looking at South Africa is the issue of computational capacity and data centers, and building the data and cloud market in Africa.
You understand, of course, that AI development depends greatly on the availability of computing infrastructure to host, process, and use data.  With South Africa, what we have noticed is that it has improved its computational capacity.  There have been discussions about having as many data centers within the country as possible, and companies have actually been working closely with government to make sure there are data centers on the continent.  The vision for the country is not just to have data centers in South Africa but to become a host, getting others to host their data within South Africa.  There was a draft policy published sometime in 2020 called the National Data and Cloud Policy, and that policy seemed to point towards a direction where South Africa wants to make sure locally owned entities are active in the data market, promoting the local processing and local storage of certain types of data.  As you can imagine, data localisation is not something that's popular, and there's been pushback from different stakeholders.  As we understand, there's an update to the draft policy; it is yet to be finalised, and the updated version is yet to be released.  What we anticipate, and want to see, is a revised data and cloud policy that focuses more on better regulation of foreign-owned infrastructure rather than on restricting existing infrastructure, while promoting public-private partnerships.  The third point I want to speak on, which also supports the notion of AI sovereignty for Africa and South Africa in particular, is the commitment towards AI skills development.  What we are getting from going through the fragmented policies is that South Africa is hoping to build its own pool of AI experts to research and develop AI-driven solutions and address some of the problems that it has.  There are different programmes, starting from basic primary education through to university level.
These focus on STEM subjects as well as AI-related subjects.  Of course, the question would be how long it will take for these initiatives to be implemented.  Most of them are still strategies and plans that are yet to be implemented, so it is still a long process.  The last point that I want to make is that South Africa still needs an AI strategy; it doesn't have a clear AI strategy or AI policy.  But I would like to think that it is important for countries, like Gbenga said, to first prioritise the frameworks they already have before putting an AI strategy or AI policy in place.  So, starting from the low-hanging fruits: we have data protection laws, but are they adequate to address some of the data processing issues?  Do we have cybercrime laws?  Are the legal frameworks addressing some of these issues?  Just to finalise, there are challenges that the country, and other African countries, are facing or are likely to face in the development of AI systems and even with data processing.  One is the issue of power outages and reliable power supply: in South Africa it is now a big problem.  Almost every day there are electrical outages and load shedding, and they are saying it is going to run for a period of two years.  Imagine you rely on electricity: the amount of time you can spend online is going to be cut short, because there's no electricity.  That's a challenge that the country is facing.  The second challenge, which applies to other countries as well, is the issue of meaningful connectivity.  Yes, there has been massive deployment of different digital infrastructures, and now they are moving to 4G and 5G, but still not all of the roughly 60 million people in the country are connected.  There is also the need for stronger cybersecurity: there are laws on cybercrime and data protection, but there's still no strategy specifically to do with cybersecurity.
Coming to the last point, on implementation of the laws that we have, especially data protection laws: there's always going to be the challenge that our data regulators may not have the capacity or even the expertise to understand some of the AI tools that are in place, so as to be in a position to assist with implementation and enforcement of the laws.

>> LUCA BELLI: Thank you very much.  These issues are interconnected, and many of them are structural issues.  Particularly, I would like to stress something that you mentioned about compute and cloud computing.  There are actually three main corporations that have almost 70% of the cloud computing market, with a little bit of Chinese participation from Alibaba, basically serving the entire world for generative AI.  That's a huge challenge.  If you want to build an alternative, it is an investment that takes ages: a ten-year investment in the best-case scenario to have something reliable.  No government is in charge for ten years or has the vision to do something like that, so it is really something worrying.  All right.  This is now the moment for the first break for questions.  We can take two questions, and then we'll get into the second segment of the session.  If you have questions, you could line up ‑‑ yes, raise your hand.  There's a microphone there for questions.  We can take two, have a quick round of replies, then get into the second segment and take more questions.  All right.  We have one here.  I see two hands here.  If you could use the microphone and mention who you are.  Thank you very much.

>> Hi.  I'm Shugue.  I work for an organisation in human rights.  My background is not in AI, but I was interested in the conversation, and I really wanted to understand the question it is proposing: whether generative AI can be compatible with data protection.  I understand the challenges that we've all been speaking about; they've been insightful.  For the second phase of this panel, I would be super interested to know if there are frameworks, or any sort of approaches, where this has basically worked in some regions.  I was really curious, as it is very much in line with our work on statelessness and nationality.

>> LUCA BELLI: Yes.  In the second segment we'll think about this.

>> I want to speak about the need for privacy principles for generative AI.  My question is: what are some of the key privacy principles at the generative AI level, so that platforms can comply with them?  We have worked on identifying 17 of them in a paper.  This is just the first step, to seek input at this global forum.  Then I would like to test those principles by deploying them across around 50 use cases and make them better.  If, at the normative level, these are some of the key principles, that level of consensus building would be really helpful for our people.

>> LUCA BELLI: Fantastic.  We have 24 chapters here, and given the time and space constraints we are not able to have everyone speak.  We plan to have a webinar where everyone can present, or, if you want, you can have the conversation here in this segment.  If anyone from the audience wants to give a reply, you are welcome to do so.  We'll have feedback after the session.

>> Thank you, and thank you to the previous speakers.  I am glad to hear the voice of the southern countries; that's very important as we face the problem of AI and data protection.  It is a very big question.  I work a lot on this problem, and it is quite clear that AI puts into question a number of data protection principles; I would like to have your feeling about that.  First, the question of finality, the question of purpose: normally you must have a purpose, and with a generative AI system you no longer have the possibility of a specified purpose.  The second problem is the question of minimisation: it is not working, because I don't know which kind of data will be interesting for achieving new purposes.  Another problem that we've mentioned is explainability: it is very difficult to make these systems explainable, because there is no definite logic; it is quite clear that they work on statistics, not on a certain logic, so you have no logic at all.  You have all of these problems, and we might come back to the issue that these systems work more and more on non-personal data, yet they are used for profiling people.  The legislation needs to evolve quickly.

>> LUCA BELLI: All right.  Any questions?  Initial replies from the panel?  Yes?  Melody, you can go first.

>> MELODY MUSONI: Okay.  We have more questions than we have answers.  Looking at the particular laws of South Africa, they provide a framework: when it comes to automated processing, there are conditions that have to be met.  They look at basic data protection principles: transparency, data subject rights, and purpose limitation.  The principles are there; I think application is key.  Because it is much easier to say, okay, in this context these are the principles: on processing, you are transparent, data subjects exercise their rights.  In principle ‑‑ in theory ‑‑ the principles apply.  But in practice, I think we have more questions.  That's what I was saying about the data regulators and their level of expertise: how the technical side of AI can be translated into the legal side.  So in my opinion there are more questions than there are answers.

>> ARMANDO MANZUETA: Like I said, there are many questions still to be answered regarding these uses.  I think this applies to any system, regarding the use of data in any system, any platform or technology: the quality will depend on the system itself.  And if we don't have, as you said, the proper protections in place, and we don't have data that's properly collected and properly minimized, then the system will, of course, do a profiling of the person, the company, or the subject itself in a way that doesn't necessarily translate into reality or provide a solution to a certain problem.  So in this case, besides having strong data protection rules, there should be strong data collection and validation of the data itself for AI, or any system, to provide a solution or help at all.  That's the main challenge that we as governments have, especially in developing nations: having good data quality is the main issue that we're facing right now in putting this to use.

>> LUCA BELLI: Okay.  This is a good segue to the second segment of the session.  Let me give the floor to a regulator: we have Jonathan Mendoza.  Please, Jonathan, the floor is yours.

>> JONATHAN MENDOZA: Thank you, Luca.  Good afternoon.  How are you?  I want to thank the organizers for bringing this topic to the table.  Data governance is a critical topic at this point in the history of technological advancement.  Artificial intelligence is rapidly evolving, offering the potential for innovation, growth, and improvement in our daily lives, but in the same way we must also recognise the challenges it poses for regulation and ethical use, and the importance of promoting AI transparency and accountability.  In the Latin American region, steps have been taken to regulate artificial intelligence.  The region is diverse and has technological deficiencies that only allow access to technology for some sectors and groups of the population; therefore, closing the digital divide is a primary task.  Even though there are some exercises that are part of the efforts to regulate artificial intelligence, there is not yet a full instrument dedicated entirely to it.  In 2019, the member authorities of the Ibero-American Data Protection Network issued general recommendations for the processing of personal data in artificial intelligence.  The region also seems to be moving closer to the ethical use of technology.  But how can we ensure that algorithms are fair if they are not accessible to public scrutiny?  How can we balance the ethical design and implementation of AI?  Artificial intelligence can contribute to the transformation of development models to make them more productive and sustainable.  To take advantage of the opportunities and minimise the risks, regional and multilateral regulation is currently one of the requirements.  According to a Latin American study in 2023, Argentina and Mexico influence the global discussion on AI.  In the global context, according to the McKinsey Global Institute, the use and development of AI in multiple industries will bring mixed economic and labour results.  The 2023 estimations are:
$13 trillion will be the impact of AI on the global economy; 1.2% will be the additional contribution to GDP growth; and 45% of the benefits of AI will go to finance, health care, and the automotive sector.  As it becomes more difficult for humans to understand how AI technology works, it will become harder to resolve the problems it raises.  In our interconnected world, cooperation plays a key role, because AI knows no borders, and international cooperation is not just beneficial but imperative.  We must ensure that AI respects fundamental rights, avoiding biases.  The paper with my colleagues is a proposal to start that debate on AI in the Latin American region.  Cooperation and strategic alliances with the Organization of American States will help us achieve this goal.  To facilitate the implementation of this proposal, it is suitable to create a community of experts that analyses and agrees on the importance of non-binding mechanisms regarding the use and implementation of existing and yet-to-be-developed technologies, given the risks they could imply for the private life of users.  The objective of this committee of experts must be built on goodwill and on the exchange of knowledge and good practices that promote international cooperation based on multilateralism and the opportunities it affords us with human rights, joining efforts with other organisations that have spoken out, as well as with groups of economic powers that have shown their concern about the panorama of the new digital age.  The work of the committee will be based on mechanisms that seek to analyse specific cases, issue recommendations, provide follow-up, and develop cooperation.  Let's be part of the conversation to realise the benefits of AI for our societies while minimizing its potential risks.  We must remain committed to strengthening these efforts to ensure that AI serves humanity's best interest.

>> LUCA BELLI: Thank you, Jonathan.  INAI has been doing a lot of good work; there are recommendations on how to work with generative AI.  Staying in the Latin American region, I would like to ask Camila, who is the mind behind the construction of this group, to provide an overview of what has happened in Brazil.

>>CAMILA LEITE: Perfect.  Thank you for the invitation and for the creation of the group, and thank you for the amazing job that you are doing.  It's a pleasure to be here with you.  I'm from a consumer organisation in Brazil, so I would like to focus on that perspective.  We are talking about data privacy, but not only about data privacy; there are several rights I'm going to consider.  I'm going to talk quickly about the general risks related to the challenges of generative AI, and then about the Brazilian context in terms of legislation and the ways ahead.  AI has lots of possibilities: financial services, communication services, health; all of these areas can benefit from AI and generative AI.  As we can see, it has two sides: it can be both an opportunity and a challenge to deal with, especially because innovation moves at a speed that regulation does not follow.  That's why it is also important to think about current legislation that has to be applied when we are facing this.  Some general risks that you may be tired of hearing: we have issues related to power and to the use of the technology to manipulate people, bias and discrimination, privacy, and vulnerabilities.  We also have a challenge coming from the Global South: how do we protect people when we rely on other countries and other companies' technologies?  How can we have sufficient power over that?  It is a great challenge.  Obviously, I don't have an answer; I hope we can build on that.  Another important point is solutionism, the idea that this kind of technology brings the solution for everything; when we assume that, we disregard the context.  That's the reason why I want to talk more about Brazil.  But before talking about Brazil and the different laws, I would also like to raise the issue of concentration of power.  When we are talking about generative AI, of course, we think about ChatGPT, but we're not only talking about ChatGPT.  When we're talking about the Global South, we depend not only on foreign companies; we also rely on big techs.  That is why competition law, and consumer law in the end, are necessary, putting people at the centre; we are all consumers.  The first law that has to be complied with is competition law.  The second one is data protection.  To develop on that, I'll talk about a case in Brazil that was brought by a well-known person, who is Luca Belli.  Also, consumer rights have to be respected: we are talking about transparency and information, basically traditional consumer rights.  Beyond that we also have IP law, of course, and copyright, but I'm not going to focus on that.  Okay, talking about Brazil.  Brazil is a huge market, not only in general terms, but also for AI: Brazil is the fourth country in the use of ChatGPT.  Since this is a concern, I'm going to talk about the complaint that was presented by Luca about not complying with the data protection law.  I'm going to focus on the rights that were requested in this petition.  The first is to know the identity of the controller of the data; this is a good one to know.  The second one is the right to have personal data respected; as Luca mentioned, this is not only data protection, it is a human right in the end.  The third one is the right to have access to clear information on the criteria and procedures used in the automated responses.  These are three topics that Luca brought, and everyone is affected by them, not only in Brazil but also in other countries.  This kind of complaint could also have been brought by the consumer authority; we are talking about access to information in the end.  This is a provocation for you: we have to think about how we can advance on that in Brazil and in other countries.  Unfortunately, I have sad news: they didn't go forward on that.  And it is not a minor issue; it is an important one.  Nowadays, the data protection authority is holding a consultation on an AI sandbox, but when cases like this are brought, when Luca brings a case like this, they don't advance on it.  I don't know why.  The second context that Jonathan also brought ‑‑

>> Let me ask you to wrap up in one minute.

>>CAMILA LEITE: Okay, just one minute.  The data protection network in the Ibero-American region was focusing on ChatGPT, the exercise of rights, and the transfer of data.  We have to comply with existing laws, and we can advance on future frameworks, as you were mentioning.  We hope to advance on that; meanwhile, we have to comply with existing laws.  Thank you.

>> LUCA BELLI: Thank you very much.  This is a concern that concerns me personally.  Even when there are rules in place and rights in place, every law needs to have elements of flexibility, to be able not to regulate technology in a way that is too strict and to allow the advancement of technology.  But when there are clauses of flexibility, what is adequate information about how your data is processed?  What is adequate information about the criteria by which your data is utilised to train models?  "Adequate" is a favourite word of lawyers, together with "reasonable".  You can charge hefty prices and fees; but what is adequate and reasonable?  The role of the regulator is supposed to be to say what is reasonable.  It is frustrating when the regulator doesn't do it.  One also finds some curious practices that some corporations maybe consider adequate or reasonable, but that are hard to believe in as reasonable and adequate practices.  Anyway, not to get into personal matters.  I would like to ask our online panelists: can you hear us?  I would like to ask if Wayne Wang is connected.

>> Sure.

>> LUCA BELLI: We have specific recommendations on it in the book.  What is the situation with regard to data protection?  Wayne, the floor is yours.

>> WAYNE WANG: Thank you for having me, at least virtually.  It is quite nice to see new and old friends, at least virtually.  Yes, as per the content of our report, I'm supposed to share some Asian perspectives on regulating artificial intelligence in the first place.  Having come back to Asia, I have attended quite a few events, and I have observed quite a few jurisdictions' pushes and regulations for AI.  Some prefer to wait and see where things arrive; they also prefer minor steps, what we call precise regulation.  Other governance models prefer a light-touch and voluntary approach for AI, basically intending to use AI as a tool for economic growth and improving quality of life.  But they also acknowledge that they might have to adapt to existing global frameworks.  This is the perception coming from others like the EU, Brazil, and the United States.  As all of us know, the EU is adopting the AI Act and has come out ahead in the space of ideas so far.  The United States tends to stick to market-based ideas.  China has a sector-specific approach, regulating recommendation algorithms, deep synthesis technology, and generative AI, as Luca mentioned; it is becoming an early regulator, with articles that are clearly relevant to regulating artificial intelligence.  Under the newly established measures on generative AI, they highlight the importance of issues such as the use of data and underlying models in compliance with laws such as IP and data protection.  This is becoming more interesting, as there is quite a change in the regulatory models in the United States and China.  As you are aware, the recent bipartisan framework in the United States focuses on legal accountability and consumer protection, promoting licensing by an independent oversight entity.  Similarly in China, a model law has been drafted proposing a license-based approach to governance.  There are some similarities, and there are also some challenges as well.  When we look at the entities along the AI data chain in terms of disclosure, data sharing, and the promotion of transparent AI systems, some of the obligations are far-reaching with regard to governance; this needs more study and more approaches.  Those new developments basically highlight the challenges of governing AI with a focus on tailored obligations.  You mentioned that as a solution; still, there are operational issues, as I mentioned in our chapter: reconciling the tension between adaptability and regulatory predictability to ensure governance within the landscape.  We will definitely keep coming across the question of regulation versus innovation, and I think this is the perfect place to discuss how to achieve the goal.  I look forward to continuing the collaboration beyond the group in the near future.  Okay, that's all from me today.  Thank you for having me virtually here today.  Back to Luca.

>> LUCA BELLI: Thank you, Wayne.  This is a good segue into the last speaker of this segment.  Smriti, can you hear us?  Are you connected?

>> SMRITI PARSHEERA: Yes.  I can hear you.

>> LUCA BELLI: Okay.  We can expand on this in the last segment.  The floor is yours.

>> SMRITI PARSHEERA: Thank you.  Hello to everyone in the room and online.  I'm going to be broader than the suggested topic, which is most specific to generative AI.  My intervention in the book talks about transparency and what transparency should mean in the AI context.  This is a term that is regarded and accepted in most AI strategies; they talk about the principle of transparency among others.  It is reflected in different ways in data protection laws, and the philosophy of transparency comes about when you think about notice of practices, access to information, and collection; all of this speaks in some way to transparency, and very often transparency is connected with explainability and accountability.  When we think about it in the AI context, even the tools and discussions are very much about the technical side of transparency: transparency of the model itself.  The paper argues that we need to step back and take a broader lens.  We know there are a number of actors involved, and therefore transparency should permeate throughout the entire project.  I take the case of a facial recognition system being used in airports; you see similar systems elsewhere.  The argument is that there are three layers of transparency.  The first is about rationale and legality: how did the project come about?  Is there a law backing it?  Which government took the decision, through what process?  The second is about technical transparency: what kind of data was used?  Who designed the code?  What does the code do?  How well does it work?  The third is about operational transparency: which is the entity that's giving effect to this?  How does the system work?  What are the kinds of failures that you see?  What are the accountability mechanisms?  Do they answer to the parliament or ‑‑ I apply this in the paper.  I'm not going to go into great detail about the findings, but there were three observations that I make.  One is about the debate on whether we should bring in such a system in the first place, et cetera.  Second, there's a culture of third parties working as philanthropies, think tanks, and consultants.  Finally, you have entities outside of the public and private sectors, non-profits, running the systems.  Which tools of transparency apply to those entities?  We see in the case study that the design does not enable the application of transparency obligations and public disclosure.  I'll stop with that.  People in the room, I would love to hear your comments if you have a chance to read the paper later.

>> LUCA BELLI: Fantastic.  We will have a series of actions in the next five to ten minutes.  We'll have the possibility for participants to ask questions.  At the same time, the speakers of the initial two rounds will move to the first row of chairs, and the speakers of the last round will move to this part; speakers have to be here.  If you have questions in the room, this is the moment for you to ask them using the microphone there.  While we take questions, let me also thank Shilpa, who is our remote moderator.  We can take a question from the online participants.

>> There's a question from Mr. Moucaberry: could shaping international regulation help to manage the risks at the international level?  Does the competition between AI powers allow this?  What is the role of the IGF?

>> LUCA BELLI: It is an open question; maybe the next set of panelists has ideas on hand.  My personal take is that it will take a lot of time before we have an international agreement on any regime on AI.  That's precisely the reason why some tech executives, or at least some of them, may advocate for an international regime: it would take seven to ten years to become even slightly meaningful.  I don't know if we have any other thoughts on an international organisation.  Actually, here is a really good person to ask: he's the next speaker in this slot, and he has written about the topic.  No one better than you can reply to this, so please start with your contribution.

>> MICHAEL: Thank you.  I'm amazed at how quickly this came together.  My paper is about powerful regulatory blocs and examines the implications of this trend for the emerging AI governance landscape.  I'm going to have to go through this quickly, so I won't go too deeply into the detail.  I discuss the different structures.  There's a broader tension between the values and efficiencies of harmonisation and the tendency of harmonised standards, whether at the UN level or through the Brussels effect or the California effect or whatever, to trample over important local contexts, not only in terms of the populations being impacted by AI, but also, at a basic level, in terms of how harm is defined in any legislative framework.  I argue in the paper that there's a challenge in trying to develop a harmonised structure: the people who tend to have a seat at the table tend to be from wealthier parts of the world.  I explore that tension.  I'll caution by saying it can be overly reductionist to view the dynamic as Global North versus Global South; there are a lot of different dimensions to this.  Ultimately, I say that for any framework it is important to query whether it meets the needs and concerns of those impacted by the technology, and that the development and harmonisation of these standards has the potential to further entrench existing imbalances.  That's the two-minute version of my paper.  I'm happy to chat with the folks that have questions.

>> LUCA BELLI: Thank you for the reply and the presentation of your paper.  I think you have a question there.  We can do this: we can take the question now, and it will be the first question to be replied to at the end of the presentations.

>> I wanted to build on what Michael just said.  I'm also Michael; I work in Washington.  One of my colleagues is Anu Bradford, who wrote the book, "The Brussels Effect."

We have a debate: is it the Brussels effect or the Brussels defect?  Other jurisdictions are taking the exact language.  A more important problem with the EU AI Act is that they are rushing it through, and they haven't got a definition of what AI is.  I'm a physicist, not a lawyer.  In physics, the first thing we do is get the definitions right, not just define what is not going to be regulated.  I guess my question is: how do we set big, aspirational goals for a big field of technology that will be outdated in 18 months?

>> LUCA BELLI: Okay.  Because we started ten minutes late, we might have ten more minutes.  The next speaker is Shekar.

>> SHEKAR: Thank you.  Perfect, time to rush through the paper.  Our chapter talks a little bit about, and answers, some of the questions that the first panel also spoke about.  I'll very briefly touch upon the three things we do in our paper and the background for that.  As we all know, there's already a lot of buzz around AI regulation and AI technologies, and our chapter is a response to that.  We see a lot of frameworks happening at various levels, and legislation cropping up here and there.  One very important question we tried to answer through the chapter is: if tomorrow we bring in a framework and say that AI developers have to follow a certain set of principles, will everything become fine?  That's where the paper comes in: it looks at developers, deployers, and the other players in the space of the technology.  AI technology used to be B2B and now it is B2C; we also interface with it.  That is the very specific question: a framework has to work at the ecosystem level, with responsibilities divided among the various stakeholders, collectively and collaboratively.  We talk about exclusion: if we look at the impact of what happens and its first implication, exclusion doesn't occur just because one aspect has gone wrong.  There are various aspects of the life cycle, and we all know there are different players involved; all of these implications come together and make the exclusion happen.  We went about actually mapping that.  This also answers the point on liability: what is the liability or responsibility?  We need to understand who the players are and what they do.  After doing this, obviously: what are the principles that everybody has to follow?  This also answers somebody's question from online.  We have a lot of principles available; we need to have a conversation about them.  When you have those principles, you can say: this is a principle I resonate with.  I think that's the starting point, and maybe that's an answer to the question of starting at the international level: everybody coming together, discussing, and doing it collaboratively at the international level.  So we mapped all of the principles.  The third point is operationalisation.  In operationalisation, what we went about doing is addressing a specific gap: bringing out the different stages and showing what each principle means at each stage.  Take "human in the loop" as a principle: when we come to the mapping and design stage, it means something different; it means you have to engage with the stakeholders or bring the affected population into the room.  Same principle, different meaning.  That's the difference that we bring.  Thirdly, the final point before I conclude is the impact: we map the impacts, the operationalisation, and the implementation.  That goes to governments.  To touch also on what somebody mentioned, I think the last speaker: there's a market for it, as in Brazil.  Governments need to balance the approach: it does not necessarily have to be compliance-based, it can be market-based.  How can we enable the market?  We need a framework in which the market sees the value proposition of responsible AI.  This is what we do in the paper.

>> LUCA BELLI: Fantastic.  Let me thank the speakers for being concise; we have time constraints, and they are kind enough to give us five to ten minutes to finalise.  Now the floor goes to Kazim.

>> KAZIM RIZVI: Thank you so much.  Building on what was just discussed: we have two parts in the brief, and the second looks at mapping the principles.  We are looking at certain sectors; we may have to look at understanding synergies and conflicts with regard to the AI principles and how they play out.  We look at two sectors: the first is finance, and the second is health care technology.  For these two, we come up with certain principles that are critical to making sure that the principles are actually deployed on the ground.  The paper has adopted an approach where it looks at the technical and non-technical layers of AI.  Within the technical layer, you are looking at different implementation solutions and how they align with the responsible AI that we're developing.  The non-technical layer explores the strategies for responsible implementation, ethical directions, et cetera.  All of this has been done through a multistakeholder approach towards mapping the principles; that's something we've been working on, because we believe that you need different sets of stakeholders, industry, civil society, academia, and government, to come together and look at how the principles will be operationalised.  We've spoken to experts, and we will be holding certain discussions to see if some of the principles can be implemented effectively.  We also look at regulation.  We have identified that there is no AI-specific act in India, so we have tried to come up with some sort of principles where you have the privacy law, the IT law, and different laws which are coming up: how do they work together and harmonise with each other with respect to regulation in the future?  At one level, you are talking about domestic coordination: how can existing Internet laws be harmonised?  The second level is international; that's what the previous speakers were talking about.  This is something we looked at at a global level to build the framework: what is required for health care may not be necessary for the financial sector, and that kind of mapping is what we're doing right now.  We're looking at the approaches: regulatory mechanisms, public-private partnerships, and the protections within the developers.  How do we ensure safety?  That's something we've looked at as well.  The idea is to look at deployment and to test it with one of these two sectors.

>> LUCA BELLI: The technical support is telling me we have to move very fast.

So, the last two speakers will very quickly present, and then we have the very last presentation.  Can we have the presentation online?  In the interest of time ‑‑ yes, we have the presentation.

>> Thank you.  I will quickly dive into the relationship between artificial intelligence and corporate governance.  Artificial intelligence is shaping our environment, and corporate governance and business processes are also being affected by this technological revolution.  Indeed, we are hearing about artificial intelligence as directors; legally speaking, this is a hard question at this time.  I have the feeling we are going towards a new corporate governance model where artificial intelligence is an instrument, or can maybe be instituted as a corrective, for safety, decision making, and compliance.  I have a question for myself: are we going towards the technologisation of the human being?  I'm afraid of it.  As we know, we have a lot of problems in this revolution; the main ones I work on in my paper are the transparency and liability problems.  For this reason, I tried to create a framework to allow corporations to use this technology in an ethical way.  The framework can analyse and improve corporate governance in seven steps to control artificial intelligence, with human in the loop and human on the loop.  Of course, this model is grounded in human rights and ethical AI: it is better to prevent than to react.  Under corporate governance, and quickly, I'm going to conclude: counsel can act as a filter between the stakeholders and the output of the artificial intelligence.  I conclude by asking not only myself but all of you: should legislators start to think about the technology as a corporate member?  If you mean with reference to the board of administration, my answer is yes.  I think that's the time.  Thank you.

>> LUCA BELLI: Fantastic.  Thank you for doing that in three minutes.  Then we have the final presentation.  Please, Liisa, the floor is yours.

>> LIISA JANSSENS: Thank you.  I will briefly explain where I'm from, because that is important for what I do.  I have a background in law and philosophy, and I combine the two in projects where I work with engineers and technicians; I'm now in my seventh year of doing so.  How am I doing that?  I found a way to work together across disciplines: you can meet other disciplines while staying in your own, solving problems from the technical point of view and connecting them to, for example, rule of law mechanisms.  I'm trying to look for new requirements from the point of view of rule of law tenets.  On those we can find agreement, also between the European Union and the USA: the rule of law is about good governance, and the same applies to AI.  I also found a way to work with scenarios.  A scenario is good because, in fact, you can stress test the project, if you have digital twins and maybe a real-setting test environment.  Thank you.

>> LUCA BELLI: Fantastic.  As everyone has been patient enough to stay here until the end of the day (it is 6:36), you all deserve a free complimentary copy of the book.  The first ones to come up here will get one; the others will have free access to the PDF that you can download on the page of the Data and AI Governance Coalition.  You can use the form to give us feedback, or you can speak with us now; we can have a drink together.  All feedback is very welcome.  Thank you very much.  Really, thank you very much.  I don't want to diminish the importance of the first two segments, but this last one has been fantastic.  Thanks a lot to the technical teams.  Excellent.  Thank you very much.