IGF 2022 Day 0 Event #27 Regulatory challenges of addressing advanced technologies (AI and metaverse) – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ‑‑ who is the Director of Information Society and Action against Crime at the Council of Europe.  His directorate carries out monitoring and cooperation activities on freedom of expression, data protection, AI, Internet governance, cybercrime, corruption and money laundering.  So, Jan, over to you and let us start.

>> Thank you very much, Thomas, for this introduction.  I think the sound is now okay.  I hope you can hear me.

>> THOMAS SCHNEIDER:  Yes.  I can hear you.

>> It's a privilege for me.  Thank you very much for giving me the opportunity to provide you with specific information on the work of the Council of Europe on Artificial Intelligence.  And we very much appreciate the possibility to be able to exchange, of course, not only with our usual European partners, but also with states around the world.  As always, the Council of Europe is organizing a series of events at the IGF.  This time, we will be organizing with the UN Office on Genocide Prevention a session on combating hate speech.  We will also have a workshop on cyber attacks and e-evidence, drawing attention to our work on the Cybercrime Convention, its committee the T-CY, the Octopus Programme and, of course, the latest protocol to that Cybercrime Convention.  We will also be promoting democratic (?) online.  These are among the things organized by the Council at this IGF.

Today I'm very honored to be able to speak to you on the work on Artificial Intelligence.  Particularly honored because this session is chaired by the Chair of that work, Mr. Thomas Schneider, and the Vice Chair is also participating in the session.  So anything I forget or fail to mention will be introduced by them.  I will try to stick to the allotted time, but just a few words to stress that the Council of Europe is very distinct.  It brings 46 member states together.  It is Europe's oldest political organization, and it strives to promote human rights and democracy in a variety of ways, but primarily through legal cooperation and agreeing on common standards in the form of treaties.  So not just political recommendations or mere statements, but treaties.  And the Council of Europe has some 200-plus treaties now, of which a whole series are open also to non-member states.

Also, the Council of Europe realized very early on that in order to protect human rights and democracy, one had to address new technologies, which is why 40 years ago the Council of Europe established the world's first treaty on data protection, Convention 108, which has now been ratified by 55 states, and some 70 countries participate in its work, a number of them as observers.  21 years ago, the Council of Europe established the Cybercrime Convention that I mentioned and on which a workshop will be organized.  That convention was recently ratified by Nigeria as the 67th state party.  We work with more than 130 countries on every continent to help with capacity building and, for the parties to the convention, the implementation of the convention's obligations.  So it is no surprise that the Council of Europe also realized that artificial intelligence was a technology that, with all its different benefits, can also have a real impact on human rights, which is why in 2019 a committee was set up by our governments, the then 47 member states, as the Council of Europe has expelled a member this year.  So the member states established a committee with two tasks in 2019: first, to assess whether new regulation, new law, was necessary with regard to Artificial Intelligence, and secondly, if the answer to that first question was yes, to already identify what elements should be dealt with in such a new legal treaty.

At the end of 2020, despite the pandemic, the committee, which was called CAHAI and was chaired by my co-panelist, answered the first question: yes, there are important gaps; existing regulation is not sufficient.  We need new legal principles to ensure that Artificial Intelligence will not interfere with human rights but, on the contrary, foster these values.

The committee then went on in its second year, 2021, to identify a number of elements that could usefully be put in the form of what was then still called a possible legal instrument, and presented this report at the end of '21.  Again, unanimously.  On the basis of that report, the Council of Europe started this year to negotiate a treaty.  The legal instrument should be binding.  It should be a treaty, we hope the world's first treaty on Artificial Intelligence and human rights, and those negotiations started this year.  Thomas Schneider was elected to chair them.  We have observer states at the table, and I think this is important to stress: Canada, Mexico, Japan and the United States.  In addition, Israel, which is not an observer state, made a request to join the negotiations and is fully participating.  In addition, we have business partners, so-called digital partners: global tech companies, including Meta, represented in this panel, but also a series of smaller European associations of Internet service providers, telecoms operators, and standards bodies such as the IEEE.  So the business community is very much represented, as are Civil Society and academia.  So it's a truly multi-stakeholder process.  That it is urgent to address the human rights aspects of Artificial Intelligence has become clear throughout the world in a number of cases where governments used Artificial Intelligence and the results were not always what was hoped for.  In certain cases, such as a particular scandal, or a tragedy I should say because it led to loss of life, in my own country the tax authorities used Artificial Intelligence to detect fraud with regard to child benefits and the system got it dramatically wrong.  It led to people committing suicide.  1600 children were taken away from their parents.  
The government failed, and it was a dramatic demonstration of what can happen if there is insufficient governance of AI, if the right precautions are not taken and the process is not supervised.  This and other examples made the Committee of Ministers realize that not only should we negotiate, we should also do it fast.  Therefore, the Council of Europe has set a deadline of the 15th of November, 2023.  So within one year minus one week, the draft treaty should be presented to our governments.  That is a (?), and many of you will be aware of treaty negotiations in the context of the United Nations or other regional organizations, and you will probably agree with me that negotiating a treaty within a year is really very ambitious; however, as I mentioned, we have two years of preparatory work.  The elements are already outlined in great detail, and we therefore think that it is possible to negotiate this treaty within a year.

The Council of Europe's work is very much complementary to that of other international organizations, such as the OECD and UNESCO, and their principles regarding AI, and the European Union is working on the Union's AI Act, which was conceived initially as an internal market effort, therefore dealing with products and ensuring products are safe.  The Council of Europe approach is very much a procedural one: to ensure that when governments use Artificial Intelligence themselves, or let others use it, in ways that affect or may affect human rights and the rule of law, a number of safeguards are built in.  During this session you will hear more of those, and you will not be surprised by the substance.  Many of them are based on principles that already exist in ethical charters that have been developed around the world; however, ethical charters are not binding, they do not confer rights upon citizens, they cannot be (inaudible) before a court, and they are not translated into national legislation.  As a global convention, and the Cybercrime Convention I mentioned earlier is the example we would like to follow, its provisions will be transposed into national law.  They will provide real (inaudible) and remedies to our citizens.

The provisions in the convention will be supplemented by another document, which is a human rights, rule of law and democracy impact assessment.  We believe it is essential that before Artificial Intelligence is used by or on behalf of public authorities, before decisions are delegated to machines, the impact of that decision is properly assessed by qualified human beings.  We are developing this tool with an institute in the United Kingdom, and it will be linked to the treaty, but in such a way that it can be updated more easily than a treaty can, in order to remain flexible and take account of future technological developments.  Impact assessment is really, in our mind, the key element.  Of course the human rights impact, but also the environmental impact.  You know that in order to train Artificial Intelligence systems, a lot of energy is needed, and the question arises: is a very marginal increase in accuracy worth an enormous investment in a carbon footprint?  These are questions to be asked.  We look very much forward to the discussion.  We hope that many of you will also consider following the negotiations in more detail, possibly requesting to join them, and we, of course, are very interested to learn from you, from our partners all over the world, how you are progressing with regard to this very urgent question: how can such a powerful and very positive technology like Artificial Intelligence be harnessed for good, and how can we ensure that, either by mistake or by design, it is not used in ways that lead to human rights violations or undermine the rule of law and democracy?  I look very much forward to the discussions.  And thank you very much.

>> THOMAS SCHNEIDER:  Thank you very much.  We have experts from several continents, and let me start by giving the floor to the first speaker in the panel, who is from the African continent.  Sally is an international AI expert and currently a Ph.D. candidate at Royal Holloway, University of London.  She served as the AI advisor in Egypt, where she was also working on the national AI strategy.  She has also been a Vice Chair of the UNESCO ad hoc expert group that was tasked with the development of the draft recommendation, which you all know, on the ethics of AI that was adopted last year.  She has had a number of other functions.  Let me address the question to you, Sally.  What have been the main challenges of dealing with something like AI on the national level, but also on the regional level, in a country like Egypt or others?  How did you deal with them?  What are you doing differently than maybe on other continents?  And also an important question: what role should international organizations play in helping to create a level playing field for AI, ideally across the globe?  So Sally, over to you.

>> Thank you, Thomas.  Good afternoon, good evening, good morning.  Great to be here.  I hope you can all hear me okay.  So we started thinking about the AI strategy in Egypt in late 2018, early 2019, and at the time, it wasn't a given that every country in the world would have an AI strategy.  There were, understandably, a lot of fears and a lot of misunderstandings around AI.  There was a link between AI and automation, especially in things like manufacturing, and there was this perception that AI was an automation technology designed for developed countries that suffer from a lack of human labor in sufficient numbers, whereas in developing countries, which do not necessarily have that problem, it would replace human labor and damage the economies.  You all know the narrative.  Luckily, this started changing a little bit.  We had quite an intensive national debate about it, and it started to shift towards AI as this technology, or I'd like to call it an umbrella of technologies, that can help achieve a developing country's goals in line with the UN SDGs, but also the local aspirations and priorities of citizens.  From that came the idea for a national AI strategy, and we decided to really emphasize the role that AI can play in development by choosing the slogan "AI for development and prosperity."  And this is how we structured the strategy, around four pillars.  The first is AI for government: how do we build on existing digital transformation efforts within the government to then add an AI layer on top, to facilitate decision making, to standardize workflows across the different government entities, and to compensate a little bit for the lack of training of people and so on.  The second one was the absorption of AI in various sectors, and we chose a number of sectors like agriculture, healthcare, environment and smart cities to start with, and to explore how we can maximize the use of AI to create value in those sectors.  
And then the third one was around capacity building, be it individual or institutional capacity building.  Finally, the fourth is the role that Egypt can play internationally with regard to various recommendations and guidelines and treaties and so on.

Now, to talk about challenges, I think we faced pretty much the same challenges that any developing country would face when thinking about implementing an AI strategy.  The thing about AI is that it builds on so many existing elements and foundations, and if you don't have these elements in place, then you're automatically disadvantaged.  You're disadvantaged in the various AI development rankings and so on, but you're also far behind in how quickly you can adopt and promote AI.  And I've structured these challenges, or these foundations, along three main areas, namely human, institutional and cultural.  If you're implementing AI in a country in the global south that has a 40 or 50% literacy rate, then this is a very different place to start from than a country with near-universal literacy, where most people go to college, already know how to at least deal with technology, and you can just build on the knowledge that you can assume is there.  The picture looks very different in developing countries, where quite often you have to teach people how to read and write and teach them the basics of technology, even if they're just going to be normal users, and only then can you start with AI.  That is a very difficult road, not only in the sheer number of people that you need to educate, but in finding the people who can teach them and finding the tools that can do the teaching.  So teaching the teacher, or training the trainer, becomes a huge priority if you want to implement this at scale.  And then, of course, you need to also institutionalize that mindset across organizations as well, in the public or private sector.

The second area is the institutional bit.  But let me first tell you how we addressed, or are addressing, that first one in Egypt, namely capacity building and the importance of catching up.  What we did is we developed a pyramid strategy, because for Egyptians, everything has to come in the shape of a pyramid.  We developed two pyramids, based on the roles that different people and different organizations can play in an AI-driven ecosystem.  The first covers the very tech-centered roles.  We have broken down a typical AI development team, whether it's data scientists, engineers, machine learning experts, AI architects, QA specialists and so on, and we came up with a structure of how many we would need and what we would need to teach them.  This is to build the ecosystem of AI development in the country.

The other pyramid has to do with non-technical roles.  First and most importantly, the domain experts: people who work in domains like healthcare, agriculture or infrastructure, whatever domain you're trying to implement AI in, and who can help you absorb these technologies and decide where the data is, what the right results are, and so on and so forth.  This pyramid also includes things like teaching AI at the highest levels of business and government, AI for executives essentially, but also the wider base of children in schools and students in universities: when do we start teaching them about AI, in what way can they start using it, and so on.  This is the first area, and of course it was accompanied by a large number of intensive training programs as well.  The institutional part, again, is that if you don't have an existing digital base in your government, for example if most of the government operations still work with pen and paper, then you have a very hard time thinking about how to introduce AI, whether from a practical perspective or even from people's mindset, because you also need to reengineer those processes behind that transformation.  In Egypt, we were lucky that we had already started a digital transformation of the government, so adding that layer in some sectors wasn't that big a jump.  In some sectors that are still relatively traditional, it is and will be for some time a huge challenge.  So this is the second thing that you need to take care of.

The third thing has to do with the cultural readiness to embrace AI.  When we talk about AI, we talk about equality; we talk about leaving no one behind.  But unfortunately, AI does exactly the opposite: it amplifies the differences and disadvantages that already exist in society.  Egypt and all developing countries are no different in that.  So if we're talking about people who cannot read and write, or people who are already disadvantaged because of their economic status or because they live in remote areas or things like that, then they would automatically have less access to AI.  They would have less access to opportunities.  You have to flip the argument and say, okay, so how can I use AI to reach those people in those remote areas?  We implemented this through a number of programs, including things like remote healthcare diagnosis or online learning for different groups and so on and so forth.  But the cultural issue has to do with more than that.  It has to do with society's perception of things like ethics, and with the legal and regulatory framework that already exists in the country, which again you have to fix, or you have to build, before you can adapt it to AI.  If you don't have the legislation for cybersecurity, for intellectual property, for e-Commerce, whatever the case, then it becomes a big jump to suddenly start legislating for AI.  And that kind of takes me to the second part of Thomas's question, which is international organizations and the role that developing countries can play in that global context of AI regulation and ethics and related discussions.  And I have to say that I'm a little bit skeptical about the prospect of having a global treaty for AI, just having seen, as Jan mentioned, that negotiations are always extremely hard.  The problem is that they end up giving us the least common denominator of everything, something that no one necessarily agrees with, which is very, very (?).  
There is an additional challenge: there's this perception of these negotiations and these treaties always being driven by the West.  So we're talking about the EU, and yes, I know the Council of Europe is different from the EU, but there's this perception that it still represents European countries, or about the U.S., trying to impose sets of values and priorities that don't necessarily correspond with those in developing countries.  I realize I'm running out of time, so I'll wrap up and we can talk later in the Q&A.  But just to say that my suggestion, or my preference, would be, first, to empower regional coalitions and regional clusters of like-minded countries to come together and have their own regional treaties on AI and its governance and legislation and so on, and then to build bridges between those.  The second suggestion is for international organizations to focus more on weaving the story of AI ethics and legislation into the story of development, because that is really the main priority for developing countries: how can this technology help us achieve our goals, rather than how can we mitigate the risks and so on.  Risk mitigation is important, but it's not the priority, because if they don't realize the value behind the technology, they're not going to think about adopting it anyway.  So I'll stop here and we'll hopefully continue in the Q&A.  Thank you very much.

>> THOMAS SCHNEIDER:  Thank you very much, Sally.  A slightly long presentation, but I think it was important to get that view from one of the developing countries.  So I will move as quickly as I can to the next speaker, who is from Japan.  I'm asking you to stay within the time that we agreed.  Professor Susumu is a professor at the Faculty of Global Informatics in Tokyo.  Among many other things, he has been a member of the (?) and of expert bodies of the Cabinet Office of the Government of Japan.  So Susumu, over to you.  Thank you.

>> Professor Susumu:  Thank you very much, Mr. Schneider for kindly introducing me to the audience.  It is my honor to be given this opportunity.

Can you hear my voice?

>> THOMAS SCHNEIDER:  Yes.  We can hear you.

>> Professor Susumu:  It might unfortunately be difficult for me to show you my video, my face.  So I will keep on speaking, because time is limited.  And just a moment.  I will share ‑‑ yes.  Just a moment.  Can I share my slide using my device?  That should work.  Maybe the remote moderator can do that.

>> Professor Susumu:  Can you see the slide now?  Yes.  In Japan, AI is governed under the so-called Social Principles of Human-Centric AI, published by the Council for Social Principles of Human-Centric AI in the Cabinet Office.  And there are two more specific, concrete sets of principles and guidelines in Japan: the AI R&D Principles and Guidelines as well as the AI Utilization Principles and Guidelines.  Those principles and guidelines are published by the Conference toward AI Network Society of MIC, the Ministry of Internal Affairs and Communications.  Corporations and entities should build and comply with their own self-regulations regarding the development or usage of AI, and they are encouraged to follow suit of the governmental principles; however, the Social Principles of the Cabinet Office are a little bit abstract, so MIC's R&D and Utilization Guidelines will be more helpful for corporations and entities to build their own internal rules.  This slide shows some portions from the principles documents.  Actually, the documents are very voluminous, but they're specific and concrete; therefore, they help corporations to build their own internal rules.  The first feature of Japan's principles and guidelines is (?).  Second, they have been prepared through multi-stakeholder participation.  And thirdly, from the beginning, we aimed at building a global standard, because we, the Conference, expected that AI would cross borders easily.

This slide depicts the historical development towards the OECD principles and beyond.  It begins in 2016, at the bottom of this slide.  I was the vice chairman of this Conference, which prepared a tentative draft.

Immediately after the draft was prepared, the G7 ICT Ministers' meeting was held in Japan.  At that time, the Madame Minister showed the tentative draft to the other fellow Ministers.  She proposed that G7 members should create common norms like these principles, and the other Ministers agreed.

Japan's position is an intermediate one between the precautionary approach and the innovation-first approach.  On the one hand, we do not think that no norms at all are required.  On the other hand, we think that binding regulations can have chilling effects on the development of AI.  Historically, Japan prefers harmony, and Japanese people have tended to comply with non-binding norms.  As an example of why Japanese people comply with soft law: in Japan, there is no legal requirement for people to wear masks under the recent COVID-19 environment, yet even today almost all Japanese people wear masks in public areas such as public transportation or offices.  As this and the next slide show, many leading corporations have voluntarily made their own internal rules given the governmental principles and guidelines and the OECD principles.  However, I do not know whether SMEs follow suit.  In my personal view, the government should make efforts to make them aware of the principles and guidelines published by the government and the OECD.  In addition, of course, corporations and entities should comply with the current, enforceable law.  In this context, I think the American EEOC's efforts are helpful.  It is my understanding that the EEOC is a federal commission which has jurisdiction over employment discrimination.  The EEOC announced that it launched an initiative to ensure that the usage of AI and other innovative technologies in hiring and other employment decisions complies with the current anti-discrimination laws, and it said, for example, that the EEOC would issue guidance on the use of AI in employment decisions and that it would identify promising practices and so on.  These efforts to make sure corporations' practices comply with the current, enforceable law are very important, I think.  Thank you for giving me this opportunity to make my presentation on AI governance in Japan.  Thank you very much.

[APPLAUSE]

>> THOMAS SCHNEIDER:  Thank you.  It was interesting to see which cultural differences come into play with the use of different instruments.

Next I will move to Gregor Strojin from Slovenia.  He has large experience, in particular when it comes to justice and court issues.  He was the Chair of the previously mentioned committee, the committee that spent two and a half years laying the ground for the work we're undertaking.  Gregor, over to you.

>> GREGOR STROJIN:  Thank you for unmuting me.  I will try to be brief, and I'm happy that both you and also the (?) have provided context on what the goal was: basically, to connect the issues and understanding of AI between the technical community, legal community, political community, industry and Civil Society, and to provide a feasibility study for a potential instrument that would govern the design, development and application of AI in line with the standards on human rights, rule of law and democracy.  I will not go too much into the detail of the work, but briefly I would like to point out a couple of findings.  I would like to refer to what Sally rightly said: there is a challenge in creating an international or even global treaty on AI; it can go toward the minimum common denominator.  At the same time, AI is perceived as a strategic and competitive tool, and precisely because of this, we need a common understanding and also guidance on what kind of a society we want, because it is a technology that will shape not only our present, but also our future, and it is the responsibility of the international community, of all the countries, to find a solution that goes well beyond the common denominator.  Let me give you an example of why we even need to go beyond ethics and recommendations, and what some of the priorities are.  And let me address one persistent myth: that regulation inhibits innovation.  As it happens, it might be easier to prove that it is the lack of regulation that is currently inhibiting innovation, by entrenching the existing monopolies while stimulating optimization of (?).  Take the automobile: in the beginning, there were no specific regulations.  They developed over time due to real needs which emerged in society.  We need safety.  We need seatbelts.  We need traffic rules.  We need clear rules on liability.  We need lower emissions.  
Individual ideas for optimization were slowly shaped into recommendations, and these were adopted as binding rules to increase their actual effectiveness.  Over a century ago, there was competition in the development of both electric and gas-powered vehicles, but it was not regulation that stifled that innovation.  The market chose gas for business reasons, partially linked to the existing monopolies of the time, the oil industry.  This brought consequences, and it is now, of course, coming back full circle.  Climate change and Civil Society have influenced governments to impose rules, and the market is now choosing to readapt by moving towards electric vehicles.  We cannot afford another century, and probably not even a decade, to arrive at similar conclusions with AI.  We need smart regulation with coherent, comprehensive and systemic solutions to avoid unintended, and sometimes intended, consequences.  This is why we need binding instruments, and we need them fast.  We should be prudent about what we bind ourselves to, and not act in haste so as to make waste.  We need a legal framework that provides certainty; the building blocks of various initiatives should be at least compatible, if not complementary.  If not, we might well be counterproductive in our efforts, with fragmented development.  We must focus on realistic problems and not on fiction.  Most challenges currently remain on a very human level and are still not adequately solved.

We should not be techno-solutionists and allow additional pseudoscience to creep into governance.  We should not expect too much from AI's capabilities, or even create new, scaled-up inequalities by failing to ask the right questions when we use AI in decision making, or by accepting the results as fate, something that cannot be avoided or changed.  We need transparency, especially on what is used, how, and for what purpose.  Some applications may prove to present unacceptable risk and should be considered for moratoria, such as AI systems using biometrics to identify, categorize or infer characteristics or emotions of individuals, in particular if they lead to mass surveillance, or AI systems used for social scoring to determine access to essential services.  We do not need explainability for all types of applications, but we do need better disclosure and understanding of the capabilities and limits of uses.  We do need auditability and accountability, especially if we are sincere about our desire to increase the implementation of solutions and their quality.  We need to prevent and mitigate risks and avoid strengthening some of the existing trends.  Effective compliance mechanisms and standards must be ensured through independent and impartial supervisory authorities.  Risk classification and impact assessment mechanisms are necessary and must be consistently, systematically provided throughout the life cycle of applications.  They need to be proportionate to the nature of the risk they pose and carefully balanced with the abilities of the developers and the expectations of society; too high compliance burdens could provide an advantage to larger, established actors, or stimulate avoidance and further concentration.

Finally, we should avoid, or stop emphasizing, the desire to be first in regulation.  We should strive to create rules that are clear, effective, robust and enabling, both for designers and developers and for our fundamental rights and values.  So this is the direction also in which we're going with different initiatives in Europe.  2020 was the year when the need for regulation was clearly established.  Last year, in 2021, the key elements of this regulation were elaborated and defined.  And this year is the year when verbal commitments are being put to the test.  Hopefully next year we will see effective instruments come into place.  So back to you, Thomas.

>> THOMAS SCHNEIDER:  Thank you very much, Gregor.  So I will move to the last speaker of the panel, and we hope we have a few minutes left for interaction.  It is Marisa Jimenez Martin.  She works for Meta, the company that used to be called Facebook some time ago, in case you don't know what Meta is.  It is participating in the work of our committee as an observer, like others.  Her presentation will go a little beyond AI and also show us a little bit about other emerging technologies.  So over to you, and please don't go over time.  We would like to have a few minutes of interaction at the end.  Thank you very much.

>> Marisa Jimenez Martin:  Thank you.  It is a pleasure for me to be here at the IGF, and in particular at an event that is co-organized with the Council of Europe.  Meta is in particular a partner in its work, first in the CAHAI and now in the CAI.

Perhaps we can talk more about where Meta stands on AI later.  You asked me to do a little presentation on the metaverse, and it's important because it will open your minds and ears to the next generation of technology, where AI will also have an important role.  But to echo what the others have said before, it is so important to get the regulation of AI right, because on top of that we will have new experiences and new digital spaces such as the metaverse.  So I will try to do it very, very quickly.  Bear with me.  I will share my screen now.  I hope you can see it.  Do you see it?

>> THOMAS SCHNEIDER:  Yes, we can.  Thank you.

>> Marisa Jimenez Martin:  I will speak to you about what the metaverse is, what the technologies are that power the metaverse, a couple of use cases, and what it takes to do responsible innovation.  That will probably fit in really well with the discussion that we will have afterwards.  So the metaverse is really a set of digital spaces where an individual will be able to connect with others in an immersive fashion.  In reality, the metaverse is the next evolution of the Internet, and it is that sense of presence in that world that is so different from what we are used to today.  That will mean that you can travel to the past, to the future, to other places.  For example, here you can be with your friends and go to the top of a hill, or learn about history, or even play chess with someone who is absent but feels very close to you.  That is really the essence of what the metaverse will be in the next 5 to 10 years.  What is important for the discussion we're having today is that it is not really a revolution; it is an evolution.  So we go from computing with mainframes in the '50s, to PCs and computers in the '80s, to the 2000s where we have the mobile Internet, which was really a breakthrough.  We have gone from text to images to videos.  So obviously the logical evolution is that we will then find the metaverse.  What's also different in the metaverse from the Internet today is that you can access it from a variety of devices that are very different from each other, like a smartphone or a VR headset.

Now, what are the technologies that power the metaverse?  AI is part of that.  We call them XR, or extended reality.  Extended reality is what covers all these technologies, from wearables and computing technology to AR and VR.  Augmented reality is a computing technology that overlays digital images and animations onto somebody's view of the real world, so you are extending the reality.  Reality is the main element in augmented reality.  Then we have virtual reality, which is usually talked about together with AR but is very different.  Here what you create is a simulation of a virtual world that you can explore in 360 degrees.  Let me give you two examples, one with AR and one with VR.  These are really the beginnings of the metaverse, and they are very similar use cases, but one uses AR and the other uses VR.  The first is an example with medical trainings, immersive training that has enormous potential.  We have worked with the WHO Academy to bring a 20-minute training for professionals on putting on and taking off their protective wear, which is so important for the safety of our healthcare professionals.  We have seen a 23 -- not 23%, but 23 times lower cost in the delivery of the trainings for the WHO.  You can imagine what that means.  The same thing with emergency preparedness.  Now, this is VR.  Here you're creating a simulation of a real-life situation of distress, where you would have paramedics and serious burns, training medical staff and paramedics to be as ready as possible in a situation of distress.  This is really the potential of VR.  Now, here's my boss.  You all recognize him.  The metaverse will not be built by one company.  In fact, we believe that the new version of the Internet is going to have creators and developers at the front and center, and not so much platforms.  
Platforms as well, but it is going to be driven by the creative community, and that has an enormous impact when we talk about policies and standards and regulation of the metaverse.  How do we build this with responsible innovation?  For us, these are the principles on which we believe the metaverse should be built.  Frankly, I could put these principles against AI as well, because there is an enormous digital opportunity, but it has to be an opportunity for all, and therefore equity and inclusion play a fundamental role.  Privacy matters; it has to be a place that is safe, where integrity is safeguarded, along with voice and inclusion.  We are at the IGF, so I cannot stop without talking about two things, which are governance and technical standards.  Before we talk about who controls the metaverse -- or rather the metaverses, although by the way we think it is the metaverse, one metaverse of digital spaces -- we need to really work to make it interoperable.  It will take some time, from 5 to 10 years, but first and foremost we need to focus on the development of standards, a collaboration between industry, creators and also policymakers.  We need to have many conversations and get the AI work right, so back to the subject that we have today.  And we need investment in skills and investment in support of those who will make it possible.  And with that, I'm finished.

>> THOMAS SCHNEIDER:  Very interesting presentation, and thank you also for being so short.  So we have a few minutes left, and given that the speakers have all been participating remotely, I would like to give the opportunity to the people in the room here to make some comments or ask questions.  Who wants to start?  Just hold your hand up.  Yes, please.  Go ahead.  And introduce yourself so we know who you are.

>> Peggy Hicks.  I'm with the High Commissioner for Human Rights in Geneva.  I want to go back to the colleague's point about the Council of Europe approach versus the EU AI Act approach, one focusing on products versus processes or use cases.  I wanted to ask him and others to pick up on how that plays out with particular sectoral uses of AI, and, given that there was such a stress on needing to move quickly on some of this, whether, while we believe in the global approach, there are places where it might make sense to develop some solutions that might be easier to get consensus on and move forward on more quickly.

>> THOMAS SCHNEIDER:  I can answer this, as I'm sitting here in this room, and quickly explain the difference.  The EU AI Act is about regulating the digital market in the EU and whoever is acting in that market.  The Council of Europe's convention is a legal instrument open also to non-European countries to adhere to.  It tries to establish a few principles at a principled level, based on human rights.  But it is not a standalone instrument.  It will have to be complemented by a number of sectoral instruments that can be binding or non-binding.  The Council of Europe and others have developed sectoral instruments on data protection, on health, on the judicial system.  So it needs to be several instruments playing together.  It also needs self-regulatory elements.  All of this needs to play together, and the Council of Europe convention is one important element.  I hope that answers your question.

>> Good afternoon.  I have a question for Meta.

There was a stress on interoperability when building the metaverse, and in the case of the European framework, the framework stresses the use of open standards as a condition for interoperability.  So I would like to ask a question:  what is Meta's approach to developing or using open standards for the metaverse?

>> THOMAS SCHNEIDER:  Thank you.  Marisa, if you would like to answer this.  I don't know if somebody needs to unmute her.  Yes, now we see you.

>> Marisa Jimenez Martin:  Absolutely.  Open standards and interoperability.  Now, how to do it?  I will not be able to tell you here.  But indeed, the philosophy around it is that they need to be open standards.

>> THOMAS SCHNEIDER:  Thank you very much.

>> Hello.  Thank you for the interesting session.  My name is Jody Kai.  I'm a student, and I have a question for someone who works in a company.  I would like to know if you can share some good and bad practices for organizing Artificial Intelligence governance in the private sector, like how you manage the division of work between a compliance team and a technical team.  What do you think is the best way to organize these burdens in the private sector?  That's my question.  Thank you.

>> THOMAS SCHNEIDER:  Thank you.  If we have somebody in the room working for a company that is developing or using AI, of course, that would be nice.  Otherwise I'll turn to Marisa again, I guess.  We seem to have no industry representatives in the room.  If you can answer the question, thank you.

>> Marisa Jimenez Martin:  That's something that will be a takeaway from this workshop.  I think it's an extremely interesting question, and there's no easy answer.  I can tell you how we do it at Meta.  AI is so fundamental to what we do that we have to have a governance model internally.  Because we believe so much in AI, we know the opportunities it brings for products and services, but we also think it has challenges as well.  That's the first thing to understand:  it is a great technology, but it has challenges.  The way we have done it is with a framework divided into five pillars.  One deals with fairness and inclusion, to make sure that our ML engineers can detect when there is a bias in the systems.  Sometimes these things happen in AI models, but once they are detected, they can be addressed.  We have teams that deal with transparency and control; robustness and safety; accountability and governance; as well as privacy and security.  They all come together.  So it is important in a company that you get governance models that have people from different parts of the company.  Another thing that we have developed is something called Open Loop.  Open Loop is a prototyping program where we look at different requirements and try to test them internally to see the results they would render.  We did it with the Council of Europe requirements too, to see what effects they would have on our work.  So it is a wonderful question.  I would imagine that each company will define it in a different way.  But it doesn't sit in one part of the company only, like privacy or safety.  It is multidisciplinary.  Back to you.

>> THOMAS SCHNEIDER:  Thank you very much.  There's another person.  I'm going to take two more questions.  I don't -- my IGF account is a catastrophe, so I can't log in on the Zoom link.  Please go ahead.

>> From what we have been hearing, it sounds like Meta's practices have certainly improved, and I know there are people inside who are doing much better in many different dimensions.  But otherwise we don't really know; it is hard for those outside of the company to have confidence in the actual output of any of these ML models.  Something I have been advocating for years might help:  ethically certified AI developers.  My question is whether that could be added to any of these instruments that have been discussed today, perhaps as an addendum, or added on a national level or international level.

>> THOMAS SCHNEIDER:  Who wants to take the answer to this from the panelists?  If not, I'll quickly try to say something.  At the Council of Europe, we are still discussing what we call a framework convention, which is meant to be an umbrella, leaving room to implement things on the national level in a given cultural context, in a given institutional context.  Of course, there is a challenge in basically creating a shared umbrella while remaining flexible.  So we'll see where we end up.

>> Just to add, then:  I would suggest that every nation should have ethically certified AI developers, or I would worry for the nation that doesn't.

>> THOMAS SCHNEIDER:  Thank you.  Let's take one final person to take the floor.

>> It's a pleasure to have the last question.  I'm Catherine from the German Youth IGF.  I have a question for Marisa.  In Germany, we see a lot of hate crime on different Meta platforms, and we fear that it would be even stronger and continue in the metaverse.  The question would be how you want to address those hate crimes via AI specifically.  Yeah, that would be interesting to hear about.

>> THOMAS SCHNEIDER:  Over to you, Marisa.  How do you deal with (inaudible) and violence in the metaverse?  You are still muted.  Can somebody unmute her?

>> Marisa Jimenez Martin:  Yeah, you control it from the room.  So there are two questions here, I think, and I would like to take them in two parts.  One is how AI works to detect and address hate crimes, or hate speech, or harmful content in general.  The other question is how we deal with that in the metaverse.  On the first question, I would say it has evolved over the years.  To give you an example, today we detect hate speech -- so let's stick with hate speech -- more than 90% of the time before it is actually reported to us.  In 2017, only 24% of hate speech was detected before it was reported to us.  And that is, in the immense majority of cases, thanks to AI.  So it is the evolution of technology and classifiers that has allowed us to detect and remove harmful content in a much more significant and efficient way.  That's the power of the technology.  The technology alone cannot solve the problem, but it just shows that when these classifiers work, we have better results.  So that's one part, just to answer a little bit on the role of AI.

When it comes to the metaverse, I think the way we regulate the Internet of today will have to continue for the platforms and services of today.  As we move to the metaverse, we will have to see what rules actually make sense for that metaverse, because it will be different digital services, and it will not work in the same way.  This is not one platform's vision and realization; it is something for many companies and society in general, and we will have to see what additional rules we have to bring there.  It is a wonderful question, and we will have to agree that the answer is not a given.  One thing I would like to leave you with is that we think in many instances the metaverse will look less like the (inaudible) Internet and will be closer to the real world, and that means that many of the rules and governance on addressing hate speech and harmful content will have to be different, or will have to have additional ways of being addressed.  We will have to give more responsibility as well to developers and creators, and provide better tools for everyone so that we can deal with the issues.  But it is true that it is difficult and definitely going to be a challenge because of the nature of what we are all building.  So I don't know if that answers your question, but it's a very interesting one that we will have to continue discussing.

>> THOMAS SCHNEIDER:  Thank you very much for the last statement.  So I'll quickly wrap up.  It will not take more than 55 minutes, of course.  No.  Basically, as a historian, I know that some people used to say the technology of the future will help us to solve the problems that the technology of the past created, which is not completely wrong, but not the whole of the story.  Technology can be useful to make our lives better, but in the end, we need people to talk to each other and agree on how they want our societies and economies to progress.  I hope we contributed a little bit to this dialogue and discussion here with this session.  Thanks all very much for being here online and here physically, and looking forward to our next location.  Thanks very much.