IGF 2019 – Day 1 – Estrel Saal C – OF #39 Artificial Intelligence – From Principles To Practice

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> AUDREY PLONK: Good afternoon, ladies and gentlemen. Welcome to the OECD's open forum on Artificial Intelligence: From Principles to Practice. My name is Audrey Plonk, and I head the OECD's digital economy policy division. Our division developed the AI principles that we'll be talking about today, and we're working on developing the OECD AI observatory.

Just a couple of words about the OECD, for those of you not familiar with our work. At its core, the OECD is an intergovernmental organization with 36 member countries that are all market economies and democracies, in Europe, North America, and some in South America and Asia, but our reach is not limited to developed countries. We're increasingly engaged in work in developing countries through partnerships.

We focus on economic and social policy, analysis, and statistics, so our work is often quite upstream in the policy process. We also develop international policy standards, such as the OECD's privacy guidelines, originally adopted in 1980 and revised numerous times since then, and the AI principles that were just adopted this last May.

I would like to say that stakeholders are closely involved in our work, particularly the division I head, and we have formal advisory committees in place representing business, the technical community, civil society, and labor unions.

I'm delighted that our incoming chair of the committee on digital economy policy, Mr. Yoichi Iida, from Japan will moderate today's session.

Among many of his other achievements, Yoichi led the G20 discussions on AI that resulted in the G20 nations agreeing, in June of this year, to a set of ethical guidelines for AI drawn directly from the OECD AI principles.

With that, I will turn it over to Yoichi.

>> YOICHI IIDA: Thank you very much for your very kind introduction.

Good afternoon, distinguished speakers and ladies and gentlemen.

So we would like to start today's discussion, which I hope will be very fruitful. Let me briefly explain the outline of the session. First, we will focus on policy frameworks for AI at the international level. We will highlight the OECD's work on AI and also some discussions at the G7 and G20. Then, we will listen to the stakeholders' views and priorities.

After that, there will be short contributions from Singapore and China, and, if time allows, I will open the panel to the floor.

So, without further ado, the first presentation is from the OECD, by Ms. Karine Perset, the administrator for AI policy in the digital economy policy division. She will present the process used to develop the AI principles and also the foreseen next steps for the OECD, especially to implement the principles.

So, Karine, please take the floor.

>> KARINE PERSET: Thank you very much. Good morning. Well, good afternoon, in fact, as it's noon. I will give a brief overview of the OECD AI principles and next steps.

First, a little bit of background. Our work on AI began after a G7 meeting in 2016 that Ms. Yamada will say more about, when Japan encouraged the OECD to conduct analysis on AI and to organize events. These events led to a pretty broad consensus that high-level principles were needed for AI to provide direction and agreement on top policy priorities for international cooperation.

So in 2018, we created an expert group called AIGO, with members from about 20 governments, business, civil society, trade unions, the technical community like the IEEE that's also represented today, and other IGOs, like the European Commission and UNESCO, who are also here with us. The principles were adopted in May of this year by 36 OECD countries as well as six partner countries. And then in June of this year, G20 leaders committed to similar principles in Osaka.

So while the process to develop the principles was multistakeholder, the principles are, in fact, an intergovernmental agreement. It's not binding but represents a strong political commitment to use these principles as a common framework for national policies.

Ms. Yamada will present the principles themselves, but I just want to show you a map to visualize the reach of these principles and the importance of the G20 commitment in giving a broad common framework worldwide. It also shows the need to reach out to regions such as Africa, which have not been closely involved in the process yet.

So, back to the OECD principles. They include five principles for the responsible stewardship of trustworthy AI, i.e., systems that protect and benefit people and individuals, as well as five priority recommendations for national policies and international cooperation, so, basically, to help economies and societies benefit from AI.

The principles were the beginning. Now we're focusing on implementation, and the OECD AI policy observatory is one of our major endeavors to move from principles to action and implementation and help policy makers in this journey.

So what is the observatory? We envision it as a collaborative platform that facilitates knowledge sharing, measurement, and analysis for trustworthy AI. We're calling it OECD.AI and aim to launch it in February 2020. The core characteristics are: one, that it is multidisciplinary and interdisciplinary, because applications in one area of AI are increasingly transferring lessons into other areas; two, that it is evidence based; and, three, that it is multistakeholder and cooperative. And that's really key, because we're bringing together resources from partners from all stakeholder groups in a complementary manner.

The observatory will provide resources structured around four main pillars. One, the AI principles and their implementation guidance, which provides a rationale for each principle, explains what it means, and provides examples of actions and initiatives that can help implement these principles.

Two, analysis of public policy areas impacted by AI, from science to health, transportation, employment, or education. This brings together analytical work at the OECD but also by partner institutions such as the European Commission, ScienceDirect, or Microsoft Research.

The third pillar is trends and data. This is really a key area for us. I will just show you a preview of some of the elements we're going to include there.

Four, a live repository or database of country and stakeholder initiatives.

So moving into the preview, I mentioned we'll have OECD metrics and measurements but also live data from partner institutions with the goal of showing from as many vantage points as possible where AI is being developed, where it's being used, funded, by who, how fast, and in which sectors. This is a sneak preview of trends, of live news taking place around the world.

This shows you work we've done on private equity investments in start-ups across the world. This is another sneak preview of trends in AI research powered by Microsoft Academic Graph that really shows the trends over time. Then we dig into that data in much more detail in the observatory.

We are also focusing on AI jobs and skills, which is a major priority for policy makers everywhere, trying to show where the demand for AI skills is, what types of AI skills are in demand, et cetera.

Last but not least, we have developed an interactive database of AI policies and initiatives that countries share and update through a survey. To date, we have over 47 countries contributing and over 270 individual AI policy initiatives and instruments.

So this allows countries to compare their national AI policies interactively, at both the aggregate level and at granular levels, navigating through dashboards by country, territory, type of policy instrument, et cetera. This was a short preview of the AI Policy Observatory.

Thank you. Back to you.

>> YOICHI IIDA: Thank you, Karine. You said the G20 principles are similar to the OECD principles, but, actually, they are identical. So the G20 fully owes this achievement to the OECD.

Let me introduce Ms. Makiko Yamada, Vice Minister of the Ministry of Internal Affairs and Communications of the Japanese government. She will tell us more about the G7 discussions on AI and the significance of the AI principles, including at the G20.

Please take the floor.

>> MAKIKO YAMADA: Thank you very much. It's my pleasure to share with you Japan's contribution to the international discussion on AI since 2016. I have some slides here, so please have a look. This first slide shows the recent sequence of G7 discussions. The starting point was Japan, and I, myself, was in charge of this discussion. We hosted the G7 ministerial meeting in 2016.

The G7 ministers discussed a wide set of issues in Japan. With respect to AI, Japan proposed an international discussion at the G7, with the possibility of formulating a set of guidelines for research and development, covering the development of technologies connected over networks along with the opportunities and concerns. The discussion was succeeded and deepened by the following presidencies, including Italy and France.

This is the material distributed for reference at the meeting. It was the early draft of the list of items that the group focused on having the guidelines cover. We believed there should be an internationally shared instrument on AI that many countries would refer to in their policy making, because the impact of AI would be multiplied when systems are interconnected with each other over networks.

In Japan, this draft was the starting point of the discussion. It was composed of eight principles: transparency, user assistance, controllability, security, safety, privacy, ethics, and accountability.

As shown on the screen, if you compare this to the OECD's final accomplishment, many factors overlap, and we believe this was not bad for a first trial.

This slide shows our national policy on AI. The Japanese government published its AI strategy in June this year. It has five priority areas on which the government focuses its effort to prepare for AI adoption in the economy.

As part of this strategy, an expert group developed the social principles of AI this year. These principles are intended to be input into the international discussion as a contribution to the global community. So, please, have a look at the slides.

Number two.

Japan took the G20 presidency this year. It was a pleasure to expand the discussion to that level. We proposed the discussion on AI principles at the G20 digital economy task force.

The G20 AI principles were then elevated to the leaders' level and welcomed by the G20 leaders in the leaders' declaration.

So we are missing the slide. Okay. Slide five.

Japan, particularly the Ministry of Internal Affairs and Communications, has supported the OECD's work on AI since 2016. After the G7 meeting, support to the OECD included the OECD's 2017 conference, "AI: Intelligent Machines, Smart Policies," held in Paris, in which more than 300 experts and policy makers from all over the world participated.

It also included the development of the AI principles, with active participation in AIGO by Japanese experts, great friends of mine, who took part in the OECD's expert group discussion to come up with the principles on AI.

And it included publications: "Artificial Intelligence in Society," a book that puts together the OECD's analytical work on AI, is one of the ways we have been supporting the work on AI.

Japan also supports the observatory presented by Karine a few moments ago. I would like to congratulate the OECD on that and expect it to continue to lead the international discussion on AI.

That's the conclusion of my presentation on AI. Thank you.

>> YOICHI IIDA: Thank you for explaining the developments all the way up until now.

I would like to invite Mr. Robert Strayer, Deputy Assistant Secretary from the U.S. Government, in charge of cyber and international communications and information policy. Mr. Strayer will provide the U.S. Government's perspective on the principles and also their implementation.

So, Mr. Strayer, please take the floor.

>> ROBERT STRAYER: Thank you. I don't need to tell you how important AI is to our future. I don't think you would be in this room if you didn't care about how it's going to transform our lives in so many ways. It will also have dramatic impacts on our future economic growth and on our national security. It's for those reasons that AI really will be a key foreign policy initiative for all governments, and probably why I'm here today to talk about AI. We believe that AI will have dramatically positive impacts if we channel it in the right way. That is why we in the United States released a national AI initiative in February that sets up a whole-of-nation approach, including academia, civil society, government, and the private sector, as well as a strategy to work with like-minded partners. There are two important things: what we do domestically and what we do with foreign partners around the world.

I will start with the engagement abroad. We've just heard an outline of the OECD principles. That was an important step for more than two dozen democracies to come together around a set of key principles that embody our shared values.

Among those, the AI principles say we should have human-centric AI, AI that's based on our views of privacy and basic human rights. It's important to have those human rights included in AI because there are divergent paths that can be pursued relative to AI. We could see AI become a key enabler of future success and growth and let humans reach even greater potential, or we could see it used in authoritarian ways. In fact, we're already seeing authoritarian governments use it.

We've seen the use of AI technology, facial recognition, to identify the Uighur population and then have those people sent to camps. More than 1 million are in prison camps in China without any due process. There are two paths: one that follows our shared values and one that does not.

I want to compliment the Japanese government for the principles in the G20 this year, and the G7 under the leadership of France. We're very interested in seeing how these principles are adopted into practical guidelines for the future. The OECD policy observatory is going to be a key place where we can see different policies that advance AI in ways that are consistent with our values, our beliefs about having explainability and transparency as well as safety and security for AI.

Turning to the United States, our National Science Foundation recently announced an institutes program that sets up AI research institutes across the country. One of the initiatives is on trustworthy AI. Our administration is working on steps that agencies can take to advance AI, because AI relies on data and the need for large amounts of data. It's very important to have the right kind of policies in place so this data can help advance future AI initiatives.

In closing, I want to thank the OECD for the work on the AI principles and the way they convened so many of us together around those principles. It's going to be important that we work together, democracies and those that share similar values in this space. There needs to continue to be a regulatory foundation around the world regarding data and digital technologies that sets up companies in a way that they're going to be able to continue to innovate in this field.

Thank you.

>> YOICHI IIDA: Thank you, Mr. Strayer. We enjoyed very strong support from the U.S. Government when we worked together during the G20 process.

So let me introduce Ms. Carolyn N'Guyen, Director of Technology Policy at Microsoft. Ms. N'Guyen was one of the core members of the OECD's expert group on AI, AIGO, and is also closely involved in the development of the OECD AI Policy Observatory. I expect she'll provide a business perspective on priorities to implement the AI principles.

Please take the floor.

>> CAROLYN N'GUYEN: Thank you very much.

Good afternoon, everyone. What I will share is really the business perspective on this process. At Microsoft, we recognized early on the potential of AI to transform the world and improve our lives collectively. This potential will not be realized if AI is not trustworthy. Early on, in the middle of 2016, our CEO published on the importance of AI, followed a year later by a set of our own principles.

The intention there, really, is to promote awareness of potential issues, and the fact that we all need to work together to foster trust if we're going to enable broader adoption of AI. We're very excited to be a part of the AIGO process at the OECD because the OECD relies on an evidence-based approach to policy making.

Furthermore, during that process, what was really unique was the recognition that all actors ‑‑ in other words, all of those who have an active role in an AI system's life cycle, not just the tech provider, but also those that deploy, operate, and maintain AI systems ‑‑ have roles in implementing responsible AI.

The multistakeholder process was a great experience and demonstrates the values that each stakeholder can bring to the table. So what is essential at this point is to convert these high‑level principles to practice.

As a technology provider, for us this has two dimensions. Firstly, developing AI solutions and technologies that are trustworthy and working to promote the importance of such solutions throughout the broader AI ecosystem; I will come back to that. Secondly, providing data and AI to explore new models for public-private partnership that could enable more effective policy making.

Internally, we're developing technology from Microsoft Research where AI can be part of the solution to implementing some of the principles: for example, promoting things like data sheets for datasets to make sure everyone understands what's in a dataset, developing technology around word embeddings where bias in words can be detected, and promoting risk management to identify risks, mitigate them, and find solutions.

We're also working with an external organization, the Partnership on AI, to implement and share best practices through an initiative called ABOUT ML, where the objective is to capture best practices and document the development of machine learning models and datasets. So this is a best practice in terms of accountability.

Internally, we have established an office of responsible AI to provide guidelines for our engineering and services groups. And we formed an AI and ethics in engineering and research committee, a senior-level team that is responsible for recommendations on the implementation of AI in sensitive uses.

In terms of work with others in our ecosystem and other stakeholders, in addition to the Partnership on AI, Microsoft Asia has also launched a new project ‑‑ actually in Singapore ‑‑ working on the implementation of responsible AI in the financial services industry.

Just last week, we released a white paper sharing learnings from the implementation of responsible AI principles in the financial sector. Last week, we also launched the women's forum on how women can empower AI and how AI can empower women inclusively.

Secondly, with respect to providing data to enable evidence-based and agile policy making, as mentioned, we're working with the OECD to provide the Microsoft Academic Graph, which contains scientific publication records, including relationships between researchers across countries, institutions, journals, and fields of study.

We're also contributing LinkedIn graph data, which is a digital representation of the global economy regarding skills: both supply and demand, trends in needed skills, talent migration, et cetera. The notion is that both of these will enable better policy making.

We look forward to continuing working with the OECD and you on how to build trustworthy AI and create evidence that can enable more informed policy making.

Thank you.

>> YOICHI IIDA: Thank you. Actually, to be honest, I regularly learn from your colleagues in Tokyo, and I believe it's strong evidence of the benefit of the public-private partnership, or multistakeholder, approach.

Let me introduce Mr. Mina Hanna. Mr. Hanna is the co-chair of the Policy Committee of the IEEE Standards Association's Global Initiative on Ethics of Autonomous and Intelligent Systems. He will provide the technical community's perspective.

Mr. Hanna, please take the floor.

>> MINA HANNA: Thank you very much.

It is a pleasure to be here on the panel among some very distinguished speakers who represent many of the organizations that have been involved in what I would only characterize as a monumental effort: developing the OECD principles, building a key consensus among many organizations that represent multistakeholder groups, and building key coalitions to agree on the principles that should underpin trustworthy AI ‑‑ an environment where innovation is key and where ethical principles really underpin development, deployment, and use. That environment is built on principles that the OECD, the IEEE global initiative, and many other organizations have worked on, on papers published by allies like Microsoft, IBM, and others, and by UNESCO, which we're going to hear about in a minute.

The global initiative started in 2015. The goal was to develop the technical standards that would underpin how the technology components ‑‑ how the manufacturers, the researchers, the academics ‑‑ are going to build all of these AI tools, built on three key pillars. To us, the pillars are: universal human values, stemming from an understanding of the advancement of the basic human rights, as we understand them, of all humans; political self-determination and data agency; and, third, technical dependability.

From those pillars come the principles; they vary in nomenclature, but the principles are defined. They include transparency, accountability, and technical dependability, among others. As for the tools of how we were going to accomplish this, the very first thing we worked on was the Ethically Aligned Design framework. Following the multiple versions accomplished in the years prior, it is a very exhaustive document, and I invite you all to read it. We are going through a process of revising it and adding more chapters and more content, so it doesn't hurt to have more partners that represent additional geographies we have not worked with before.

So I would tell you to look at the Ethically Aligned Design document. On top of that, there are the 14 standards of the P7000 series, certification processes, and other initiatives, including engaging governments in the process of creating standards and so on. And, of course, there are multiple programs: one part of IEEE is IEEE-USA, which is focused on working with the U.S. Government, the executive and legislative branches; among other committees, we work with the House of Representatives and the Senate on AI caucuses. We've had multiple engagements, including some of the initiatives Secretary Strayer mentioned, with the White House and the State Department, which focuses on diplomacy, and a lot of things have come from these conversations. There's an amazing document, by the way, on the use of AI in defense and what the ethical principles underpinning that use should be.

So, to conclude, I would be remiss if I did not mention that we're very, very thankful to be working with the OECD. It is such a great opportunity that we are really informing the creation of what might amount to a uniform regulatory basis for commerce, where innovation can flourish, and we have good partnerships that are built to increase transparency. What we're focused on now, in the scope of the projects and initiatives we're building, is moving from principles to practice. We're focusing on those implementations. A lot of that will focus on deploying those standards, working with policy makers, and engaging in more and more of these conversations with the U.N. groups and so on.

Pleasure to be on the panel. Thank you very much. I will leave it here. Thank you.

>> YOICHI IIDA: Thank you, Mr. Hanna. Actually, we also learn a lot from your work on AI.

So next let me introduce Ms. Valeria Milanes, the Executive Director of ADC, the Association for Civil Rights, and also a CSISAC Steering Committee member. She'll provide civil society's perspective. Please take the floor.

>> VALERIA MILANES: Well, thank you very much. I really appreciate the invitation, and I want to highlight that I represent an organization based in Argentina, in Latin America. I am really grateful that we can have a voice here, because all regions should be represented in this discussion.

Being Argentinian and being from a civil society organization in Argentina, I had the honor to be part of the group that delivered last April the Civil 20 policy pack, which was elaborated by more than 500 civil society organizations from all over the world and contained recommendations for the digital economy task group, including recommendations on AI. Related to the work that civil society has done on the OECD principles, I would like to highlight and share with you the input that civil society provided, which was a document that was delivered after a huge amount of work.

The Public Voice has been working on privacy and data protection issues since 2009. At that time, the Madrid declaration was released, which was the first of its kind, addressing privacy protection, identifying new challenges, and calling for concrete actions.

Almost 10 years later came the Universal Guidelines for Artificial Intelligence. This document was part of the input that civil society representatives working with the OECD provided on the matter. As stated in the introduction of that document, the rise of artificial intelligence decision making raises questions of accountability and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people, in areas such as commerce and criminal sentencing. The Universal Guidelines for Artificial Intelligence were proposed to inform and improve the design and use of artificial intelligence. The Public Voice stated that the responsibility for AI systems must reside with the institutions that fund, develop, and deploy those systems.

There are 12 guidelines. I will name them quickly: the right to transparency; the right to human determination; the identification obligation, which means that the institution responsible for an artificial intelligence system must be known to the public; the fairness obligation; the assessment and accountability obligation; the accuracy, reliability, and validity obligations; the data quality obligation; the public safety obligation; the cybersecurity obligation; the prohibition on secret profiling; the prohibition on unitary scoring; and, twelfth, the termination obligation, which means that the institution that has established an artificial intelligence system has the obligation to terminate it if human control of the system is no longer possible.

These guidelines can be found on The Public Voice website. I invite you to read them in full. Thank you so much.

>> YOICHI IIDA: Thank you, Ms. Milanes. Apart from your recommendations, we also had very good input from the Japanese chapter, and the voice of civil society is also very important for governments.

Let me introduce next Ms. Sasha Rubel from UNESCO. She's a program specialist in the Knowledge Societies Division of the Communication and Information Sector and in charge of organizing UNESCO's discussion on AI. She'll share UNESCO's perspective.

Please take the floor.

>> SASHA RUBEL: Thank you very much for that introduction. I would like to express thanks for the work done at the G20 level as well as for the AI report that underlines very clearly the way in which AI is transforming the way we live, the way we work, and the way we relate. I would like to congratulate you as well.

I would like, for the next couple of minutes, to introduce very briefly UNESCO's perspective and priorities as they concern AI policy, and to very clearly underline the link between UNESCO's work and AI. AI has a direct impact on our fields of competence. We work across five sectors with two global priorities. In the education sector, it's changing the way in which we learn.

In the natural and human sciences, it is changing the way we think about environmental management, scientific research, and philosophical reflections on AI and its impact on how we co‑design the future we want.

In the culture sector, it changes the way in which we can preserve and promote cultural heritage, but also creative industries and cultural diversity. In our work on communication and information, it changes how we think about access to information online in the age of AI, and it has a direct impact on human rights and freedom of expression. AI changes the way we think about disinformation. It changes the way in which we access information, and it changes the field of journalism more broadly.

AI also impacts our two global priorities. It impacts the global priority of gender equality: there are issues related to embedded bias, and we must ensure that women are also producers of AI solutions. It also changes the way in which we think about our priority Africa: the growing digital divide cannot be allowed to become larger by leaving the global south behind and positioning the global south as consumers, rather than producers, of local solutions using AI for the future they want.

Lastly, it changes the way in which we think about democracy. I will cite the managing director of the IEEE when he talks about AI; he constantly says, in his wonderful Greek accent, democracy, democracy, democracy. AI changes the way we think about democratic processes. That is an overview of the priorities we're looking at thematically. Now, concretely:

UNESCO is working with many partners on how to ensure that young people, marginalized groups, and women are equipped with the digital literacy skills to engage actively in the era of AI.

We are also looking at how to empower institutions in the governance of AI by ensuring policy support. In order to develop informed public policies, we need data. Data is also the lifeblood of democracy. We're working very closely with organizations like the OECD to inform policy development based on analysis and indicators.

We're addressing concretely questions of gender bias. I encourage you to read our report, "I'd Blush If I Could," which looks at the impact of voice assistants as it concerns gender bias and AI. I would also like to congratulate Microsoft: it's important to change the narrative in terms of women's presence in AI and also the presence of the global south. There are incredible innovations coming out of the global south as it concerns AI solutions ‑‑ the Google center dedicated to AI, for example.

Lastly, one of the things we're working on, building on our framework adopted in 2015, is looking at how we approach digital transformation. It's built on the idea that it should be rights-based, accessible, and multistakeholder. In the spirit of this multistakeholder approach, last week at the general conference, our member states decided to mandate UNESCO to (?) In this process, we will work with member states and also stakeholders from the public and private sectors, from the technical community, media, academia, civil society, and international organizations, who will come together to discuss these issues at the heart of UNESCO's work. We'll be undertaking a forum with universities to engage otherwise marginalized groups in a collective intelligence exercise to co-design this work.

In closing, I would just like to underline ‑‑ you can tell I get very passionate about this work on AI, and I think I may have gone over by a couple of seconds ‑‑ the linkages with the work of the OECD in this field.

Recently, the Secretary-General underlined that we're all learning from each other, and we'll be working very closely with the OECD on their public policy priorities. Their focus is on AI governance and good practice and on economic and technical aspects, very much upstream; we'll be translating this on the ground in our field offices around the world. Our approaches are very complementary. In this regard, we will work jointly on the observatory to ensure the translation from policy to practice. We want to build on the work you've already done.

Thank you.

>> YOICHI IIDA: Thank you. That was very impressive. It's important to see these two international organizations working closely.

So, finally, let me introduce Ms. Katarzyna Gorgol from the EU. She's Adviser for Digital Affairs and Telecommunications at the Delegation of the European Union to the United Nations. She will share the European Union's perspective.

Kasha, please?

>> KATARZYNA GORGOL: Good afternoon, ladies and gentlemen. Thank you for having me on the panel.

Let me quickly use the three or four minutes that I have to present the European Commission’s priorities on AI policy and also how we have been working with OECD.

So the first thing that I would like to say is that the Commission and the OECD have been working very closely in this area, and, therefore, the EU's priorities are well reflected in the OECD's recommendation on AI. Essentially, these priorities can be clustered in three areas.

First, investment; second, ethical and normative frameworks; and third, the transformation of society and the labor markets.

When it comes to investments, the member states are very well aware of the fact that for Europe to be competitive in this area, we have to invest more. One of the issues we currently have on the table is to boost the investments in AI under the research and financing programs. To give you two examples of what we are planning to put our resources into: one is upgrading the European research infrastructure by creating a European network of AI excellence centers. Basically, these centers will bring together the best European research teams in order to work on AI development and deployment. They will also create synergies between industry and research and boost our capacities in key sectors such as high-performance computing, robotics, and IoT infrastructure.

The second example that I would like to give is about data. Of course, we are all aware that data is a key asset for AI applications. In this respect, the European Commission proposed to create common European data spaces, to be financed under the next financial framework of the EU, which will help enhance access to data across all industry sectors based on agreed frameworks. We're currently having discussions with experts from various sectors, such as health and manufacturing, to understand what the needs of each sector are when it comes to data for AI applications.

Moving to the ethical and normative framework, one thing you might be aware of is that the European Commission created a high-level expert group on AI, which came up with guidelines on trustworthy AI that were published in April of 2019. These guidelines contain an assessment list which allows the private sector and all interested stakeholders to pilot them in real life, and this piloting is currently taking place until the first of December. I encourage those who have not yet participated to join.

The second thing I would like to say is that existing legislation already applies to AI, as it applies to other technologies. To give a few examples: data protection, consumer protection, cybersecurity ‑‑ it all applies to AI. As for issues that may need special attention and where legislative updates might be needed, you may have heard the announcement by the European Commission that the new Commission will come up with an initiative on AI in its first 100 days in office.

Moving to the third point, which is about transformation of labor markets, here, the European Commission is working together with the member states on issues such as improving skills through training, life‑long learning, et cetera.

Mindful of time, I will briefly explain how we work with the OECD. First of all, it has been a very practical and hands-on approach, meaning that our experts participated in the work on the guidelines, and the OECD was invited to participate in the work of the European Commission.

It is also a forward-looking partnership. We're hoping to work with the OECD in at least two areas, such as pillar four of the policy observatory, which is about monitoring national AI policies and strategies, and the measurement of AI, because, of course, statistics are key to being able to develop informed policies.

With this, I would like to thank you.

>> YOICHI IIDA: Thank you, Katarzyna, for sharing the information with us.

We're running out of time. Before we go to some questions, let me invite two commenters from the floor, from Singapore and China.

>> AUDIENCE MEMBER: Good morning. I've got two minutes, I know. Let me just make a few points. In Singapore, we believe in the power of AI. As Robert said, we wouldn't be here if we didn't believe in it. More importantly, we believe the power of AI needs to be complemented by trust and confidence in the system. What this means for us is that the development of AI models by data scientists and the like cannot be done in isolation from the consumers and the users who rely on AI solutions.

Second, high-level principles are very, very important. We congratulate the OECD as well as Japan for adopting these principles. However, this is an important, yet insufficient, step. High-level principles need to be converted into implementable practice ‑‑ implementable practice that AI companies can use. We understand the OECD is already doing work on this. Singapore is as well. We adopted a model AI governance framework in January of this year. No surprise, it relies on very broad principles: one, AI needs to be transparent and fair, and two, AI solutions must be human-centric. The most important part of the model framework is that it converts high-level ethical principles on AI into implementable practices. At the heart of the model, we're giving very detailed and readily implementable guidance to the private sector to address ethical and governance issues. We go down to quite a granular level. The model framework is a living document that we're reviewing, including cases from the industry that we feel are relevant. We'll find another opportunity to share this with the group.

>> Thank you. Let me invite the commenter from China's CIC. She'll share with us China's perspective.

>> Thank you, chair. China is open to international cooperation and committed to international mechanisms. This year, the government issued governance principles for new-generation AI, called responsible AI. We have some principles and concepts that are very similar, like reliability, responsibility, risk management, trustworthiness, and human-centricity. One thing: China also stresses the importance of the development of AI. I think the most important characteristic of AI is that it's empowering, because the digital economy is a very important part of the new economy, and AI is the core power of the digital economy. It's very important for development. So the government also puts emphasis on AI for people's benefit and to reduce poverty. That's a priority.

We also think it's time for us to think of people. We have a lot of discussion. In civil society, there's a call for (?) Each year, we have many guests from different backgrounds talk about AI and have some very good discussions, and we welcome you.

Thank you.

>> YOICHI IIDA: Thank you. Actually, it was very impressive for me, personally, to reach agreement on the G20 principles with the Chinese government too.

So we need to skip some of the plan. I want to go directly to questions, possibly taken from online participants. It seems there are none.

Let me ask the floor. I hope I can take one or two questions from the floor to the panelists.

>> AUDIENCE MEMBER: One in three Internet users is a child under 18 years old. I'm curious to hear: in designing those governance policy frameworks, have you, number one, ever considered including the perspective of children, and, number two, considered assessing the impact of AI on children as Internet users? Thank you.

>> YOICHI IIDA: Thank you very much. I take one more question. Okay, please.

>> I come from the Youth IGF summit. I've read that accountability is limited or defined by the state of the art. There's this element of explainability. Would you recommend abstaining, for instance, from using AI systems whenever the state of the art does not allow for explainability, in plain words?

(End of scheduled captioning)