
IGF 2020 - Day 4 - OF21 Strengthening Implementation Capacities for AI Ethics

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 




>> SASHA RUBEL: Good morning, everybody.  And welcome to our open forum on "Strengthening Implementation Capacities for AI Ethics."  My name is Sasha Rubel.  I'm with UNESCO.  We will talk concretely about how to translate ethical principles into practice and about the work that we are doing with the other international and regional organizations on this panel, but also with academia and civil society.

As many of you know, in November 2019, UNESCO's General Conference mandated the organization to develop a standard‑setting instrument in the field of artificial intelligence.  And my colleague, Dafna Feinholz, will share more about that.

This conversation specifically will be looking at how to translate macro level principles into practice, and the challenges that exist but also best practices and opportunities that exist in this field.

I would like to highlight that prior to the General Conference in 2019, UNESCO's Member States in 2015 adopted what we call the Internet Universality framework, which argues for a rights‑based, open, accessible and multi‑stakeholder approach to digital governance, and this is guiding our work on the Internet, emerging technologies and artificial intelligence more broadly.

Over the next hour, we will be looking specifically at how ethical principles for AI can be implemented: how do we translate high‑level principles into policies, and what kind of human and institutional capacities are needed to govern responsible and human‑centered artificial intelligence?

I'm happy to have with me here today my colleague Dafna Feinholz from UNESCO, Leonard Bouchet from the EBU, Jed Horner, Jan Kleijssen, Nicolas Miailhe, Clara Neppel, and Sophie Peresson.  Without further ado, I will pass the floor to my colleague, Dafna Feinholz.  Dafna, over to you.

>> DAFNA FEINHOLZ: Thank you very much for this invitation, first of all, and I'm very glad to share this panel with all the participants.  Many have been part of the process in one way or another, either during the consultation phase or afterwards, now that we have been developing the first draft.

I just want to make a short introduction, which is to explain that what I will show and share with you is the current text of the recommendation that we are developing, which is now going to enter the phase of intergovernmental negotiations.  Still, I wanted to say first of all that, in this crowded universe of many initiatives, UNESCO wanted to take on, or was mandated, as Sasha mentioned, to undertake this endeavor because it is a truly global institution.  Also because the impact of AI is affecting all parts of life, which is the mandate of UNESCO.  This is something we need to take into account, the cultural and all the other diversity, when we speak about AI, and this is exactly part of UNESCO's multidisciplinary mandate, including culture, education, et cetera.  So this is why.

And the last reason is because we have almost 30 years of experience in dealing with the ethics of science and technology and their impact on the life sciences, like bioethics, and we already have a number of standard‑setting instruments out there in the world which are unique of their kind.  So just to frame why.

And before I start with AI, I will share my slides, and I hope I will still see you, Sasha, with your signs; otherwise, just go ahead and tell me when I have two minutes left or something and I will stop on time.

So, just so you will understand better what I'm going to say: I'm going to try to stick to a text, because otherwise my enthusiasm will not allow me to stick to my ten minutes.  Just to let you know, this first part of the process was a very independent and very inclusive process, because the draft that I'm going to present to you was elaborated first by 24 experts in their individual capacities, appointed by the Director‑General of UNESCO, with gender balance and with a lot of different backgrounds in disciplines, basically all the backgrounds that are needed, from the professional and academic point of view but also experience with the private sector, with civil society, et cetera.

Then we had a very wide consultation process, in different formats.  There was an online consultation in which we gathered more than 50,000 comments, and then we organized 11 regional consultations with UNESCO partners, from which we got a lot of input, and we also got inputs from the deliberative workshops organized with Mila.

So I will start now sharing my screen with you, and please let me know if it works.

It's okay?

>> SASHA RUBEL: Not yet.

>> DAFNA FEINHOLZ: Oh, I already shared my screen.  Let me go back.

I will make it very quickly otherwise I will do without.  Okay.

>> SASHA RUBEL: Perfect.  We can see your screen.  You can play the slide show.

>> DAFNA FEINHOLZ: Okay good.  Now I don't ‑‑ okay.  Is that ‑‑ is that fine?


>> DAFNA FEINHOLZ: Great.  Okay.

So artificial intelligence may radically change the world we live in, as we know, but what will really determine and shape what the world will look like is the ethics behind the artificial intelligence.  So it is clear that we need a global normative umbrella that will help us to be truly united and coordinated in our efforts to tackle the digital divide, to eradicate bias, and to stimulate multi‑stakeholder governance and cooperation, and ultimately, simply put, to build AI that works for everyone's benefit.

UNESCO has taken on the challenge, mandated by its Member States as we just said, to develop this first global normative instrument in the form of a recommendation on the ethics of AI.  UNESCO brings together an extensive network of academic institutions, civil society organizations and private sector partners from around the world.

It represents an optimal platform for establishing and promoting a global normative framework for the ethics of AI, and therefore has the global legitimacy and unique mandate for these purposes.  UNESCO involved a group of leading experts, and the result is a normative and forward‑looking document that provides an anticipatory framework and offers concrete policy options to address emerging challenges.

It translates the "what" of ethics into the "how" of relevant policy action, offering concrete pathways for the realization of the ethical framework of universal values and principles, which is what we are trying to discuss in this session.

The recommendation is normatively valuable because of its global scope, because it has very strong language, which is one of the results of the consultations, because it has an emphasis on the "how" of policy, and because it offers concrete tools.  And lastly, because it focuses on the entire life cycle of AI systems, which is not always the case in all the initiatives.

Rather than equating ethics to law or human rights, or treating it as a normative add‑on to technologies, it considers ethics as a dynamic basis for the normative evaluation and guidance of AI technologies, referring to human dignity, well‑being and the prevention of harm as a compass, and rooted in the ethics of science and technology.  This recommendation aims to enable stakeholders to take shared responsibility based on a global and intercultural dialogue.

The recommendation is based on an interrelated set of values, principles and policy actions.  While the values inspire desirable behavior and represent the foundation of the principles, the principles unpack the values underlying them more concretely, so that the values can be more easily operationalized in policy statements and actions.

Foundational values play the role of necessary preconditions for the principles to work and, in effect, ensure ethical AI.

The document highlights four such values.  It must be noted that environmental concerns have been deliberately elevated to the level of values.  The reason is the enormous effect AI technologies have on the environment, and the fact that it is being neglected elsewhere.

Further, diversity and inclusiveness have also been moved to the level of foundational values, to underline the problem of discrimination and bias, including on the basis of gender, as well as the diversity, digital and knowledge divides.

The value of living in harmony and peace is a novel notion for this kind of document, which brings typically African and Asian approaches to the table, balancing out the prevalent agenda.

There are ten principles in accordance with which all actions must be undertaken.  Although many of these principles are similar in name to the ones identified in other documents, their substance is enhanced by the UNESCO approach.  For instance, although many documents provide for a principle of trustworthiness, in this recommendation it has been framed as an outcome of the operationalization of all the principles, and the proposed policy actions are all directed at promoting trustworthiness across the stages of the AI life cycle.

Critically, the recommendation is not meant to be a mere declaration of values and principles; rather, it is being developed as a powerful and transformative framework that operationalizes these values through impact‑oriented and normative actions within ten policy areas.  The approach through which these actions are to be implemented at the national level is pluralistic, multidisciplinary, multicultural and multi‑stakeholder.

The recommendation also takes a firm stance on some of the as‑yet unresolved issues: for example, that ethical and legal responsibility can be attributed only to natural and legal persons.  Gender equality features prominently in the recommendation; it is also one of UNESCO's two global priorities.  We are concerned that it is in the digital industries, including AI systems, where some of the most acute gender disparities persist.

Among the innovative elements of the recommendation are the capacity‑building instruments to help countries tap into the power of AI.  These instruments include an AI readiness evaluation tool for developing custom‑tailored national AI strategies and plans of action.  The capacity‑building initiatives also include advanced ethical impact assessment tools and due diligence and oversight mechanisms.

These instruments will be designed by UNESCO in a collaborative effort across the house, also involving other partners, to help countries identify ethical impacts, in particular on the rights of women and girls, vulnerable and marginalized groups, labor rights and the environment and ecosystems.

As you can see, and I don't know if I changed my ‑‑ yes.  I don't know where I am.

As you can see, the recommendation represents a comprehensive ethical framework that encompasses the full spectrum of ethical concerns, from inclusivity and fairness to equality and nondiscrimination.  And it does so by maintaining focus on all stages of the AI life cycle.  This is crucial because often the bias and discrimination manifested downstream, as experienced by users, are the product of structural inequalities and exclusion upstream, such as the lack of diversity in the management or the technical teams.

So our aim and interest, ensuring that AI works for everyone's benefit, converges with those of the other participants of this forum.  We want very much to learn about your ideas, to discuss how this work might go further, and to continue the conversations that we have already started bilaterally on other occasions.

Thank you very much for this.

>> SASHA RUBEL: Thank you, Dafna, for that very comprehensive presentation of the draft recommendation and also of the process, and, again, thank you to you and your team, Tee‑Wee, Maxim and others, for all of their cooperation, in the field and in the framework of the multi‑stakeholder process where input was solicited into the draft recommendation.  Thank you to all the colleagues here today who have contributed and who continue to cooperate very closely, notably our colleagues and friends from the Council of Europe and the OECD, who respectively have their own processes for developing, or have developed, principles, frameworks and standards in this field, as well as the IEEE and IEC, and we trust this will continue going forward.

In a previous event of the IGF, Clara Neppel mentioned something that comes from the European Commission's conception, which is this idea of ecosystems of trust, which is essential to ensure that we cooperate and move forward with the development and the translation into practice of responsible AI.

So Dafna mentioned this idea of AI ethics and translating principles into practice, and I can think of no better visual image right now than Sally Radwan from the government of Egypt, who is currently in a car and will be speaking to you from a car on the way to a ministerial meeting to talk about the implementation of ethical AI.  So Sally, welcome, and we are very happy to have you here with us in transit, moving from principles into practice, which, again, visually represents the commitment of this panel here today and our work all together more broadly.

Sally, you are not only an advisor to the Minister for AI in Egypt, but a key actor in the Working Group of the African Union, as well as one of the experts of the ad hoc expert group of UNESCO that prepared what Dafna just presented.  As someone who works at the national, regional and international levels all at once, what do you see as the main challenges that exist in translating ethical principles into practice?

And what in your opinion are some mechanisms and tools that are needed to translate principles from the international to the regional and national level?  Please, Sally.

>> SALLY RADWAN: Thank you, Sasha.  I will try to be very quick before I get ambushed here and dragged into the meeting.  I think obviously the main issue is achieving consensus on something that is generic enough for everyone to agree on, but yet specific enough to still be relevant and possible to implement on the ground.

And the truth is, these are two mutually exclusive requirements.  So what ends up happening in most of these recommendations is that you keep watering down the language until it becomes too vague to be of any kind of practical use.  And for me, the key elements here are relevance and applicability.  For example, we need to acknowledge that there are huge differences between, say ‑‑ I like to pick on my Finnish friends because they are always so nice ‑‑ Finland talking about capacity building for AI and Egypt talking about capacity building for AI.

On one side, you have a country with a population of 5.5 million people, with a 100% literacy rate, where practically everyone has graduated high school.  On the other hand, you have 100 million people, only 70% of whom are literate according to official figures, and we are not even talking about digital literacy.  So you can't set a metric like "we need to educate 1% of the population on AI" and have it be relevant worldwide.

In Finland, you are talking about some 50,000 people; in Egypt, you are talking about 1 million.  And before I teach them AI in Egypt, I have to teach them how to read and write, then digital skills, and then teach them AI.  So the starting points are very, very different, and it's impossible to set a global measure that is equally applicable for everyone.
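Sally's point about uniform percentage targets can be checked with back‑of‑the‑envelope arithmetic: a fixed share scales with population, so 1% of roughly 5.5 million is 55,000 (close to the 50,000 she cites), versus 1 million for Egypt.  A minimal sketch, with the population figures hard‑coded as the approximate values from the talk:

```python
# Why a uniform "train 1% of the population in AI" target implies very
# different absolute efforts. Population figures are the approximate
# numbers cited in the discussion, not official statistics.
populations = {"Finland": 5_500_000, "Egypt": 100_000_000}
target_share = 0.01  # 1% of the population

for country, pop in populations.items():
    print(f"{country}: {int(pop * target_share):,} people to train")
# Finland: 55,000 people to train
# Egypt: 1,000,000 people to train
```

The same percentage thus demands a training effort nearly twenty times larger in absolute terms, before even accounting for the different literacy baselines she describes.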

In addition, of course, budgets ‑‑ education budgets, for example ‑‑ in a country like Egypt are limited.  So what do you do?  Do you shift budgets from basic education to AI, let alone AI ethics?  It's a tough call to make for any decision maker.

And then secondly, whose ethical standards do you employ?  Of course, there are things that we can all agree on, but there are also nuances across cultures and across political systems.  So in some countries, with more of a social equality approach, things like safety nets for employees at risk are highly important, whereas in more capitalist‑minded countries, the stance will be that you are stifling innovation and entrepreneurship.

So how do you reconcile the two?  And there is also applicability in terms of, for example, asking governments to put in place laws and regulations for the ethical stewardship of AI.  Some countries don't even have the legal and legislative frameworks to do that.  And there are many, many other examples.

And that's why I think, when drafting the UNESCO recommendation, it was important to acknowledge the differences between countries' situations, needs and starting points, and to propose a tool for making the recommendation relevant and implementable, and that's the assessment tool that we are proposing.  It identifies ways in which UNESCO, other organizations, or countries working together can help to bridge those gaps.

We realize this is a bold step, but we are hopeful that Member States will see it the same way and want to cooperate with us to ensure that this recommendation gets implemented.

Finally, I would just like to say, in terms of aligning viewpoints, that what we found is that a bottom‑up approach really works.  For example, within the AI Working Groups of Africa and the Arab League ‑‑ the Arab League one hasn't started yet, but in the African one, we discussed different points that could form a potential African stance on AI, and we used that input in the UNESCO discussions, which I think helped present a somewhat unified front.  This was just very experimental and informal, but I think if we do more of it, it will really help in the final negotiation of these recommendations, because then you are potentially trying to reconcile six different voices in the case of UNESCO instead of 193.

So that's all I have to say for this morning.  Thank you again very much, sorry about the abruptness of everything, and I'm eager to listen to the rest of the speeches.  Thank you very much.

>> SASHA RUBEL: Thank you very much, Sally, for starting this conversation off with such concrete recommendations, and particularly for the need to change our gaze as it concerns ethical and responsible AI: a human‑centered and ethical artificial intelligence need not stifle entrepreneurship.  You also underlined the need for advocacy and for legal frameworks at the national level, as well as a bottom‑up approach in which principles are driven by practice.

Please don't apologize for being in a car.  I think it's a perfect image to start off our conversation because we are not sitting behind our desks talking about this.  We are doing this in action and everyone around the table is doing this in action.

Speaking of action, I'm happy to turn now to Jan Kleijssen of the Council of Europe, and I would like to recognize Jan's great contribution to the standard‑setting process and the cooperation with UNESCO.

There have been a lot of exciting developments: the Council of Europe adopted seven reports concerning the impact of AI applications in various fields.  These reports focused on, among other areas, the need for democratic governance of AI, the role of AI in policing and criminal justice systems, preventing discrimination caused by AI, and ethical and legal frameworks for the research and development of neurotechnology.

The work specifically to develop a legally binding instrument was supported by PACE, which underlined that the instrument should ensure that AI‑based technologies comply with Council of Europe standards and ethical principles such as transparency, fairness, security, and privacy.

Can you tell us what needs to be done to effectively build the institutional capacities to govern AI that are at the heart of the Council of Europe's work?

>> JAN KLEIJSSEN: Thank you very much.  Good morning, good morning, everyone.  I hope you can hear me.  Does this work?  Yes?

>> SASHA RUBEL: Perfect.

>> JAN KLEIJSSEN: Good.  Very good.  Thank you very much for this opportunity and very interesting to be listening to the presentations.

The Council of Europe ‑‑ perhaps I should set out in two words what PACE and the Council of Europe mean.  The Council of Europe is an international organization in Strasbourg which has 47 Member States, and its mission is to promote human rights, the rule of law and democracy.  We do this through setting legal standards, many of them binding.  We are at the origin of some 200 international treaties in a variety of fields, from human rights to data protection, the fight against corruption, the protection of children, and fighting domestic violence, just to name a few.

And we also deal with technologies.  Already in 1981, the Council of Europe adopted the mother of the GDPR, the data protection Convention 108, and we have the only binding international treaty on cybercrime, the Budapest Convention ‑‑ just to position us.  The organization is made up of governments, but it also has a body called the Parliamentary Assembly, the PACE which you kindly referred to, which is made up of national parliamentarians.  They sit in their national parliaments and then five or six times a year come to Strasbourg for sessions and adopt recommendations which often lead to treaties.

In fact, of the 200 treaties that the Council of Europe has, the ones on domestic violence, the fight against the sexual exploitation of children, or data protection all found their origins in the Parliamentary Assembly ‑‑ just to explain our business model, if you like.

The texts which you referred to are indeed very, very relevant to the work of the Council of Europe on artificial intelligence.  We are at the moment working on two parallel tracks.

On the one hand, we are trying to establish a global legal framework, and, again, that sounds like jargon.  I actually hope it will be a treaty, a convention, with general principles that should apply to all forms or uses of AI that directly have an impact on human rights, the rule of law and democracy.  There are, of course, quite a few uses and applications of AI that do so.  Not all, but a number of them do, and those therefore also constitute a risk.

And CAHAI, which you kindly mentioned, is looking at this together with our partners, because you asked what we can do to ensure that we have the human resources, the capacities.  One answer is, of course, to team up with others, and we're very happy to have UNESCO, the OECD, and the European Union all working with the Council of Europe to see whether it's possible, and then how, to establish this treaty.  The answer to whether it's possible should be given at the end of the year; if it is a yes, which I strongly hope, we will start negotiations at the beginning of next year.

So it's all getting there fairly quickly, and, of course, we will be basing ourselves to a large extent on the work carried out and just presented, for instance, by UNESCO, but also by the OECD, the European Commission and others.

But we take it to legally binding standards, and I think that's perhaps the unique contribution of the Council of Europe: to translate the ethical standards which have been very carefully prepared by many of our partners into a number of legally binding standards, which means that governments can be held to account.  It's not just a commitment; it's actually an obligation.

That is CAHAI.  In addition to CAHAI, the second track concerns the criminal law aspects of AI, and if you ask me what criminal law has to do with AI, think about self‑driving cars.  Judges in a number of countries have already had to decide on cases where accidents were caused by automated driving systems.  And there is at the moment no international legal standard ‑‑ in fact, there are very few national legal standards ‑‑ on the specific criminal law aspects of AI.  And this very week, in fact yesterday, a specific committee of experts on criminal law decided to pursue work on a binding legal instrument.  So, again, a treaty, but a specific one on the criminal law aspects of AI.

A number of you may have also heard of the work on AI in judicial systems which was prepared by the Council of Europe.  That is not a treaty; that's a recommendation, but it was drafted by professionals working in the judiciary to guide them on very complicated questions: to what extent can you delegate decisions that traditionally are taken, for instance, by judges or prosecutors, to automated decision making, to algorithms if you like?  That raises a lot of ethical issues.

It raises a lot of legal issues as well, of course, and the charter that was developed helps to provide guidance there.

And in a number of other fields, we will also be looking at the translation of the general principles into specific domains.

To give you some concrete examples of what we think is important for this development: first, a human rights impact assessment prior to rolling out an AI application.  If governments decide to delegate to machines what is normally done by humans, we think it's important that an assessment is made of the impact that would have on the citizen.

A second is the question of certification.  IEEE are the experts in certification, and I look very much forward to Clara's thoughts about this.  But we think that certification of algorithms and AI systems is an important issue.

And thirdly, an example I would like to give: when you go to see a doctor, you trust that this doctor has actually finished his medical studies, that he is actually qualified to examine you or operate on you.  The same goes for pilots: when you take a plane, you would like the pilot to be qualified and to have done his flight school.  Yet we trust more and more AI applications that govern our lives and have huge impacts on what we do.  And therefore the question arises: should we not also give careful thought to the training and qualification of the professionals who design, develop and apply AI systems, especially those AI systems which have a huge impact on our lives?  Since you mentioned human resources, that also is a question we will be looking at.

I leave it at that and I look forward to taking part in the rest of the discussion.  Thank you so much.

>> SASHA RUBEL: Thank you very much, Jan, for that overview of the very important work of the Council of Europe.  UNESCO looks forward to cooperating with you both in the development and the deployment of the tools that you have mentioned, including the human rights impact assessment.  I would also like to thank the Council of Europe for their cooperation in the framework of our joint work in training the judiciary on this subject, specifically on predictive justice and artificial intelligence.  So we look forward to that cooperation going forward, building the capacity of judges here.  It really represents what we believe at UNESCO: the question is not only, for example, training the judiciary in digital literacy, but also training the technical community on the ethical aspects, and we thank you for your cooperation in that field.

I would like to turn to Karine Perset from the OECD, with whom, together with her team, we have had the pleasure of working for several years now, both in supporting the OECD AI Policy Observatory and on the OECD principles on artificial intelligence.  Karine, in May of 2019, the OECD adopted its principles on AI and published a book that looks at the technical, economic and governance landscapes and highlights key policy questions.  Then in November of 2019, you are all very active, the OECD published a working paper, "Hello, World," on the use of AI in the public sector.

Shortly after, in February, the AI Policy Observatory was launched to provide resources to policymakers, and at the same time the OECD AI network of experts was launched, which has Working Groups dedicated to different aspects of AI policy.  Specifically, one of these Working Groups is dedicated to tools and guidance for the values‑based AI principles, and another specifically to implementation guidance for national policies.

Can you tell us a little bit more about the outputs of these Working Groups and specifically how they support Member States' priorities to facilitate trustworthy and responsible AI policies?

Karine, over to you.

>> KARINE PERSET: Sure.  Hello to everyone, and thank you very much for inviting the OECD today.  I would just like to first congratulate UNESCO and Dafna and Sasha for all the work you have done so far; accompanying the UNESCO principles with concrete implementation guidance is very promising, and your focus on capacity‑building tools is really critical also.

So, back to your questions, I will first give you a brief overview of the AI ecosystem we have been building at the OECD with our partners.  An image is worth a thousand words, and this is not the simplest, so I will just illustrate what I'm talking about with a slide.  I don't know if you can see it.

>> SASHA RUBEL: Yes, perfect.

>> KARINE PERSET: Okay.  Great.

So, as you mentioned, the AI principles were adopted in May of 2019; those are 10 principles that are really at the core of our work.  That's the red circle in the middle of the chart.  And those were also used in the G20 AI principles in June of 2019.  So those are the core of our work.

And then we launched the OECD AI Policy Observatory.  That's the multicolored circle around the AI principles.  It's not a group but an online platform that we launched in February of 2020, as you said, to provide resources to policymakers and anyone interested in, one, implementing the principles; two, AI in different policy areas, from agriculture to education or SME policy, et cetera; three, trends and data, which is a really strong focus for the OECD, trying to build the evidence base and to assess what's going on, where, how fast it is developing, what's happening on the job market front, what's happening in research, et cetera; and four, data on national AI policies and on stakeholder initiatives.

And the next circle on the left is what you mentioned, the OECD Network of Experts on AI, which we call ONE AI.  It develops structured guidance and good practices on these 10 principles, and it's doing that with three Working Groups that I will mention in a minute.  I also just wanted to mention, because this is fairly new, the Global Partnership on AI, which was launched in June of 2020 under the leadership of France and Canada.  The group aims to advance cutting‑edge research and pilot projects on AI priorities, and we are hosting part of its Secretariat.  That's a complementary stream of work that we're also engaged in.

And lastly, at the bottom, is the OECD Global Parliamentary Group, which is a large network of legislators around the world who are working on AI.  They are working very closely with the Council of Europe and the European Parliament to form a legislative hub and learning resources for legislators.

So our participation in these critical and complementary initiatives is really important ‑‑ UNESCO's initiative in particular, and the Council of Europe's and the European Commission's, are especially important.  And I would like to stress those.

And I think it's important to say, as Sasha pointed out, that there are really strong linkages between all of these initiatives, which have really similar objectives but come from organizations with different mandates.  These different organizations bring very different memberships and different strengths to the table, but we are all moving in the same direction.

Now, to move to the OECD's work on policies for AI: that is multi‑stakeholder work that focuses on three main areas.  The first priority, on the left, is a classification framework.  It sounds boring, but it's actually pretty fascinating.  This is a framework to help policymakers assess what types of AI systems raise what types of policy issues, by using four core dimensions.

The first is the context in which the AI system is deployed: the sector, the critical nature (or not) of that sector, the risk to human rights and to well‑being.  The second dimension is the data and the input: how the data are collected, what they are, what type of data and domains, et cetera.  The third is the AI model itself, and the last is the task and the output of the AI system.  So based on those four core dimensions, we are coming up with a framework that we think can identify really key policy considerations and can help policymakers assess the risks and the opportunities of specific types of AI systems at a much more granular level.
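To make the four dimensions above concrete, here is a minimal sketch of how such a classification could be represented in code.  This is purely illustrative ‑‑ every class name, field and the toy risk-aggregation rule are hypothetical, not the OECD's actual framework:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystemProfile:
    # Dimension 1: context -- the sector, its critical nature, and the human rights risk
    sector: str
    critical_sector: bool
    human_rights_risk: Risk
    # Dimension 2: data and input -- what data are used and whether they are personal
    data_type: str
    personal_data: bool
    # Dimension 3: the AI model itself
    model_type: str
    # Dimension 4: the task and output of the system
    task: str

    def policy_risk(self) -> Risk:
        """Toy aggregation rule: a critical sector plus personal data raises the profile."""
        if self.critical_sector and self.personal_data:
            return Risk.HIGH
        if self.critical_sector or self.personal_data:
            return max(self.human_rights_risk, Risk.MEDIUM, key=lambda r: r.value)
        return self.human_rights_risk

# A hypothetical healthcare triage system: critical sector, personal data.
triage = AISystemProfile(
    sector="healthcare", critical_sector=True, human_rights_risk=Risk.MEDIUM,
    data_type="clinical records", personal_data=True,
    model_type="neural network", task="patient triage recommendation",
)
print(triage.policy_risk().name)  # HIGH
```

The point of the sketch is the granularity the speaker describes: two systems using the same model type can land in very different risk categories once context, data and task are taken into account.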

So the second Working Group, at the top right, is developing tools for trustworthy AI ‑‑ really identifying tools that can help decision makers, actors and policymakers implement trustworthy AI.  And to do that, we have leveraged what is already taking place, so it's first a stocktaking.  We have distinguished different types of tools.  First, process-based approaches: things like corporate governance tools or risk management approaches.  The second type are the technical approaches: standards and tools for bias detection and explainable AI, research programs ‑‑ the IEEE has a lot of initiatives in this category.  The third is education and capacity building approaches, and obviously UNESCO's work is critical for this type of work.

We think no one tool is enough.  We need all of them, depending on the circumstance and the context.
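As one concrete illustration of the "technical approaches" category mentioned above (tools for bias detection), here is a minimal sketch of one widely used fairness check, the demographic parity difference.  The data and the scenario are hypothetical, and real audits use far richer metrics than this single number:

```python
def positive_rate(outcomes):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups.
    0.0 means parity; values near 1.0 mean one group is strongly favored."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions for two demographic groups.
approvals_group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
approvals_group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% approved

gap = demographic_parity_difference(approvals_group_a, approvals_group_b)
print(f"demographic parity difference: {gap:.2f}")  # 0.50 -- a gap worth auditing
```

A check like this is only a starting point for the process-based and capacity building approaches discussed here: a large gap does not by itself prove discrimination, but it flags where a deeper audit is needed.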

At the bottom right is developing practical guidance on national AI policies, including things like governance of AI systems at the national level, how countries are doing that, and what seems to be working well.  Through the Working Group, we're taking a deep dive into what over 60 countries are doing, looking at AI R&D policies, compute and software, governance schemes and test beds.  Skills and labor markets are a big focus, as is international cooperation.

So with that ‑‑ I hope I didn't go over ‑‑ thank you very much for your attention.

>> SASHA RUBEL: Thank you very much, Karine, for that overview.  I don't think anybody on this call, or on the other side of the screen participating, thinks classification systems are boring.  So thank you very much for the exciting work that the OECD is doing.

If people are not aware of The AI Wonk, coordinated by the OECD, I invite you to follow it; it brings together a lot of very exciting thought leaders thinking about artificial intelligence, and we are also very happy to be contributing to it online as well.  And thank you to your team ‑‑ Luis, Laura and many others with whom we have the pleasure of working closely ‑‑ and for highlighting the fact that we are very complementary in our mandates.  I think this is something that we are saying to our Member States: there's a misconception that we are in competition, but we are complementary, working together within our respective mandates to reinforce each other and to build on each other's work.  So thank you for that spirit of cooperation here today with the OECD and, of course, also with the Council of Europe, the ICC, the IEEE and many others.

We talk a lot about translating principles into practice, but one of the key stakeholders responsible for doing so is not only the public sector but also the private sector, and I'm happy to have Sophie Peresson here with us today.  In cooperation with UNESCO and other partners, the ICC has worked on moving from principles to action, focused specifically on the role of the private sector in the framework of the development of ethical AI.  From this conversation, what are some of the best practices of private sector counterparts that have developed ethical guidelines for their companies in line with existing frameworks and standards?  In your opinion, what are some of the barriers to the adoption of principles and guidelines, as well as potential incentives or motivators that would encourage more widespread adoption, help unify some of these recommendations globally and ensure that they are translated into practice?

>> SOPHIE PERESSON: Thank you very much, Sasha.  It is great to be here.  Thank you for the invitation.  Good morning, good afternoon and good evening.  I work at the International Chamber of Commerce.  For those of you who don't know us, we are the voice of business.  We represent 45 million businesses across the world, north and south, from multinationals to SMEs.  We have an established track record in this area.

We have been advocating for the potential of digital technologies to act as a catalyst for accelerated implementation of the UN Sustainable Development Goals, and for that we have observer status at the UN.

The importance of this agenda and this foundational notion cannot be overstated in the context of the COVID pandemic, both in terms of containing the pandemic and enabling a systemic economic rebuild.  We are committed to working on AI, and we acknowledge the opportunities and the challenges that come with that.

AI technologies are transforming all sectors of the economy, creating new opportunities for productivity gains and economic growth, and profoundly changing the R&D life cycle by significantly accelerating R&D ‑‑ which I think you will agree is critical when it comes to the clinical trial process to develop a COVID‑19 vaccine.

But like any transformative technology, it's creating new challenges for society, particularly around the role of human agency, transparency and inclusivity.  What is our response to this?  As Sasha is asking me this very easy question today ‑‑ thank you for that.

Well, we are responding by developing a foundational paper on human-centric and trustworthy AI, partnering with Oxford University.  This project is seeking to develop a set of principles, supported by industry case studies and best practice examples, that will help business implement ethical technologies.  Concretely, how are we going about this?  We are gathering information to form collective business positions among the International Chamber of Commerce's membership, which is made up of national chambers across the world.

We want to bring to the table and gather perspectives from actors just starting out in the space, especially in developing countries, SMEs, and groups such as women-led and minority-led businesses.

We want to demonstrate through case studies how businesses around the world already active in the field of AI are implementing the principles.  And we want to ‑‑ and this is key ‑‑ identify both the challenges faced and the good practices and applicable solutions on the implementation of principles.  Given that we have UN observer status and we work with many of the international and regional organizations that are here on this panel and in this ecosystem, we want to inform our advocacy efforts in international and regional fora.

Ultimately, we aim to bring the principles to life through case studies taken from diverse sectors and geographical locations.  The objective is to build an evidence base that goes beyond policy statements.

So Sasha, you asked me what some of the barriers to adopting AI principles are.  This is obviously a complex question ‑‑ I think Sally brought it to life today ‑‑ and I'm going to provide you with a relatively general answer, because unfortunately I don't have the opportunity to get very granular.  According to our findings, policy and regulatory discussions concerning AI have been either sector‑specific ‑‑ here I have in mind healthcare and finance ‑‑ or issue‑specific, namely data protection, cybersecurity and anti‑discrimination laws and guidelines, when establishing policy frameworks.

Arguably one of the barriers to the adoption of AI principles and guidelines could reside in the plethora of initiatives in this space.

Given that there is currently an increasing number of policy initiatives, spearheaded by governments, coalitions of governments or international organizations, this could lead to regulatory gaps or create inconsistent or misaligned compliance obligations.  This could also negatively impact companies, especially when it comes to their role and obligations in implementing such guidelines.  We should not underestimate the complexity of navigating through these different policy frameworks.

Business has acquired knowledge and gone through the challenges that come with AI implementation along the way.  We believe that the International Chamber of Commerce's principles have the potential to support the entire business community with good practical guidance, and for businesses to learn from one another.

We feel we need to create a common language: the public sector needs to work with the business community to develop and further refine AI principles that the business community subscribes to.

To reach the best end result, the policy initiatives being developed should be discussed and further refined in partnership with the business community, which is at the front line of implementation and practical application on a daily basis.

There's a need for a unifying voice to create a policy environment at the international level that fosters trust in, and adoption of, trustworthy and rights‑based AI.  To continue to fuel innovation and increase the societal and economic benefits of AI while mitigating harm, such policy environments must be flexible, human-centric, globally compatible and market driven.

And I would like to take the time to pause and to thank UNESCO ‑‑ all of your colleagues, Sasha, Dafna, and the others who are online ‑‑ for including the business voice in this exercise of drafting the recommendation, and for its capacity building dimension, which we can't underline enough.

We feel we have a unique opportunity to convene business worldwide, to provide a common private sector perspective, and to gather input from people already involved in this space and bring to the table voices that would otherwise not have been heard, especially from the global south.  We want to gather perspectives in a globally unified way.

If I could add one other barrier or hurdle that we could also think about: while business is spearheading such developments in the field of AI, we know that not all businesses are equal here.  If you look at SMEs, obviously, they don't have the same type of resources to develop and to engage on this front, and that's why I feel that the capacity building dimension that you are developing at UNESCO and in other fora will be absolutely critical.

Thank you so much for your time.

>> SASHA RUBEL: I love your emphasis on the need to develop a common language, but also on highlighting some of the innovation and work coming out of the global south.  Two of the points that you and your colleague highlighted at the pre‑event we had yesterday particularly struck me: first, the idea that just because something is technologically feasible doesn't mean we should do it; and second, that we need to stop speed dating between international and regional organizations and the private sector, and instead invest in long‑term ‑‑ perhaps not monogamous, but long‑term ‑‑ relationships.  So what I take away for building these ecosystems of trust is the need to inscribe this cooperation in the long term, both among international and regional organizations and with the private sector.

I would like to turn now to Leonard Bouchet from the EBU.  The AI and Data Initiative of the European Broadcasting Union, which you chair, has been convening a Working Group of broadcasters to see how to translate the values of public service, including diversity and accountability, into AI deployment in the media sector.

You have been working to develop case studies, best practices and practical guidelines to translate ethical principles into practice, such as, for example, the BBC checklist for ethical machine learning to support accountability.  Can you tell us a little bit more about some of the best practices you have identified in the media sector, which is one of the core stakeholder groups of our work at UNESCO, and what lessons can be drawn from them to ensure that principles are translated into practice?

>> LEONARD BOUCHET: Yes, I can.  I can try at least.

What we have learned ‑‑ you have mentioned one of the main things I want to talk about, which is the great work of our colleagues from the BBC.  They have done, I think, great work in putting their core values as media into principles for machine learning.  There are several interesting things to mention about that.

First is that they actually didn't use AI wording anywhere in these principles.  They just called them machine learning engine principles, and that's it.  And that's interesting to me, because in the field right now, that's what we are really experiencing: those are the real tools everyone is using now and that are really trending.  It's, of course, not the whole field of AI.  I think one of the lessons we have learned is that when you bind the principles to the tools people actually use, then people adhere to them more.

And the other important and interesting lesson, I think, that they have learned in that process is that they really used their values ‑‑ the values and the ethics of public service media ‑‑ and tried to implement this type of principle in the new tools and new technology field.

And that's, I think, what makes these principles quite strong, quite efficient and quite practical to implement elsewhere in other public service media sectors.

So they have made it very specific.  And ‑‑ this also resonates with one of the questions in the Q&A session here today ‑‑ regarding what they have done in terms of avoiding biases: honestly, I would love to share lessons learned and best practices about how to avoid biases in machine learning systems right now.  But actually, I can't, because we are all facing this more as a challenge.  It's really a challenge ‑‑ a real one.  We are now starting to discover it and trying different ways of circumventing it, but for the moment, honestly, technically speaking, it's really a hard one.

And I don't really have the answer on how we will tackle and overcome this challenge.  The only path I can see right now is that, amongst the members of the EBU, we are talking.  We are talking about sharing what we do.  We are talking about integrating what others do.  And the hardest thing that we try to do is also to collaborate ‑‑ to collaborate among the institutions, with the private sector, with scientists, and also with the general public; and also, within the EBU, with people from the south or anywhere in the world.  Even though we are a Europe-based organization, we love to exchange with others with different perspectives.

And in my opinion, as far as I understand the field, I think it's one of the only ways we can think about tackling this challenge of how to really avoid diversity biases in these kinds of systems.

I have seen a lot of things in the chat, and if you allow me, Sasha, I would also like to answer a bit of that: how can we actually make people from communities who do not have access to the digital world benefit from artificial intelligence techniques?

Well, that's also a very interesting point.  I take that back to the community I have the pleasure to head.  One thing I want to mention in that field is that, for that challenge, we also have to ‑‑ maybe not make guidance or state principles, but ‑‑ invest more in developing techniques to lower the barrier of energy needed to make these tools run.  Right now, the amount of energy needed to do proper machine learning and get proper results is really quite high, and we have to find new ways to do that with low-energy approaches.  And, of course, that allows us to deploy these tools in more contexts ‑‑ contexts that are not as technically agile as what we can see in Europe.

I don't know if that is what you asked, Sasha, but that's what I can say at this stage.

>> SASHA RUBEL: Thank you very much.  And thank you for placing the emphasis on the need for collaboration, both within the media sector and beyond, and for raising something that is central to UNESCO's work on artificial intelligence: the question of the amount of energy needed, and the relationship between AI and the environment, which we are exploring both in the framework of our recommendation and in the framework of foresight with regard to what the AI of tomorrow will look like and its impact on climate change.  And I'm very happy now to turn to Benjamin Prud'homme, who is from the Quebec Institute for Artificial Intelligence and is working with us specifically on foresight related to emerging issues in artificial intelligence.

Benjamin, in 2017, the government of Canada appointed CIFAR to develop and lead the Pan-Canadian Artificial Intelligence Strategy.  Worth $125 million, it is the world's first national AI strategy, with an objective to translate AI research discoveries into applications for the public and private sectors, leading to socioeconomic benefits.

The Quebec Institute for Artificial Intelligence, also known as MILA, has supported UNESCO's work on AI, notably through the open dialogue on AI ethics to contribute to the UNESCO recommendation on the ethics of AI, in cooperation, of course, with Algora Lab and the University of Montreal.  You are the world's largest academic research center in deep learning, and MILA has ensured social influence and continued stimulating a democratic dialogue not only on the importance of AI but also on its responsible development.

This dialogue is also hosted by MILA.  I would be curious to hear, from your perspective, what is the role specifically of academia and research institutions in ensuring that principles are translated into practice and that the capacities of all stakeholder groups are built so they are equipped intellectually to do so?

>> BENJAMIN PRUD'HOMME: Yes, thank you, Sasha.  It's a very large question with very limited time, and knowing academia is not always the best at being concise, I will nevertheless give it my best shot.  So first, thank you for the introduction.

The first thing I would say is that AI is very specific in the sense that industry is a big player, including in fundamental research.  So I just want to mention that I will be focusing more on the university and research center front of things, and less so on fundamental research integrated with industry.

So I think the role of academia is both essential yet limited.  It's essential because academia, in all disciplines including AI, is always very important, both at the inception stages ‑‑ fundamental and applied research ‑‑ but also as a watchdog as the research goes further within the life cycle, when we come back and say: what have you been doing with what we have produced?

Also, AI is very young if you position it in comparison to many other disciplines, and so a lot of it is still at the inception stages and therefore still within universities and research centers, which increases how important it is for academia to be thinking about this.

It's also limited, because academia is only one of the many stakeholders in the AI life cycle.  I want to pivot the question towards how academia can fill that role or that responsibility, and I think there are many ways to do so.  I listed five, and I wanted to put those on the table and have us discuss them.

The first is that when you are doing fundamental research ‑‑ whether it's research oriented towards, say, climate change or very fundamental research on deep learning and so on ‑‑ you should always be very aware of epistemology and constantly ask yourself: why am I doing this research, and how should I be doing it?  The second piece is that, as a professor, you should make ethics part of your AI teaching.  You have a wonderful opportunity to be speaking to everyone who is then going to go into governments and industry, and you have the privilege of instilling in these people the notion that ethics is not separable from AI ‑‑ that if you want to be doing good AI, even fundamental research in AI, you need to be doing it ethically.

I think the third space is applied research, where we can actually be a key player in developing practical tools.  The recommendation that UNESCO is putting forward talks about transparency, explainability and biases, but we're going to need tools to measure these and to audit these, and we can be key players in developing those.

The fourth one, sorry, is one that's very dear to me because of my human rights background: you need to be embodying those principles in practice, because it impacts the knowledge you create.  If you want diversity and inclusion to make sense, you need to hire women, indigenous people, People of Color.  If you want to help develop capacity, you need to partner up with emerging countries and universities.  If you want literacy, you need to connect with civil society and citizens ‑‑ and we have had the chance to do that via the open dialogue with UNESCO; thank you for the opportunity.

And the last one, which is very dear to me: we need to shift that, at least partially, from the shoulders of individual professors to the institutions.  How can I, as the director of MILA, create an institutional culture that makes it very clear that ethics is front and center?  I can put in place mandatory courses.  I can devote grants to that research.  I can choose and promote professors who put that at the center of their work.  I can have ethics committees.  I know that my time is now done, I believe, but basically these are practical ways in which we should be advancing ethics in AI institutions.  I think the key is always reminding ourselves that ethics is an essential part ‑‑ it's not possible to dissociate it ‑‑ and that we need to look at what we are, and not only at what we are producing.  We need to be embodying those principles, because that creates impactful knowledge.

I would really turn it back to you and thank you for your work both on the recommendation and for holding this panel.

>> SASHA RUBEL: Thank you very much, Benjamin.  I would like to thank you and all the panelists for responding, despite the challenges, to hard questions.  The questions are hard on purpose, because I think that it's also a way to show how we are sitting in this uncomfortable, creative and iterative space and finding solutions together through this kind of dialogue.  So thank you for accepting those hard questions, but also for being so concrete in your proposals.

I'm thrilled to have academia on this panel, also just so that we can have the word epistemology on this panel ‑‑ but also for your emphasis on the need for ethics by design, and diversity and inclusion, which is really at the heart of our work at UNESCO and one of the reasons why we appreciate our cooperation with MILA: how to involve more women, indigenous communities, minority and disabled groups in co‑developing the future of AI, and how to institutionalize this type of approach of ethics by design.  Thank you very much for your concrete proposals and your cooperation.

I would like to turn now to Jed Horner.  Jed, you are part of Standards Australia, which is a member body of the ISO, and one of the things that the ISO is perpetually underlining is how AI presents new and unique challenges, specifically for ethics.  Notably, systems that leverage AI can be implemented by many different users in different ways across various application verticals, from healthcare to mobility, with completely different requirements and sometimes, in fact, with market and regional differences as well.

To address this type of diversity, the ISO and the International Electrotechnical Commission launched a wide range of work items through their joint technical subcommittee dedicated to AI, with 47 participating member countries including Australia.  The Working Group on trustworthiness has also worked to collect and identify the ethical and societal considerations related to AI, linking back to the trustworthiness projects the ISO is working on.

You also developed a technical report which highlights the specifics of ethical and societal concerns in relation to other, more generic ongoing projects on trustworthiness, risk management and bias, among other issues.

How, Jed, does this work provide guidance to other ISO and IEC technical committees developing standards for domain‑specific applications that use AI?  And in your opinion, what are the capacity gaps you have identified, and what could be done to address them?

>> JED HORNER: Thanks so much.  Good evening to everyone in my neck of the woods ‑‑ to those Australians staying up for this panel ‑‑ and I think a lot of you Europeans are probably waking up for it as well.  It's great to be with you all tonight.  You did mention the work of ISO/IEC JTC 1/SC 42; I would preface by saying it has 47 participating member bodies, including countries, which is really fantastic to see ‑‑ so broad in terms of the reach it has across countries and also corporations.  You mentioned Working Group 3, which, as you alluded to, Sasha ‑‑ and you nailed it ‑‑ is absolutely a committee looking at work going on across AI: what are the salient ethical and human rights concerns, and how do they map to the work we are doing at the technical level, not just in the committee but broadly across the ISO community.

