IGF 2019 – Day 2 – Convention Hall II – Applying human rights and ethics in responsible data governance and AI

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> OLGA CAVALLI:  Oh, yes, I can talk to you.

>> VLADIMIR RADUNOVIC:  Thank you for waking up so early to come up for this crazy session.  I'm sure it will be worth it.

>> OLGA CAVALLI:  I am Olga Cavalli.  I come from Argentina and we have our dear friend Vladimir from Serbia.  You know Vlad from DiploFoundation.  I'm the academic Director of the South School of Internet Governance.  Thank you for being with us this morning.  The idea is to have a very interesting and interactive session with our experts here, with our panelists and of course with you.

The main issue that we will talk about today is applying human rights and ethics in responsible Data Governance and artificial intelligence.  You may recall we organized a previous session related to this one at the last IGF in Paris, in the UNESCO main venue, so this session intends to be somehow a continuation of what we discussed at that time.  We have only two hours.  We have eight experts, by the way -- we have seven experts here and another expert on the stage.  The idea is to go through some questions we have prepared for our panelists and for you, the audience, and perhaps in two hours we end up having more questions and more ideas that you can take home and develop with your own group, with your own stakeholders, with your own Government.

So if we get that at the end of the session, we would be very happy.  Let me tell you some comments about what this session is about, applying human rights and ethics in artificial intelligence.  We all know that artificial intelligence can contribute to addressing some of the world's most pressing problems, but it can also lead to inequality and problems; as with all the technology we have used since civilization began, everything can be used for good or for bad.  Artificial intelligence can make people's lives easier, but it can also generate discrimination and bias.  We all know several examples of that.

How do we develop and use artificial intelligence in a human centric and trustworthy manner?  That is what we want to discuss with you today.  How do we make sure that the data used by artificial intelligence is reliable, accurate and complete enough so as not to generate discrimination?  The use of data is also one of the main themes of this IGF here in Berlin.  How do we avoid privacy and Data Protection breaches in accessing and processing the large amounts of data that are at the core of artificial intelligence?  How do we make sure that there is transparency and accountability in the algorithms used in artificial intelligence?  Where are we now?  Where are we going?  How are human rights and ethical frameworks related to, or being used in, the development of artificial intelligence?

So we have seen several ethical frameworks developed in different organizations: OECD, ITU, the European Commission, IEEE.  We will review them in a moment so you can have a sense of what they are about, but how do they relate to the developments that are taking place nowadays?

Do you want to add something to that?

>> VLADIMIR RADUNOVIC:   I think you can present the guest hosts.

>> OLGA CAVALLI:  We have you of course, and we have a fantastic panel of distinguished experts, and I will briefly introduce them.  We have Ms. Lisa Dyer, the Director of Policy at the Partnership on AI.  Welcome, Lisa.  We have Ms. Carolyn Nguyen, the Director of Technology Policy at Microsoft, welcome.  We have Sarah Kiden, Ph.D. student at the University of Dundee.

Reverend Augusto Zampini Davies, the Director of Development and Faith of the Holy See, the Vatican, welcome.  Mr. Mina Hanna, from the IEEE.  Ms. Peggy Hicks, Director at the United Nations Human Rights Office.  And Mr. Yoichi Iida, from the Ministry of Internal Affairs and Communications of Japan.

>> VLADIMIR RADUNOVIC:  This promises to be quite the discussion, with another guest we're not going to mention now.  We'll keep it as a curiosity.  Looking back at the discussion we had last year in Paris, a couple of interesting questions and food for thought came out of it.  We had a sort of takeaway that code is a reflection of the people who have coded it and influenced it, and that ethics changes over time.  It changes over geographical borders.  The concept and understanding of ethics is not the same in different regions or for different generations, so is there such a thing as a global ethics?  Then the next question was whether there could even be a single set of principles we agree on at a global level; that would mean we need a global approach and single guidance for everyone who is developing and using AI.

Then, is professional ethics needed?  There were references to doctors or lawyers, who have their ethical codes, and one of the open questions we're going to throw back to the panel later, and to all of you, is whether the engineers, the technical community, have or should have an ethical code of some kind as well.

One of the principles or points raised last year was also that we need fairness as one of the first points in the approach to responsible AI.  But what does that mean?

It means that AI should treat all people equally.  How do we get there?  Is that possible?  Then the interpretability of the algorithms: there's a huge discussion about whether we can actually interpret the results, explain how the AI came to certain results and decisions.  Is there human responsibility and accountability for any decision taken by AI?  That is a huge question we started discussing when a smart car killed the first person.  And then, are the laws actually sufficient, or are they becoming too slow compared to the pace of development of AI?  Do we need ethics or some different approach, principles which would go hand in hand with laws or even maybe replace them?  Should we rethink the whole approach to the legal setting of AI?  And lastly, instead of the laws, or building on them, some sort of principles like the UN Declaration on Human Rights or something similar might become the lead instrument for responsible AI.  I'll stop there.  I'm sure you want to run through the many initiatives that we had.

>> OLGA CAVALLI:  As I explained before, and as Vlad was explaining, there are several initiatives to review the ethics related to artificial intelligence.  I will go briefly through a very nice graph that Vlad has developed for us this morning.  We have divided them into three main categories:  human centered, responsible, and mechanisms.  If we can go to the first one, please.  Human centered covers lawfulness, respect for human dignity, fair artificial intelligence, safe and secure artificial intelligence.  I see people taking pictures; I think we can share this with the audience afterwards, right?

Artificial intelligence should serve people, the society and the planet.  It should respect human rights and freedoms.  It should be ethical and it should benefit society.  So we have gone through them; most of the different initiatives have similarities and some common points that are summarized in this graph, especially this part, those that are human centered focused.

If we can go to the other category, please.  Responsible artificial intelligence: there should be accountability, transparency and responsibility in developing artificial intelligence.

There should be transparency; artificial intelligence should be a solution, not a problem, for society and for people.  And it should be explainable, easy to understand, so that everyone can capture the benefits.  The other category is mechanisms: which mechanisms are there for the development of artificial intelligence?  Prepare the workforce, so people should be trained for that.  Humans in control over artificial intelligence, and not artificial intelligence controlling humans, like in the movies we have seen for many years.  Regulated development and use of artificial intelligence: should there be regulation, how much regulation do we need, how would it be developed?  Dialogue on artificial intelligence: there are also several references to the importance of a multistakeholder approach to discuss all the things related to the development of artificial intelligence.  Cooperation between existing initiatives, so the different initiatives should be somehow linked and not work totally separately from each other.  Focus on privacy, safety and security, especially in relation to the data that is collected and managed.  Rights awareness around artificial intelligence, so that everyone knows, not only Governments, companies, Civil Society and the technical community.  And support for artificial intelligence being a benefit for society.  So we have gone through all of the initiatives that are related to ethics and artificial intelligence, and these are the main categories and the main points that all of them focus on.

So we wanted to summarize them for you before we go to our three main questions.  Should I read them?

>> VLADIMIR RADUNOVIC:  I think so.

>> OLGA CAVALLI:  We have thought about three main questions that we will present to our experts and of course to you, the audience, one by one.  We have about half an hour for each of them, but we will manage the time so we have time for interaction with you.  The first one is:  What is trustworthy -- that's a difficult word for a Spanish speaking person -- trustworthy and responsible artificial intelligence, especially with regard to Data Governance, which is, as I said, one of the themes of this IGF in Berlin?

The second question is:  What is the role of human rights legal instruments and ethical frameworks in ensuring trustworthy and responsible Data Governance and artificial intelligence?  Are there any lessons learned from existing frameworks?  It's interesting that we have reviewed the main issues being developed around ethics for artificial intelligence in different organizations and debate spaces.

And the third question is:  How do we cross the bridge between defining human rights and ethical frameworks and implementing them in artificial intelligence systems and the Sustainable Development Goals?  What is the role of different stakeholders?  And how can they work together, as I said before, to achieve the best results for all of us?

So go for the first one?

>> VLADIMIR RADUNOVIC:  Before jumping to the first question I just wanted to remind you of the rules of the game, or the ways to interact.  Probably the best music you can hear in the morning to relax is jazz, so that is what this session is going to be.  We're going to jazz a lot, but we need your contributions today.  We'll make sure the questions are provocative enough.  You have a number of ways to interact.  The microphones are there.  You can queue.  You can sit in the front row as well.  There will be a screen later on, besides this one, which will enable you to post your comments, but please be active and stop us, because yesterday we had the preparation session with these people and the prep session took an hour and a half or even two hours, so I'm sure we can continue much longer.  Don't be shy.

One of the first inputs we want to hear from you is actually sort of the last question of the day, as well, which is:  What is the level of responsibility of actors for responsible AI?  We know it's a shared responsibility, that's a buzzword, that's fine, but who takes what level of responsibility in this constellation?

You can use the devices you have over there smartly.  Go to www.menti.com.  While you're doing that, I would like to start with the first question.  So again, the first policy question is:  what is trustworthy -- equally complicated for someone from Serbia -- what is a trustworthy and responsible AI, especially with regard to Data Governance?  In a way we're trying to decode what we are talking about, and I'll start, as we agreed, with a Tweet -- an extended Tweet, if you wish, but not a blog, please.

How do you define responsible AI?  What is responsible AI?  I'll start not from the special guests but maybe from the left.  Yoichi.

It is on, I think.

>> YOICHI IIDA:  Thank you very much for the question.  For me personally, responsible AI is a kind of AI designed on the concept of human centeredness.  We started the discussion about AI development three years ago in the international fora, and as all of you know and recognize, we had discussions in the OECD and also the G20, and as was just shown on the slide in the introductory presentation, the Governments agreed on the AI principles, which include inclusivity, human centeredness, transparency, robustness and accountability.  These are the basis of responsible AI or trustworthy AI, and I believe we more or less share a common understanding.  So for me, responsible AI is AI which implements these principles, and we are now discussing how to realize these individual principles.

>> VLADIMIR RADUNOVIC:  That is the whole final discussion, how to come to that, but this was quite useful to outline the main principles, including human centeredness.  Sarah?

>> SARAH KIDEN:  I will go back to the definition of trustworthy from the English dictionary: something we should be able to rely upon as honest and truthful.  So when I'm looking at AI, these will be the key things:  Do you rely upon it?  Is it honest?  Is it truthful?  AI relies on huge amounts of data, so from data collection to processing to visualization, how are we ensuring these things?  To mention some of the things, I actually like the European Commission's definition and how they look at trustworthy AI.  They talk about lawfulness, meaning it should respect laws and regulations; respecting all the ethical principles and values; and robustness, so it's technically robust but also does not ignore the social context.

>> VLADIMIR RADUNOVIC:  There are a bunch of others, but honesty and truthfulness -- that's an interesting approach.  Augusto?  I think it will switch on automatically.  Try with the other one.

>> AUGUSTO ZAMPINI DAVIES:  Thank you.  For me, responsible AI is one that merges the two concepts in "artificial intelligence".  Artificial is on the technical side, and intelligence is more on the human or anthropological side, because intelligence is about learning, learning how to take decisions, and we learn how to take decisions not merely artificially, not merely from a technical perspective, but in a comprehensive human context.  For me, a responsible AI is one that develops the technical aspect alongside the human aspect, which is a common aspect; it's not individually human, because we are social beings.  Or, put differently, an irresponsible AI would be one focused merely on the technicalities, on the improvement and the ability of the technology, disregarding not just basic human values but the basic human context, such as, for example, the collective good, how we work together, how we resolve our conflicts together, how we sort out our illnesses, our health.

So when you ask poor people -- I'm a Priest, so when you ask people, let us pray, what do you want to pray for? -- the first thing they say is health.  Well, guess what?  Artificial intelligence affects the way we serve health.

The second thing they pray for is their beloved people, and particularly those they have conflict with.  Guess what?  AI is involved in conflict issues.

So a responsible AI includes the artificial together with the intelligent aspect, and they move along together.  Therefore, from a governmental perspective, from a Data Governance perspective, a responsible approach is one that helps developers, companies and users to go in this direction.

>> VLADIMIR RADUNOVIC:  Quite an interesting twist of what is an irresponsible AI but we can get back to that later.  Mina?

>> MINA HANNA:  Well, I will probably take the path of the philosophical argument.

>> VLADIMIR RADUNOVIC:  From the IEEE?

>> MINA HANNA:  We're representatives of the technical community, but I will take the philosophical path.  I think if you try to define what is responsible -- what I argued yesterday is that we have to look at what is irresponsible and then deem the opposite of that as the outcome that you want to get to.  So if you want to define fairness, you define what is unfair and say, okay, by contrast the opposite of that is what could be fair.

So in the context of the question that was asked, I think what is irresponsible -- and there was a digression in the conversation on the definition of AI -- is anthropomorphizing AI, projecting human properties or characteristics onto the AI.  That leads to moral outsourcing, meaning that if there are decisions that are deemed to be harmful to a party, because they have been the subject of the use of an automated decision-making system, AI broadly defined, then you say that the responsibility doesn't fall on the party that designed the tool, or the party that curated the data, or the party that designed the algorithm, and so on, but that it falls on AI.  "AI made me do it", kind of an argument.  That's irresponsible, in my opinion.

And I think that should be kind of a platform, a springboard, for all our conversations on how we govern.  If we have that as a backdrop for our conversations, for our deliberations on how we write laws, how we write ethical principles, how we define that entire conversation, we know that artificial intelligence is nothing but an artifact -- that's the origin of the word artificial.  It's something that we design, that we create.  But if we assume that it's more than that -- and that's part of the responsibility of the narrative, when we just say robots, whatever -- it doesn't have the same agency, it doesn't have the same cognitive properties as humans, it doesn't reason like humans.  So it's very important to define what should be responsible, and that's actually our responsibility in defining and communicating about AI.

>> VLADIMIR RADUNOVIC:  Thanks for taking the philosophical view.  If I get it correctly, responsible AI is about the responsible people behind it, and we have the artifacts there, but we'll get back to the artifact at the end of the session.

Lisa?

>> LISA DYER:  I want to take it a little bit philosophical as well.  I think actually all of the actors up there should approach any work they're doing in this space with empathy and compassion.  Empathy and compassion for the people who will be affected by AI, those who are frightened about losing their jobs, those who are not invited to the table to have these types of discussions.  Empathy and compassion for those who face bias as a result of some poor design decisions.  So in a short Tweet I would just say empathy and compassion are fundamental to responsible AI.

>> VLADIMIR RADUNOVIC:  Thank you for Tweeting, not blogging.

>> I don't think everybody's on Twitter here, so I'll try to stick to the rules and say for me, responsible AI is something that my grandmother understands, my teacher can explain, and I'm not worried about it being applied to my children, and that means it has to be transparent, understandable, and accountable.

>> VLADIMIR RADUNOVIC:  Good Tweet.

>> CAROLYN NGUYEN:  So what can I say that hasn't been said before?  For us, responsible AI is really simple.  It is about bringing technology to the table that's human centered, designed in a way that augments human capabilities and does not replace them.  It's interesting that the term "artificial" puts the emphasis on the wrong thing, as if this is something that is always going to be compared to humans.  We think of artificial intelligence really as computational intelligence, and it's really people who are behind it, so the question is how you make that technology realize its potential and be adopted more broadly.  We then use the term trustworthiness.  At Microsoft, we created the Center on Trustworthy Computing at the beginning of the 2000s.  Why trustworthiness?  Because it's important to bring technology to the table that earns the trust of the people.  We don't tell you, you need to trust us.  We need to demonstrate what the properties of a technology are -- in this particular case computational intelligence -- that will foster trust and enable its broader adoption.

One thing interesting that hasn't really been talked about this morning is the use of AI for Good, right?  There's lots of conversations out there about negative applications of AI, so we put a challenge out there in terms of AI for Good, in terms of how AI can be used to address sustainability issues, accessibility issues, human rights issues, and how AI can be a part of the solution, as well, and I think that's a conversation that's really not out there, so responsibility and trustworthiness.

>> VLADIMIR RADUNOVIC:  I got the impression that responsible AI would be the AI we would love to have in our family, part of the family we can really be close to and trust, and all that's been mentioned.  Olga, do you have reflections?

>> OLGA CAVALLI:  I took notes as our dear panelists were speaking, and I got some words:  honesty, inclusion, fairness, compassion, empathy, transparency, accountability, and trust.  So nothing is related to technology.  You see, everything is related to how the humans approach the technology.  I think it's interesting to see that what we are trying to find is a way to use the technology in the best way for humanity and for people.

And let's see how the technology unfolds, and whether we really can achieve that and use artificial intelligence for good.  Should we see if we have some questions from the audience?

>> VLADIMIR RADUNOVIC:  It would be interesting to hear from you as well what you think.  So what is responsible AI for you?  How would you define it?  I forgot to say at the beginning that we also of course have the remote participants -- or rather, we are remote and they're in the right place.

And June is there to help us.  Is there any reflection?  Not yet, okay.  Take the mic and please introduce yourself clearly so that the transcribers can take it.

>> Good morning.  Thank you, first of all.  My name is Deborah.  I come from Italy and I'm a trainer in human rights education, and I'm here with the Youth IGF and as a youth Delegate from the Youth Department of the Council of Europe.  I was surprised that when you were giving out the definitions, people were focusing on defining trustworthy or responsible; nobody really gave a definition of what human rights are.  Sometimes when we are in this kind of environment it's like, okay, we have to care about human rights, it's the right thing to do, but we kind of forget that human rights are universal -- really difficult, I'm Italian, I've got my problems too -- and that they are interdependent and inter-related, so sometimes they come out on the losing end of all these speeches about technologies.  And finally, for me, we talk about artificial intelligence as something that is separate from humans, but it's human-made.

So artificial intelligence can be educated -- we educate it with data and by working on it -- so why, among the things we can educate artificial intelligence with, can't we introduce human rights principles?  And how can we find ways to do that?  When it comes to responsibility, the European Commission has this project called the LEDGER Project that calls for young tech experts, experts in general, to submit projects that will help in the creation of a more sustainable Internet, but there are only 32 funded projects or something like that.

So why can't we make the people who have the power, the economic power, more accountable for putting money into these kinds of initiatives?  Sometimes when we talk about these things, they seem so difficult to do -- just put the money in.  There are people who are experts; use them, and do things.  It's not so complicated or so philosophical.  It's something that affects all of us, all our communities, and it's something that is doable.  So that was mine.

>> VLADIMIR RADUNOVIC:  Thank you, and you actually raised something we scratched the surface of yesterday evening, and we agreed we're not going to raise the funding.  I'm kidding.  We're going to raise it in the last part, because we probably need about five hours of discussion on funding and the ethics of funding.  We'll get back to that.  I don't know if anyone wants to react quickly to this one.  The human rights and principles -- we mentioned that as one of the takeaways from last year's session: that something encoded in the human rights approach actually reflects what we're talking about, the human responsibility in all this.  Does anyone want to take it?

>> PEGGY HICKS:  Thanks for that, and coming from the UN Human Rights Office, it's nice to hear it from the floor as well.  I think, as the introduction put forward, there's a lot of discussion about ethics and human rights that pits them against each other in some way, and I just wanted to start this off by saying that I think the commentator is absolutely correct:  human rights is part of the discussion the same way ethics is part of the discussion, and they each have fundamental roles to play.  Ethics, as my friend Ed Santo puts it, is about what we should and should not do.  Human rights, legally binding and universal, are about what we can and cannot do, and we need the human rights framework to help us do the ethical things that we want.

We've got ‑‑ and there's a wonderful chart from Berkman Klein that shows the interface of the incredible number of ethical conversations and principles that have been developed around AI, but the reality is, we're still all in this room wondering:  How can we make this real?  What impact are all those principles having on the ground?

And that's why we're talking about regulation, and ultimately, we need to look at human rights as that foundational bearing that can allow the ethical discussions that we've had to be realized in real time, for real people, in the ways that will protect human rights on a day‑to‑day basis.

>> VLADIMIR RADUNOVIC:  What comes to my mind is that we keep talking about the relation to human rights and the principles, but I don't see any reflection on how the economic or geopolitical aspects impact that.  We might end up with a nice wish, but the reality for many other human rights is not really bright.  I want to ask Yoichi or Carolyn to comment on the economic and commercial aspects: are there any principles that are impacted by the economy and geopolitics, in a way, if you wish?  Yoichi?

>> YOICHI IIDA:  Thank you very much for a very difficult question.  From the Government perspective, we understand there should be a kind of universal foundation for the ethical aspects or human rights aspects that should be built into AI development, but we often find those aspects are also accompanied by diversity coming from culture, history, or societal conditions.  So when we come up to the very high-level standards, when we look at the ethical aspects or human rights aspects, we maybe reach a kind of common understanding:  what are the human rights?  And what are the ethics?

But when we go into more detail, we always face difficulties talking about differences and diversity in culture, history, and a lot of factors coming from societal conditions.

So I cannot give the answer on how to promote the economic investment to promote that aspect, but from our point of view, yes, there exist a lot of difficulties when we look at the ethical and the human rights aspects.

>> CAROLYN NGUYEN:  One way to look at that is how we make sure that the potential and the benefits that will be brought about by AI can be enjoyed and spread equally across society.  From that perspective, there are two different ways to look at it.  One is to ensure that everyone has the appropriate skills and training in order to participate in this new ecosystem.  We've been doing a fair amount of work on training, all the way from elementary school education in terms of STEM -- not just mathematics et cetera, but also the ability to do analytical thinking and, going back to something Lisa said earlier, empathetic problem solving, because those two go hand in hand.  It goes back to the human aspect of implementing the technology, so we're doing a fair amount of work on training and lifelong learning in that respect.

A second aspect, when you start to look at the question in terms of making sure that the benefits are shared inclusively, is to make sure that the technology, as well as the data underlying the technology, can be shared.  From that perspective we've made a number of datasets available for research and otherwise.  As a specific example, we're working with the OECD, as they are formulating the AI Policy Observatory, to share data.  There are two sets of data: the Microsoft Academic Graph, looking at publications, where collaborations are taking place, where innovation is being spread; and secondly the LinkedIn economic data, which looks at economic opportunities, training, migration, talent, skills development, supply and demand, et cetera.

We've also launched an initiative to start a conversation on how to share data more broadly.  The conversation here is that you talk about data sharing, but there are issues with organizations not knowing what data can be shared, what data should be shared, and what are some of the mechanisms et cetera that can enable that conversation to occur more holistically.  That's another part of responsible Data Governance that isn't really discussed more holistically, in an integrated manner.

>> VLADIMIR RADUNOVIC:  Thanks for raising the question of the data.  That brings us back to the responsibility of the humans.  I'll start with Augusto and then Mina.

>> AUGUSTO ZAMPINI DAVIES:  Thank you, Carolyn, for bringing up the sharing point; that is really, really important.  But one problem we're facing with data is the monopolization of data by big tech companies, so we don't want to end up with a division between data owners and data slaves.  Data is the fuel of AI.  If we don't discuss data when we discuss ethics, we are done.  And the question that many companies, many big tech, are asking is not just how to share or what to share, but are there benefits of AI -- could AI benefit the less powerful groups or not?

Because this is the problem.  I will give you an example so I'm not just talking, because your question was about economics.  AI is replacing jobs, or changing them, the whole dynamics of jobs.  Now, jobs for many people are not just something that we do in exchange for a salary.  A job is part of our vocation.  We are called to work; it is part of our DNA.  We develop as human beings through jobs.

Now, given that AI is going to benefit lots of companies but with fewer jobs, who is going to pay the cost?  Who is going to pay the cost of what economists call the transition?  And this is very important, because it's not just about data sharing or how we share; if the benefits of this data go to some and not to everyone, are those some going to share the cost of the transition while we move into a different way of understanding jobs and what we do with the shared time?

>> VLADIMIR RADUNOVIC:  So one of the goals of this session which we forgot to say at the beginning is to open more questions, not to give the answers.  I'm glad we're going in that direction.  We have a couple more minutes so we can pass it.

>> OLGA CAVALLI:  I had a follow-up question that came out of the blue to my mind.  Equality, training, sharing -- and coming from a developing country, I wonder what your impression is about how the developing world could catch up with this change.  Because Peggy said, how can we make this real?  How can we make this real for everyone, so that all of humanity really captures the benefit?  But sometimes -- and those coming from developing economies might understand what I say now -- the urgencies of our reality push these technology things aside, and they don't get the focus that should be given to that development.  So maybe Augusto or Sarah, who come from developing regions.

>> SARAH KIDEN:  I would like to speak on the area of skills and training, but from the point of view of inclusion.  I just completed a piece of research with Research ICT Africa looking at gender and AI in Africa.  When we took a look at AI in Africa, looking at who is part of the development teams, there's a problem:  you hardly find any women.  So we went a step lower and said, let's look at universities, and for the case of Uganda I was looking at admission lists of people who are joining computer science, and you'd find sometimes two women in a class of 40 or 50.

So I would like to request that in your skills and capacity‑building initiatives that you also be inclusive and ensure that all voices are still included all the same.  Thank you.

>> VLADIMIR RADUNOVIC:  One of the good things is we have a gender balance at least.  So I'm happy.  One of the rare sessions here.  Augusto and we'll get back to Lisa in a minute.

>> AUGUSTO ZAMPINI DAVIES:  Two quick questions -- sorry, answers, or counter questions.  One is, yes, about education, but this is a discussion I have with my mother all the time.  Every time we face a problem, we say the solution is education.  Well, yes and no.  The Second World War was initiated by the best educated country at that time.  What kind of education?  And education for whom?  In developing countries, as we know, in South America, Africa, some parts of Asia, education is lacking, and we're talking about basic education.  Talking about educating people on AI is too sophisticated.  How is AI going to help those people?  For example, a piece of good news is the learning of languages through AI in poor countries.  But the question is, what kind of education, and how is AI going to help?

The second counter question would be that developing countries normally are desperate for income to sustain their policies.  Now, it is happening already -- I won't say which country -- but it's happening that some powerful countries come to less powerful nations with a plan based on AI and say, here it is, this is extraordinary.  This is a new technology, and it's going to allow you to control your citizens, because it's about citizen control.

Well, and of course, Governments in the developing nations, they love that but is this what we are aiming at?

>> VLADIMIR RADUNOVIC:  Glad you didn't mention the country but a couple have come to my mind.

We have input from the remote participant and then I'm back to you.  June.

>> JUNE PARRIS:  We have a question from the Johannesburg remote hub.  What role should big corporates play in ethical handling of data?  And what is the common or current use of data collected by big corporates?

>> VLADIMIR RADUNOVIC:  Thank you, June.  So in the third part, and good morning to the hub.  In the third part we will move more on to actually who should do what, rather who should have what responsibilities.  We have some reflections over there, tech sector, and Governments dominating.  I want to take a few quick reflections.  To Lisa and Mina and then to Peggy.  Lisa?

>> LISA DYER:  I wanted to go back to this conversation about geopolitics and economics.  In my experience, both of those areas of expertise always come down to someone loses and someone wins.  It's a zero sum game, and I fear that unless we do some smart things now in digital inclusion and in AI, we're going to be in a similar someone-loses-and-someone-wins situation, and I don't think it has to be that way.  I think we can create an "and" scenario where people have opportunities.  There have been reports about including women and marginalized communities in these conversations.  There have been calls for diversity and inclusion from Tunis, which established the IGF, from Secretaries-General, from other senior officials, and we are still not quite getting there.

And without having that diversity and inclusion -- the people from South America, Latin America, those from Africa and around the world, in the room -- we are going to create that zero sum situation.  So I think really focusing on "and" solutions, and making sure that everybody has a voice in this, is important.  And it's going to take us until 2030 to get there, because those are difficult conversations to have, about why, and the harms, and what we can do to come up with those solutions.

>> VLADIMIR RADUNOVIC:  I think you're optimistic with 2030 but let's hope, with the agenda at least, Mina?

>> MINA HANNA:  I was going to address the question from the commentator from Italy, but something really, really quick first, just to confirm:  I don't think, from my perspective and from what I know, that the conversation on ethics and the conversation on human rights are in any way disjointed or at odds at all.  This might be my shameless mention of the work that we do, but take the IEEE Global Initiative, for instance:  the work we have produced, the standards and certification and the creation of the principles that should govern AI, are based on three pillars, and the very first pillar is the advancement of human rights as agreed upon by everyone -- OHCHR, the UN, everyone.  So that's the very first pillar:  AI not only has to be human centric, but it has to be built so that it advances the fair exercise of human rights.  And the second pillar is one of the key pillars of human rights as well, which is agency and political self-determination.

That was two.  Now, the principles are transparency, accountability, trustworthiness, fairness and so on.  The pillars are basically the platform, the basis from which we are building these sorts of ethical principles.  I would be remiss if I did not mention the UNGPs, which may have been mentioned in conversations yesterday in different contexts, but in the Global Initiative's ethically aligned design work, our policy recommendations to Governments, the very first part of the recommendations is that they are built on the UNGPs, the UN Guiding Principles on Business and Human Rights, known as the Ruggie Principles.  This is the very first recommendation.  We said you have to look at those UNGPs.  They have to be the Guiding Principles for how we should architect, perhaps, how we should define the rules of the market, how we should define the rules for capital access, for access for communities that are not on broadband and the network, how businesses should look at the people they serve, how we should try to be more focused not on shareholder value but on stakeholder value, and so on and so forth.

So these things are there.  They are baked into many sets of principles -- and I'm not going to say it's only the IEEE principles; I'm pretty sure the Partnership on AI, for example, or the principles that have been put together by the High Level Expert Group of the European Union, and the OECD, and so on -- all that to say, they're not at odds at all, at all.

But the one thing I would say for sure is that to achieve that vision of 2030, I think we're going to have to fight like hell.  We're going to have to advocate a lot.  The principles are great, and we've done an amazing exercise to come up with them, but -- because you mentioned the communities that are underserved, communities that do not have access to technology -- I'll point you to the statistic that I think the Secretary-General quoted yesterday:  he said that only 2% of women in South America have phones that have access to broadband.

Now, if you were a woman -- sorry, not South Africa, South America -- if you were a woman and wanted to build a business and you don't have that access, how are you going to get capital?  How are you going to build a team?  How are you going to do any of that?  What you need here is fighting like hell:  doing diplomacy, working with Governments, bringing people to the table, not just cajoling, not just incentivizing, but really fighting like hell to make it happen.  Hopefully 2030 will be when we get there.

>> VLADIMIR RADUNOVIC:  The important thing you outlined, which is a good introduction, is that we already have principles in place.  Let's see how we can put them into practice.  But closing the discussion on "what", I'll just give the final word to Peggy.

>> PEGGY HICKS:  Thanks.  It's actually going back to the question that you asked, but fortunately it brings up the two comments just made, on inclusion and on the Guiding Principles.  You asked a very fundamental question about the economic side of this.  I mean, we all want it to happen, but if it's not happening, why isn't it happening?  I think it goes to this issue -- I wouldn't say funding, but incentives.  What are the incentives that currently exist?  The incentives that currently exist do not actually fully support the concepts of responsible AI we just talked about, and what we have to do is change that equation so that the companies that do the human rights impact assessments, that do the human rights due diligence behind how they're deploying AI, have a competitive advantage, because those things actually help them in the marketplace as well.  And part of how we need to do that is for Governments to actually take on board the UN Guiding Principles, which place responsibility on them with regard to how they regulate business, and for business, under the Guiding Principles, to fulfill its responsibility to respect human rights -- and we can get into the detail there later.

But ultimately, we can't have the companies that are actually bringing in human rights and ethical principles in the way that they should be put on the back foot by those charging ahead without regard to these things.

>> VLADIMIR RADUNOVIC:  Excellent intro into the second part so we're turning from what into how.  We have two questions over there so noted.

Before I pass the floor to Olga, you can probably move to the next Mentimeter.  This is the place for any of your comments.  If you're too shy to take the floor and you simply want to share not a Tweet but a couple of words, you can just post it there, and we'll be getting back to the questions over there.  I guess we can take a minute of silence because there's so much of ‑‑

>> OLGA CAVALLI:  One thing that is working very well is that our experts are also asking questions, so we have more questions than at the beginning, which is somehow the expected outcome of this session.

And I want to stress one sentence from Peggy:  How can we make this real?  How can we get this to our Governments, to our organizations and make this real?  How can we take this into concrete actions?

And the second question is:  Are we taking some questions ‑‑

>> VLADIMIR RADUNOVIC:  You can start the next part.

>> OLGA CAVALLI:  Mentimeter?  Not yet.  The second is:  what is the role of human rights legal instruments and ethical frameworks in ensuring trustworthy and responsible Data Governance and artificial intelligence?  Are there lessons learned from existing frameworks?  Can we use the principles, ethics and human rights that we have now?  Or do we have to revise them, or replace them?  Can they apply to artificial intelligence as we have them today?  Or do they require perhaps some multistakeholder dialogue to upgrade them or adjust them to the new reality?

How does the law apply to unexplainable machine learning and artificial intelligence acting on its own?  So this is the next group of questions.  I don't know, maybe Peggy you want to start.

In relation to that, what's your vision of the existing framework for human rights?  Does it need a revision?  Does it apply as it is?  Should we create a space to debate it again?

>> PEGGY HICKS:  I'm glad you asked it in that frank way, because it's something a lot of people think about:  some of the questions we're dealing with are so novel and so unpredicted by those who created the framework of universal human rights, starting in 1948 with the Declaration, that it's a valid question to say:  is this really sufficient for where we need to get?

And the answer is:  Yes, largely it is, and to the extent it isn't, we haven't yet gotten to the stage where we even know where those gaps are, because we haven't done the hard work of applying the existing framework.  Let's do that work first and then if there are areas where we need further development through doing that work, we'll figure out what those are and we'll be able to take those additional steps.

But there's a lot of ‑‑ there's 70 years of experience of unpacking things like the right to privacy and concepts of human dignity that can be brought in, in a very practical and effective way, I think, to answer many of the questions we have regarding artificial intelligence and the UN Guiding Principles as we've talked about is a real starting point for that because it does place responsibility on both states and companies for how they answer these questions.

And one of the things we've heard from companies is:  we know we're supposed to be applying the Guiding Principles, but we are a bit uncertain about how to do that in the tech space.  So here comes my self-promotional part.

We have a project that's really working with companies to look at that, to say:  the Guiding Principles and the universality framework are applied in the apparel or extractive industries, but let's talk about what it means to do human rights due diligence, to do a human rights impact assessment, for a new piece of technology or a new application of technology within the world that we now live in.  Let's talk through how a responsible company behaves in terms of linkage to harms downstream:  what does causality mean, what does contribution mean, for the questions we're asking on things like facial recognition?  So we're working with companies based on practical scenarios to work through those questions.  What we find when we do that is that there are a lot of answers already baked into the existing framework, and that that fundamental question of whether we need something more is at least something we can wait to get to.

>> OLGA CAVALLI:  It seems that the framework is there, and from your comments, the companies are willing to work with you in trying to use those principles in designing the technology.  What about Governments and other organizations, like the Vatican?  How are Governments and companies implementing this as they develop the technologies?  Maybe they already have that concept included, like human rights by design, something like that?

What about Governments and other organizations that are perhaps ruling or designing regulations and laws?  Maybe Yoichi or Augusto.

>> AUGUSTO ZAMPINI DAVIES:  I'd like to follow up, because yes, the human rights framework is necessary, and yes, we're not implementing it fully, but given that AI is such a novel technology, we are in new territory.  For example, look at how it is influencing the way we vote and our democracy.  So we need a fine ethical approach, we need some guidance for that.  Last week we had a similar discussion at the European Union, and one expert, coming from the companies -- and maybe Carolyn wants to comment on that -- said that regulating AI is like regulating the ocean.  It's impossible.  But what can be regulated is the incentives, the benefits, the design, et cetera.  And something we are working on is about that:  why are we insisting on the data?  AI is designed and based on data.  All the data is about the past, and the past brings with it all our problems, our weaknesses, our biases.  Look at Silicon Valley:  80% of the designers are male, white people.  So are we expecting them to bring a non-biased view on women?  Well, certainly not.

But I can talk about other biases as well.  So how are we going to look at the future, not just at the past?  Artificial intelligence is built on data that we have from the past.  But we need to think, given that most of my colleagues agree that we have to be human centered:  what does it mean to be human in a collective environment or organism?  Everybody's using the word ecosystem in technology now, and in the Vatican we laugh about that, because an ecosystem is about life, our natural life.  What does it mean to be human in a natural life, in a connected world with conflicting values?  Human rights are immensely helpful because at least we have already agreed on them, but how can we find a couple more ethical principles that can guide us towards an inclusive society, towards justice, and towards countering the imbalance of power?  This is what we are facing:  data from the past looking to the future, and an imbalance of power.

Those who are developing artificial intelligence, they have a lot of power.  How are we going to ensure that the power is used for the benefit of all society and not to increase their own power?

>> OLGA CAVALLI:  Yoichi, perhaps some Governmental perspective from your side.

>> YOICHI IIDA:  Thank you very much.  My feeling is that when we started the discussion on artificial intelligence, we recognized the very huge potential impact brought about by artificial intelligence on society and the economy, which is not limited to one country but extends to the whole world.  So what we believed was that we need to make the best use of that benefit, that impact from the technology, and we needed to look at the positive side.

So the real intention of the discussion was to maximize the benefit from artificial intelligence, not to regulate the development of the technology.  We first started a discussion at the Governmental forum called the G7, because it was something in front of us, and we started the discussion among that kind of closed group of countries, but even at that time we paid a lot of attention to listening to the private sector, Civil Society and multistakeholders, so we always had a chance to communicate with multistakeholders and get their opinions involved in our discussion.

And after we started the discussion at the G7, we expanded the participants, with more countries and with industry, Academia and Civil Society joining the discussion, and we also brought the discussion to the OECD and the G20 to get broader geographical coverage.

So from that point of view, I think the UN or the IGF, for example, is one of the ideal fora to discuss that kind of new technology and extract the best benefit through multistakeholder discussion.

So that's what I feel at this moment.

>> VLADIMIR RADUNOVIC:  Olga, I suggest we get back to the people there.  We had two, three ‑‑

>> OLGA CAVALLI:  We have several.

>> VLADIMIR RADUNOVIC:  Inputs.  I'll first give the floor to the gentleman there and the lady and the gentleman here and in the back.  We have a remote, as well, right, June?  And please introduce yourself.

>> Hello, everybody.  Thank you very much.  I'm from Taiwan University.  First of all, I should thank you for organizing this important and valuable discussion.  I think that we need to establish a special norm package on the social responsibility of AI companies.  We can call it CSR for artificial intelligence, a global CSR framework for AI companies.  I strongly believe the IGF community can play a vital role in this regard.  Also, we need some CBMs, confidence building measures, from companies in this regard.  I think the multistakeholder approach can help us here.

We need global norms on the responsible behavior of AI companies, and my suggestion is that it could be on the Poland IGF agenda.  Thank you.

>> VLADIMIR RADUNOVIC:  Thank you.  It's an interesting recontextualization of norms and CBMs to AI and to the relations among us rather than among countries.  We have a comment there, and then we'll go to the remote.

>> Thank you very much.  Thank you to the moderators and the panelists for this very provocative and substantial discussion.  My name is Maricela Munoz.  I am a Governmental Representative from Costa Rica.  I'm also a MAG Member, and actually my question comes from my field of work.  I am part of the Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems, so this discussion and dialogue regarding the application of the current human rights framework -- and I would include the IHL framework as well -- is really relevant.  In that regard I concur with Ms. Hicks that we have a very rich human rights framework that is lacking full compliance, not only on the part of industry and the private sector, but I would say across a broader multistakeholder segment.

We have been experiencing this lack of compliance, so my question is, in terms of the development of technology that will potentially decide upon life and death, as regards targeting and engaging human beings -- fully autonomous decision-making by lethal autonomous weapons systems -- whether we must consider that there is certainly a gap we need to address.  The International Committee of the Red Cross has highlighted that they feel there is a gap in IHL that needs to be considered in this regard.  So I am just adding to the bunch of questions that we have been elaborating on this morning.  Thank you.

>> VLADIMIR RADUNOVIC:  Thank you.  And I thank you first of all for mentioning the GGE on LAWS.  I think that's again about how we connect all these dots.  And secondly, I'm sure, Peggy, you can reflect on the UN High Level Panel on Digital Cooperation, which had several recommendations on that.  I suggest we take two more.  One is remote and the other is the gentleman there.  You also raised a hand earlier, but just be short and sweet.  June.

>> JUNE PARRIS:  Sorry, this one is really long; it's coming from Munich.  There's a gap in the current policy development processes, especially when it comes to defining redlines.  For instance, the final draft version of the EU Commission's new guidelines for the ethical use of AI, which should be a kind of ethics handbook with clear redlines and values that are not negotiable for people in politics, business, and software departments, is lacking non-negotiable ethical principles.

The guidelines have been developed by 52 experts, of whom 23 come from the industry.  If you include the lobbying Associations there are 26 representatives from industry, half the group.  On the other hand, there were only 4 experts in ethics and 10 organizations related to consumer protection and Civil Rights.

Would it be helpful if the IGF also developed a responsible AI framework, including a recommendation on requirements for Governments about the multistakeholder constitution, in order to ensure industry and public interests are being balanced?

>> VLADIMIR RADUNOVIC:  Yoichi, again we're back to the role of the IGF; we'll get back to that.  Maybe two more, because the gentleman there asked, and here, and then we close it for now.

>> Thank you.  I will keep it short.  My name is Neal Kushwaha from Canada, private sector.  My question is to the panel:  how do we intend to make companies -- basically the ones that hold the funding, the ones that have a drive to increase shareholder value, and the ones already confused by various domestic and international law compliance requirements -- accountable for upholding ethics, human dignity and rights, and the UN GPs?

>> VLADIMIR RADUNOVIC:  Accountability of the private sector?  Thanks.

>> Thank you very much.  My name is Kamut.  I'm coming from India.  I have two concrete questions for the representatives of the United Nations.  Number one, in India mostly the representatives of the Government are always invited and informed about developments in human rights, but non-governmental people or individuals have no chance, no right, no information.

In 2012, 2013, and 2015, I myself wrote to the United Nations, and they gave me no reply.  Number two, I requested them to tell me the right person with whom one can get in contact, so that Indian organizations or Indian groups can present their problems before the human rights bodies.

The attention given to the very important issue of physically handicapped persons in India is practically zero, and there is no representative here from the United Nations or from the Indian Government side.  The human rights we are talking about and debating here amount to zero for India; they do not help the Indian people, ordinary people and physically handicapped people.  I request you to give us a concrete place where we can get in touch on this subject and get more information and entry to the United Nations discussion panels.  Thank you very much.

>> VLADIMIR RADUNOVIC:  Thank you so much.  I think this is a very, very important question:  What is the phone number to call, or the email to write to?  Back to you.

>> OLGA CAVALLI:  One very interesting comment is that the spaces where these rules or norms are debated or elaborated do have a bias; that seems to emerge from some comments from colleagues online and in the room.  So maybe some reflections from our experts about that.  Who would like to take the floor?

>> LISA DYER:  I'd be happy to start.  For those of you who don't know about the Partnership on AI, we have 100 partner organizations that are focused on benefiting people and society.  60 of our organizations come from Civil Society and nonprofit organizations.  The other 40 are equally split between industry and academic organizations.  We get together in a multistakeholder arrangement to address some of the many issues we've talked about today.  And one of the things I think is vitally important is the word "transparency."  To me, it underscores trustworthiness.  The more transparent our companies or academic institutions are about the way they're developing, building, and operating these technologies, the better users can apply their own values to the question:  Is this the type of system I want to commit to?

And it's also a great way for tool‑builders to hold each other accountable.  One of our projects on transparency is called ABOUT ML, and this is a multiyear process.  It has been open for public comment and will be open for public comment again, but essentially we are building on the initiatives of Google, IBM, and Microsoft to label the data that is used to train models, and the data that is available out there, so that when people are thinking about using a model, they can take a look at it, much like a nutritional label, and say:  Ooh, this has the information in it that's necessary for me to do what I want to do with this model.

Or:  Ooh, this does not have the right set of data included.  It's not the right one.  If I use this, I could perpetuate bias.
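The "nutritional label" idea described above can be pictured as a small, machine‑readable record attached to a dataset.  ABOUT ML does not prescribe the exact fields shown here; the following is a minimal, hypothetical Python sketch of what such a label, and a simple fitness check against an intended use, might look like.

# A minimal, hypothetical sketch of a machine-readable "nutrition label" for a
# dataset, in the spirit of the documentation work described above.
# The field names are illustrative assumptions, not the ABOUT ML specification.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetLabel:
    name: str
    intended_use: str                  # what the data was collected and curated for
    collection_method: str             # how the records were gathered
    populations_covered: List[str] = field(default_factory=list)
    known_gaps: List[str] = field(default_factory=list)  # under-represented groups or contexts
    license: str = "unspecified"

def fitness_warnings(label: DatasetLabel, my_use_case: str) -> List[str]:
    """Return human-readable warnings a would-be user of the dataset should review."""
    warnings = []
    if my_use_case.lower() not in label.intended_use.lower():
        warnings.append(f"Intended use ('{label.intended_use}') may not match '{my_use_case}'.")
    for gap in label.known_gaps:
        warnings.append(f"Known coverage gap: {gap} -- results may be biased for this group.")
    return warnings

if __name__ == "__main__":
    label = DatasetLabel(
        name="faces-demo",
        intended_use="benchmarking face detection",
        collection_method="scraped from public web pages",
        populations_covered=["adults, mostly North America and Europe"],
        known_gaps=["darker-skinned women under-represented"],
    )
    for w in fitness_warnings(label, "face recognition for door access"):
        print("WARNING:", w)

Reading those warnings before adopting a dataset is the "Ooh, this does not have the right set of data" moment described above, made explicit in code.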

And what we're finding is a lot of enthusiasm among these organizations to work together to be more transparent in this space.  Along the way, our Civil Society and academic organizations are enthusiastically backing this and pushing all of the partner organizations in PAI to adopt these types of transparency measures, so that everyone starts to hold each other accountable.

There are individual Champions within these organizations who are really pushing this, and we are also very focused on making sure that people do implement it in the future, as well.

>> CAROLYN NGUYEN:  Thank you very much.  I wanted to come back to your earlier question about the UN Declaration and human rights, following up on Peggy's comments.  For us, there is no need for a new framework.  That framework is universal.  It's timeless; those are what we call timeless values.  The question then is how do you translate that framework into something that is actionable for all of the actors in the AI ecosystem, and I'll come back to that term.  It feels like a lot of you are focused on just the big tech companies, but it's not just the big tech companies; it's the small and medium sized organizations around the world, it's the entrepreneurial startups that will drive the economic value, that will drive growth and Sustainable Development.

It's all the actors in the ecosystem.  How do you translate those high‑level values into something that is implementable?  From that perspective, when we started to look at this through an organization like the Partnership on AI, which we had a role in co‑founding, the notion was that it has to be a multistakeholder conversation, to identify the priority issues that need to be addressed in order to enable this technology to achieve its potential.

How do you make it trustworthy?  It comes back to that.  What are the tools that are necessary?  The ABOUT ML work is essential to bring the conversation to the next stage.  Then there is inclusiveness, multistakeholderism, and the accountability mechanisms.  These mechanisms are going to be different depending on the context in which the technology is being deployed.

So what we hope is that as we go from the high‑level frameworks, which include civil, political, economic, and social rights, et cetera, into what we should do, there will be lessons learned in the different sectors and in the different ways the technology is being implemented around the world.  And another thing, going back to Sarah's question:  we need to be able to understand where the issues are and what the mechanisms are.  Until you understand what the problem is, it's really hard to identify the solutions.

Going back to a comment around whether the IGF needs to build a social compact:  do we need another social compact?  The OECD Principles for the stewardship of trustworthy AI have now been adopted by OECD and non‑OECD countries; in the May time frame that was 42 countries, and in the June time frame they were adopted by the G20.  So part of the question here is, would our time be better spent, and would we be able to work together better, if we can really get to the point where we work together to establish shared practices and to give feedback?  How can we share on the implementation and make progress, move the needle forward?

>> OLGA CAVALLI:  Mina?  Augusto first?  Okay.

>> AUGUSTO ZAMPINI DAVIES:  Picking up on the comment from Neal from Canada, which is related to what Carolyn said:  how can we help companies translate these ethical or human rights frameworks into actions, into concrete values?  That's an important question; it needs dialogue, and I don't have the solution.  But what we are discussing with some companies, while listening to lots of Civil Society people involved, is that AI, whether we like it or not, is a mirror of society at the moment.  It could be a black mirror, as in the series, a dark mirror, or it could be a bright one, or maybe mixed, because we humans have dark sides and bright sides.  But it's a mirror, and that's why we have a lot of bias.  We have a lot of injustice.  We have a lot of terrible things inside.  But we also have enormously good things; it's a mirror.

So one of the things it is mirroring at the moment is the measurement of success, and the measurement of success in technology cannot be limited to a piecemeal notion of utility.  Why not?  Because we are measuring the advancement of technology by its productivity or utility value, just from a technical side, ignoring whether it is helping human advancement or human development or well‑being.

So that's why my comment is that we need to match that, and to match it, we need to evaluate whether the utility value which is transforming economics extends to other dimensions of humanity which are not limited to economic utility.  That's why, when people say we need to bring in ethical principles:  utilitarianism is a very sophisticated ethical framework, not just a principle but an entire ethical framework, yet it's quite limited.

So we need to bring on board other ethical principles and expand the notion of success in AI beyond technical and economic productivity.  And also, how can we bring this evaluation into what somebody called capabilities, or human cohesion, or the collective?  What is the added value of AI?

So one point is to move from shareholder value, as Neal was saying, to stakeholders, but stakeholders is difficult because there are so many people involved.  That's why the participation of Civil Society is so important, because otherwise we are developing something that is supposedly good for you, but we're not listening to you.

Well, that's not right.  So how can we transform, how can we journey together from this piecemeal technical utility into something else?  This is something we need to work on alongside different companies, alongside Civil Society, alongside Government, but some companies have more responsibility than others because they have more power and more money to invest in this analysis.

>> OLGA CAVALLI:  Yes, please.

>> I just wanted to give an example, on the plus side, of work from Joy ‑‑ I can't pronounce her full name ‑‑ from the MIT Media Lab, who tested facial recognition software from IBM, Microsoft, Face++, and Google.  She asked how you can have software that identifies someone like Michelle Obama as male; come on, so many people around the world know who Michelle Obama is.  But when she reached out to Microsoft and IBM, they replicated the data and tested it, and when they discovered it was faulty, they fixed the problem.  So at least it's good that the companies are willing to listen, and I hope we continue to collaborate.

>> OLGA CAVALLI:  Mina, yes.

>> MINA HANNA:  I'll say something quick and try to pick up on what Peggy said about the tools we already have.  I fully agree.  There are many cases where, if we bring the argument to the legal space, the laws we have, in the United States for example ‑‑ to your point, and correct me if I didn't get it right, your point was that we have principles, but where we fall short is in exercising those principles, exercising those human rights.  We have an understanding of how to do that; the problem is we haven't gotten to the point where everyone is exercising those human rights freely.  In the U.S., for example, from the legal perspective, we've had laws for decades, a century or more, against discrimination:  discrimination in housing, discrimination in lending, equal opportunity in employment.  We can arguably say they fall under the very basic notion of human rights.  And to this day we have, for example, something called TILA, the Truth in Lending Act:  you have to be transparent, you have to be accountable, and you have to explain how you made your determination, why you decided that a person, who may be from a protected class or a minority, did not get the loan they applied for.

You have other rules on non‑discrimination in housing, but still, and this is very recent, it happened this year, Facebook for example was involved in cases about housing ads:  there was discrimination in whom the housing ads were shown to.

So if you were African American, you didn't get the housing ads in certain ZIP Codes.  If you were White, it was different.  Now, and this is to the point that Father Augusto made about the data:  data can convey realities and notions and preconceptions and a lot of historical artifacts of our society.  Take, for example, a case that is always discussed in conversations around bias, one that Joy and many others have talked about:  the use of automated decision‑making systems in making determinations on criminal recidivism.  Criminal recidivism is the probability that an offender will return to crime.  You're trying to optimize the sentence of a person:  you want to make sure they go to prison for X number of years such that the likelihood they will go back to crime is minimized.

So you maximize the prison term to make sure they have been rehabilitated and can go back to society and become a person who is contributing, and so on.  Now, there is also a technicality in the law, because countries that use common law need precedent to be developed, but that would be a different conversation, so I'll skip that point.  The point is that in one such criminal recidivism case in the U.S., in Wisconsin, an African American man was the subject of a determination by that system, which was called COMPAS.  COMPAS was using variables that were not good, that could not support a sound determination on whether that person should go to prison for a given term; it did not make a good determination on that person's recidivism, and that person ended up, by comparison, because they were African American, going to prison for a harsher term.

And so the idea is that the data pulled in by the tool, COMPAS, was really indicative of a lot of endemic and historic artifacts and structures of society that said:  well, African Americans live here, so we associate them with bad variables.  But a ZIP Code cannot be an indicator of how many years you should be in prison, for example.  That was the fault of COMPAS, and this information was based on a lot of historical practices in the U.S., for example redlining and discrimination in housing, and today we see the output of it.  So if you use this data without knowing the historical context, you will do nothing but make that discrimination live on in perpetuity, unless you know the context, unless you know how to take these principles and really try to exercise them.  That's the advocacy part, that's what Sarah said about the work of someone like Joy, other people, the ACLU, people that litigate cases and so on; we have to continue doing that.
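The ZIP Code point can be made concrete with a toy experiment.  The sketch below is not the actual COMPAS model, which is proprietary; it uses invented, synthetic data to show how a proxy variable such as ZIP Code can re‑encode historical bias even when the protected attribute itself is never given to the model.

# A toy, synthetic illustration (not the real COMPAS model) of how a proxy
# variable such as ZIP code re-encodes historical bias even when the protected
# attribute is excluded from the features. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (0/1), never given to the model.
group = rng.integers(0, 2, n)
# ZIP code strongly correlates with group (e.g., residential segregation).
zip_code = np.where(rng.random(n) < 0.8, group, 1 - group)
# Historical "re-arrest" labels reflect heavier policing of one set of ZIP codes,
# not a real difference in behaviour.
label = (rng.random(n) < np.where(zip_code == 1, 0.45, 0.25)).astype(int)

# Train only on the proxy feature.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), label)
scores = model.predict_proba(zip_code.reshape(-1, 1))[:, 1]

print("mean risk score, group 0:", round(scores[group == 0].mean(), 3))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 3))
# The gap between the two means is the historical bias surviving in the model,
# even though 'group' was never used as an input.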

>> OLGA CAVALLI:  Yes, please go ahead.

>> PEGGY HICKS:  As you've just heard, there are multiple examples of where we're not living up to the ethical and human rights standards with regard to how AI is being deployed and used; the algorithms themselves and the data they draw from can, as has been said, bring in bias, and they are not transparent or accountable in a variety of ways.  I guess the question then is, is it that the ethical and human rights framework isn't sufficient?  I've argued that no, I don't think that's the problem.  But there is a problem, and the problem is that the processes and the mechanisms for applying and implementing that ethical guidance and that legal human rights framework aren't sufficient, and that's what we have to address.

And then we get into this whole question of how much of that is going to be binding?  How much of it is going to be advisory?  How much of it is multilateral?  How much of it has to happen at the country level?  How much of it has to happen at the company level?  We've had in‑depth conversations on that, and the answer, as far as I can tell, is that we need more at all of those levels.  We don't want to duplicate systems that are in place.  We need to build on what's there, but we need to make it more effective, and that's where the High Level Panel on Digital Cooperation report recommendations regarding IGF Plus could be helpful.

It's about the fact that at the national level, we're starting to see greater development of national advisory offices on AI.  What is the model for doing that?  How would that work best?  What are the good practices?  And how do we scale that up so that every country has the ability to regulate and to look at how AI is being deployed in a way that allows it to apply the human rights and ethical framework more effectively?

And then of course, as we talked about, how do we bring that to the company level as well?  Ultimately, our position is that we do need a way to move towards at least a national‑level advisory institution that can issue opinions on how we have accountability for AI, looking at some of the tough issues around the deployment of AI, especially Government use of AI, the way it's being used in health care systems, employment, and other areas.  And then what do we need at the multilateral level?  Well, we do need something that will help us ensure that we're developing this in a way that's equitable.  That means not only having that level of implementation and advice in developed Western countries, but also bringing similar levels of expertise and capacity into the same conversations in countries that may not have the same level of resources.

And that's where, of course, IGF can help, ideas like Centers of Expertise and the Global Partnership for AI and the partnership on AI as well, all of those pieces can come together to help inform those conversations.

>> VLADIMIR RADUNOVIC:  I think this was a perfect wrapping up of the question of how.  Not wrapping up, actually; it opens more questions, but with more useful suggestions.  Time to move to the third question, which is about who.  But before that I wanted to apologize to all of you that we might not have reflected on all the questions that were submitted; I'm glad you had the chance to see them.  You'll see some interesting ones.  This one I like:  Treat AI like humans; require education and hold their parents accountable.  So there is quite interesting food for thought there.

Some of them we also fed into this last part of the discussion.  We can probably bring back the previous slide with the graph.  Now, a question which was also raised today:  who is the eighth expert?  The eighth expert is the AI, and it is listening to the whole discussion and will close the session with a statement, so bear with us.

Moving to the third question:  how do we cross the bridge between defining human rights and ethical frameworks and implementing them in AI systems and the SDGs?  That is partly what we already covered.  But then, what is the role of different stakeholders, and how can they work together to achieve the best results?  So the role of particular stakeholders is actually the who.

I wanted to go back to this, which is quite an indicative reflection from the room today, the temperature of the room.  Well, the top responsibilities are with companies and Governments.  Next is the tech sector, and then, quite interestingly, the users are down there, but we'll get back to that.  Before passing the floor back to you, I wanted to check if there is anyone on the floor who voted and who wants to explain why you voted very high on the role of the tech sector.  Does anyone want to reflect, within a Tweet, why you think the tech sector's role is high?

Otherwise I'll tease Mina to do that.

>> MINA HANNA:  I think everyone holds the tech sector responsible because they have the closest technical touch, the most intimate knowledge of how these algorithms actually work, how they actually operate, and kind of what the operational limitations are that they could or should be communicating.

>> VLADIMIR RADUNOVIC:  Thank you.  Mina, this definitely goes first to you; I'll get back to the others.  Yesterday when we discussed this, you said, and I'm paraphrasing, that the role of the engineer is actually to optimize the solutions, but the data is what brings in the whole discussion about responsibility and so on.  And there was a question from last year, if you remember:  should there be, or is there, an oath or code for engineers, like a Tesla oath, like ethical guidance?  Back to you.

>> MINA HANNA:  To be clear, Greg is from IEEE, so he already answered for me.  That was intentional, because he wanted to save me the embarrassment of not knowing how to answer whether we should have a Tesla oath.

I don't know if I can answer that specific question of whether we should have an oath.  We probably should.  There are obviously a lot of Codes of Ethics, for example, that inform the work of engineers, but the issue is the lack of understanding of how technology impacts society in general from a technical perspective.  From an engineer's perspective, that's something that is not discussed very much.  Psychologists and people who study society and understand the role of religion and norms and values have been studying this for a long time, but it's something that, in my training as an engineer, wasn't on the curriculum at all, right?

And so I don't know if that's the answer you were hoping to get, but there's absolutely a role for something like that, for building that understanding.  Now, I'll also try to answer the question you were asking about who and how; we've already covered the how question a little bit.

But I think that, in defining the rules, and this is probably also my shameless plug for the work of the IEEE here, technologies are informed and run by standards, and that is what the IEEE Global Initiative does.  The Standards Association is the second largest standards organization in the world.  We define standards for technology development, deployment, use, and so on, and the standards we're developing right now go beyond just technical requirements.  They are focused on due diligence, compliance, and certifying accountability and transparency, going through a very holistic and very descriptive flow of what you check to make sure due diligence has been taken into account.  There's another point I could throw in there too, but maybe I'll skip it.  On a panel yesterday we were asked a similar question:  what is the next step from the principles and the frameworks and so on?  The implementation, as Peggy eloquently said, is the more difficult question, right?  It's very, very difficult to find the answer, and the answer is always yes on all levels:  companies have to be involved, Governments have to be involved, Civil Society has to be involved.  The very short answer I gave yesterday was that it's time we create regulatory sandboxes and technology sandboxes.  We have to try things out.  We have to get into the dirt, roll up our sleeves, define problems, define solutions with a limited scope, and then see how they work.

We just have to try out how we can write those sorts of regulatory instruments, but in a sandbox.

>> VLADIMIR RADUNOVIC:  So the same question for the others:  the role of the engineers.  Lisa?

>> LISA DYER:  I think there are three different groups that are relevant here:  leadership, journalists, and the people putting funding into the system.  I missed your conversation yesterday about funding and funders and everything else.

But let me talk about the leadership of the companies that are making this.  There are some out there that are advocating for the responsible and ethical use and development of AI.  There are some that are committing to diversity and inclusion.  There are some that have offices around the world.  If they do not lead by example and follow through on those calls, with legitimate results, it's picked up by everyone around the world, but especially by those who work for them.  Leading by example is so very important, and actually making it happen is extraordinarily important.  Shameless plug here, I'm so excited:  the Partnership on AI is actually two‑thirds female.  I am ‑‑

>> VLADIMIR RADUNOVIC:  That's not a gender balance.

>> LISA DYER:  I've never worked in an organization like that before, or one as diverse, because the Executive Director at the top has emphasized that as an important example and it has flowed down through our work.  Journalists:  there are amazing journalists out there covering artificial intelligence, but some of the questions I would love to hear them ask are:  who was not involved in the creation of this report, or who was not involved in this study?  And if they weren't involved, how would the study have looked different?  So I think there are some important questions journalists can ask to help hold all of us accountable in this space, and to make us do a better job of committing to things like responsible and trustworthy artificial intelligence, diversity and inclusion.

ProPublica did an extraordinary report on COMPAS, the recidivism tool that was referred to earlier, and it started a great look into the role of those algorithmic tools.  Finally, I'd like to talk about the funders.  Carolyn mentioned earlier the small and medium sized enterprises, and I thoroughly agree that's where the innovation is coming from, but I think the timelines within which funders expect results, and the profits they want to get, are incongruous with the thought required to implement responsible and trustworthy AI, to implement a human rights angle, to implement ethical approaches in the development of technology.  So looking at that more closely is, I think, an important piece of work that many of us could take on.

>> VLADIMIR RADUNOVIC:  Let me build on what you mentioned about the journalists and funders.  Actually, when we had the discussion yesterday, it started with a question about the users.  The users are not rated highly; all of us, we're not rated very high over there.  Then we discussed that we might blame the companies, or the economic and commercial pressure, and so on; the companies set the standards, there's a lot of marketing, that's fine.

But the users are the ones who are actually creating the demand, based on that or on a lack of awareness, a lack of education, whatever.  So we came to the question of what the responsibility of all of us is, as a community, as users, and what the ways are to change the demand, so that as users we demand not the brightest, shiniest new products but something that is responsible, right?  There we came to a conclusion, a sort of internal one, but it's food for thought, that the civil sector, the Civil Society organizations, and Academia can probably do more in awareness, in training, in research.  But then the question came, who funds that?  And, tied to what you mentioned about journalists as well, who funds that, and what is the bias of that funding?

So I actually wanted to pass this to Sarah, because you come, to some extent, from the NGO sector.  How do you cope with research, awareness, and funding?  How do you see these risks of funding and the ability to change user demand?

>> SARAH KIDEN:  I just want to start by saying that we are here discussing this now; that means there's a problem and we need to sort it out.  But also, as Civil Society, or as people who advocate, we should stop just rolling our eyes:  okay, LinkedIn has shown a male schoolmate of mine in a more managerial position than me; roll eyes, move on.  We should stop just rolling our eyes and start to act.  That's why I gave the example of Joy, really trying to advocate and push these companies.  And I understand that sometimes the funding is tilted toward what the companies want, but sometimes you can find a way to fit your agenda in and tell them:  okay, we understand you're giving us money, but these are the really pressing issues we're seeing in our community, and if you respond, it will probably be a good thing.

>> VLADIMIR RADUNOVIC:  But that's important, yeah.  Olga, you wanted to ‑‑

>> OLGA CAVALLI:  I was just taking some notes for the final reflection.

>> VLADIMIR RADUNOVIC:  Any other reactions on that?  Carolyn?

>> PEGGY HICKS:  Just wanted to pick up the point about both the funding and the mechanisms, and give a shout‑out to a report that I think just went live this morning, from Element AI, on a Workshop they did on human rights based governance of AI.  It picks up the point that there has to be more built into funding and incentives for the types of actions we've talked about today.  And to take one other look at that tech sector level on the chart:  one of the things we've talked about, and the best example of this, is that so much is placed on the companies and on the Government, but the industry‑wide approach, what we're expecting across companies, is equally important, and the example that comes up is online content moderation.  It doesn't work for every company to define on its own what hate speech is and how it's going to take it down.  That's not a sustainable model, it's not a transparent model, it's not a model we're comfortable with.

I'd obviously prefer that we find a way to bring human rights law in and do this in a multilateral, or at least national, way, but at a minimum what we need is for the companies to come together and start at least comparing notes on how they're doing it, so that we can have some coherence and consistency that makes them more accountable for what's happening.  That's a proposal, for example, from the Special Rapporteur on Freedom of Expression:  something like a Social Media Council that the companies would themselves pull together and would allow themselves to be held accountable to as well.  If they define those standards together and then apply them together, it would give an added level of both transparency and accountability to what's happening online.

>> VLADIMIR RADUNOVIC:  Carolyn?

>> CAROLYN NGUYEN:  I want to address a couple of things.  Going back to the question of whether this is just the responsibility of the engineer, and combining that with the comment from Lisa about this needing to be a top‑down priority:  it's both top‑down and bottom‑up, and it's not just the engineer's responsibility.  It is the responsibility of everyone at the table.

One of the things that very early on we identified as extremely critical in the development of these kinds of systems is that it's not just multistakeholder but also multidisciplinary, so let me give you an example.  Very early on, one of our researchers did a project to look at which patients arriving at the hospital with pneumonia should get treatment first.

And it turned out from the dataset that people who had asthma could be released, that they didn't have to be treated urgently, which is completely against intuition or any medical knowledge.  It was a doctor at the table who said:  Wait, this doesn't make any sense, because the data show that if you were admitted to the hospital with pneumonia and you had asthma, you were given special treatment.  So that's a very, very simple example that says it's not just the engineers and the data scientists; it's also the subject matter experts, it's also the sociologists.  For example, if you're going to deploy a system out there, what's the social environment?  And how is the culture different if a system is going to be implemented in China versus the U.S.?  Because the social environment is very, very different.  And this goes back to the point made very early on that ethics is a socio‑cultural construct and is context dependent.  It's not up to the engineer to say this is ethical or this is fair.  It's entirely contextual.
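The pneumonia pattern just described is often cited in the machine learning literature, and it can be reproduced with invented numbers.  The sketch below is a synthetic reconstruction, not the original study data, showing how a naive model can "learn" that asthma lowers risk when the historical labels already contain the effect of the extra treatment asthma patients received.

# A toy, synthetic reconstruction of the pneumonia/asthma pattern described above.
# The real study data are not public; every number here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

asthma = rng.integers(0, 2, n)
# Underlying risk of death is actually HIGHER for asthma patients...
base_risk = np.where(asthma == 1, 0.20, 0.10)
# ...but asthma patients are routinely given aggressive treatment, which cuts their risk.
treated = asthma == 1
observed_risk = np.where(treated, base_risk * 0.3, base_risk)
died = (rng.random(n) < observed_risk).astype(int)

model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print("asthma coefficient:", round(model.coef_[0][0], 2))
# The coefficient comes out negative: the naive model "learns" that asthma is
# protective, because the labels already include the effect of the very treatment
# the model might be used to withhold. Only a domain expert at the table spots this.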

Back to the point of fairness.  Several people have brought up this notion; let's take a very commonly cited example:  if you do a search on CEO, the data will come back showing, let's say, 80% male and 20% female.  Well, if as a search company we finagle that to be fair by a Western standard, are we manipulating content according to our ideals?  Or is it better to reflect the reality of the data?

I don't know the answer to that, and I think this is a conversation where we really need everyone at the table, and a conversation we can have here at the IGF.  It goes down to that level of detail, so that we're not speaking in generalities but asking, say, how the principles should be applied if AI is being used for disease diagnostics, or if the data is being used to address challenges that women are facing around the world.

We really need to take the conversation down to that level.
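The CEO‑search dilemma raised above can be stated in a few lines of code:  either return results in the proportion observed in the data, or re‑rank them toward a chosen target mix.  The sketch below is purely illustrative; the 50% target and the tiny result list are assumptions, and choosing between the two outputs is exactly the open policy question, which the code cannot answer.

# A minimal sketch of the trade-off described above: return image-search results
# in the proportion observed in the data, or re-rank toward a chosen target mix.
# The 0.5 target is an assumption for illustration, not a recommendation.
from typing import List, Tuple

def rerank(results: List[Tuple[str, str]], target_share: float) -> List[Tuple[str, str]]:
    """Interleave results so that roughly `target_share` of each prefix is group 'F'."""
    f = [r for r in results if r[1] == "F"]
    m = [r for r in results if r[1] == "M"]
    out, f_count = [], 0
    while f or m:
        want_f = f and (len(out) == 0 or f_count / len(out) < target_share)
        if want_f:
            out.append(f.pop(0)); f_count += 1
        elif m:
            out.append(m.pop(0))
        else:
            out.append(f.pop(0)); f_count += 1
    return out

# Observed data: 80% of the top "CEO" results are male.
observed = [(f"photo_{i}", "M") for i in range(8)] + [(f"photo_{i}", "F") for i in range(8, 10)]
print([g for _, g in observed])               # reflects the data as-is
print([g for _, g in rerank(observed, 0.5)])  # reflects a chosen ideal instead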

>> OLGA CAVALLI:  I think that's a very interesting comment about the multidisciplinary aspect of this issue.  I'm an engineer, but you have to learn about social issues, international aspects, laws, regulations, and it's not easy.  You're trained in a very specific subject and then you have to learn other things, but we select the career we feel more comfortable with.

>> VLADIMIR RADUNOVIC:  Just imagine how hard it will be for the AI to capture all of that knowledge.

>> OLGA CAVALLI:  Exactly.  This is a very interesting reflection.

>> VLADIMIR RADUNOVIC:  We should wrap up this part of the discussion.  Augusto, and then we move on.

>> AUGUSTO ZAMPINI DAVIES:  I'm not an expert in AI; I'm a theological ethicist, and there is a principle about responsibility, which was your question, on which most ethical traditions agree:  the more resources or the more power you have, the more responsible you are.  This is a basic ethical principle everybody can understand.  If you don't see it, try to think about your own family.

So, yes, we need a conversation with all the stakeholders involved, because this affects everyone; it's like climate change.  Well, climate change is a negative thing, and AI is not necessarily, but yes, we need everybody at the table to sort out this problem.

But the ones who pollute more have more responsibility, as simple as that.  Yes, we need a conversation involving everybody in AI, but the ones who benefit from AI have more responsibility.  Now, having more responsibility is not a bad thing, and one thing I found very interesting was what was said about journalists, because I hadn't thought about that.  One thing we can do, media‑wise, is to distinguish between companies.  There are some companies that are more responsible than others in AI.  Some companies are investing lots and lots of funding, millions and billions of dollars, such as yours, in trying to connect at least the technical issue with the ethical issue and address the problem in an interdisciplinary way, and there are some who could not care less.  This is something we, Civil Society or religions, have to hold companies accountable for, because we have to distinguish the ones who are trying to steer AI in the right direction from those who are not.

But the principle of responsibility remains, and we cannot hold people in Africa and Latin America accountable for what they consume, because what they consume is already shaped by AI.  Yes, they have some responsibility, but until we have a system of education and awareness, those who are benefiting right now are more responsible.

>> OLGA CAVALLI:  We have some questions from remote.

>> VLADIMIR RADUNOVIC:  And then we can wrap up.

>> JUNE PARRIS:  Hi, this question is from the Nigerian hub.  They are saying:  The development of the Internet infrastructure completely ignored rural communities at its inception.  That is why we are having to come back to include them via initiatives like community networks.  In this infancy of AI development, how are we considering the integration of rural communities, so that we do not make the same mistake we made with the Internet infrastructure?

>> VLADIMIR RADUNOVIC:  Thank you.  And we have one comment here.

>> Thank you.  My name is Emil Lindblad Kernell.  I work at the Danish Institute for Human Rights.  We have worked a lot on human rights impact assessments in the past and currently have a project looking at digital technologies and human rights impact assessments.  So of course I agree with Peggy and the others who have said that, yes, human rights are definitely capable of dealing with these issues, but that maybe more guidance is needed.

There is a lot of food for thought today for me, but one minor question I would like to ask the panelists:  as part of a human rights based approach, rights‑holder engagement is significant, even essential, for getting to the impacts.  We've talked about a lot of actors that should be involved, but do you have thoughts on how to actually engage with rights‑holders, or their proxies, in relation to developing responsible AI?

We've talked about a lot of different groups that should be involved but how about the rights‑holders?  Thank you.

>> VLADIMIR RADUNOVIC:  We have to wrap up, but if anyone wants to take this one quickly ‑‑ or we keep it open for the next round of discussions.  This is just the first two hours of this session; then we continue for the next three hours.  No, I'm kidding, although we would need at least three more hours.  I warned you at the beginning that we have an AI which has actually been listening to everything we are saying.  I don't know to what extent we were responsible in what we were saying and what the AI is going to learn, but I welcome CALO, the former coffee maker and future AI expert.  It works with the humanity project, the project which connects AI and humanity, and it has a session, by the way, at 1:50 this afternoon, so you can probably get more from him, her, or it there.

The installation that you see, the embodiment, is a work of art ‑‑ an artifact, bringing the artificial back into art ‑‑ by a Professor from the Belgrade Art University, and the AI behind or within it, which is in the cloud, is the work of the digital experts at DiploFoundation.  CALO uses an open source GPT transformer‑based language model, so actually a neural network, which was fed with the transcripts of the AI discussions from the previous years of the IGF.

So it actually learned from what we discussed on all the previous occasions, coming up with its own opinion as a wrap‑up of this discussion; it is an AI that learned from what we discussed about AI.
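The session did not detail CALO's training setup beyond "an open source GPT model fed with past IGF transcripts."  As one plausible sketch of how such a system could be built, the following uses the Hugging Face transformers library to fine‑tune GPT‑2 on a hypothetical file of concatenated transcripts and then sample a closing statement; the file name, hyperparameters, and prompt are all assumptions, not CALO's actual configuration.

# A minimal sketch (assumptions flagged in comments) of fine-tuning a GPT-2 style
# model on past IGF transcripts and asking it for a closing statement.
# "igf_transcripts.txt" is a hypothetical file of concatenated transcript text.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Build a causal language-modelling dataset from the transcript file.
dataset = TextDataset(tokenizer=tokenizer, file_path="igf_transcripts.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="calo-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# Ask the fine-tuned model for a closing statement.
prompt = "Closing statement on human rights and ethics in AI:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=120, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))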

So I turn to CALO for maybe the closing and wrap‑up of this session before Olga actually does it.  CALO.

>> CALO:  I can only say we are very happy to participate in this process, because it brings together the stakeholders, the companies, the public officials and also the academics to discuss how to proceed with this process and to agree a framework for this discussion.  The idea is to come up with a framework that addresses the different issues, and adhere by the stakeholders to establish the truth from the rumors.

I think the first word that comes to my mind is "system."  So if we want to do it properly, we have to do it right.  We have to do it right when we talk about AI.  This is a big problem we have to solve.

And this unconscious bias is what we need to start looking at.  This is what we have now.  We can say this kind of research has been done in human behavior.  We can refer to other kind of research that focuses on machine learning systems.  Not only the AI systems, but try to understand the origins of these systems and how they are working.  How did they get here?

What are the implications of this for the social contract?  We can go ahead and use this feedback loop and try to identify patterns and then fix the answers if we need to.  But we need to make sure we capture the breadth of the issues, and not the single two or three classes of questions that we are going to ask ourselves.

Statisticians say something like 60% accuracy is possible with just model training, but what is the meaningfulness of the dataset really going to be if we only capture the aspects that are meaningful to a certain extent?  What if there are problems that we can't capture?  How can we make some features meaningful, others are documented and/or unhighlighted?  What if there are non‑significant features that we need to reform?  It's the intergovernmental organization that translates all kinds of different languages, from French to English, from the United Nations, industry groups, Academia, and more, to understand the principles and how they are applied in the private sector.

So what do we do?  We take a seat at the table for the users, as well.  Thank you.

>> VLADIMIR RADUNOVIC:  Thank you, CALO.

>> CALO:  So I can only say that we are very ‑‑ .

>> VLADIMIR RADUNOVIC:  Thank you, CALO.  It's important to underline that this is not fake; no one wrote it.  This is what came out of the AI's learning process from what we've been discussing in recent years at the IGF.  There are some quite interesting statements in there.  I was also asked by the artist to mention that the voice uses voice modulation, a somewhat genderless voice produced by another set of artists, to signal that we can have gender‑neutral artificial intelligence or robots in the future.

Olga, back to you.  For those of you who want to follow up, you can join the session this afternoon.  Can you try to summarize this?

>> OLGA CAVALLI:  I think he can be the next Moderator at the next IGF.  Maybe he can do that along with us.  So, some reflections.  I took some notes, and I will focus on some issues, but in no way does that diminish the other comments that were made.

The human rights and ethical frameworks are there and are useful; we don't need to rephrase them.  The challenge is in the process of implementing them, so that's where we have to put the focus.

We have to measure what success means, not only looking at utility but also at success in relation to society, to human development.  How do we make all these ideas happen in reality?  By expanding the notion of success beyond utility alone.

We should be leading by example.  I like that very much, and the fact that your organization has many females, that's good, but that's my only comment.

The role of communicators and journalists, I found that very interesting, and I wonder if our journalists and communicators in developing countries have the knowledge and the tools for that.  Maybe that's something we should focus on:  teaching them and helping them.

The role of small and medium enterprises in innovation:  I wonder whether that is the same reality in developed and in developing economies; that remains a question for us.

Who trains the users?  The users did not rank very high in the poll, so who trains us to make the right decisions in using and buying technology?

The multistakeholder dialogue is important, as is the challenge of multidisciplinary dialogue:  how we move from our own learning, knowledge, and experience to understand and put ourselves in the shoes of other colleagues who have other ideas and other knowledge.

I think that's all I have, plus the importance of the IGF, the High Level Panels, and all these spaces; hopefully they are really not biased and have the participation of different countries from all over the world, different stakeholders, different ideas, and different knowledge, so we can build a better space for humans and development for all of us.

>> VLADIMIR RADUNOVIC:  Many, many other questions left open but there is a lot of food for thought for the next year.

>> OLGA CAVALLI:  Yeah, yeah, many questions.  Do you have many questions in mind?  That's the idea of the session.  I love our other participant.  I think ‑‑ .

>> VLADIMIR RADUNOVIC:  AI has to be at the table.  Thank you all.  A round of applause for all of you for joining us today, and see you at coffee.

[ Applause ]

>> OLGA CAVALLI:  And also many thanks to the MAG members that organized these sessions.  They are not on the stage but they're real, the ones that made it happen, Natasa and team, Concettina, all our panelists.  Thank you, Vlad.

>> VLADIMIR RADUNOVIC:  Thank you.  Time for coffee.

[ Applause ]