

IGF 2020 - Day 9 - WS125 How do you embed trust and confidence in AI?

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 




>> MODERATOR:  Welcome, everyone.  Thank you so much for joining us today.  I'm Catherine Kummer, Global Deputy Vice Chair of Public Policy at EY.  We have a great group of speakers for this panel, all of whom have deep experience with emerging policy issues surrounding technology, the Internet and AI.  We have a diverse group representing perspectives from intergovernmental organisations, technical and standards associations, business, NGOs and think tanks.  I will let them each introduce themselves in a moment.

But first, we all know that the rapid adoption of AI is having a tremendous impact on business and society.  Like all technological change, this creates opportunities as well as risks.  We see this firsthand in EY's business as well as with our clients.  With AI, one significant risk is trust.  Trust that outcomes are accurate and unbiased, trust that data is protected from unauthorized access, trust that technology is used ethically.  EY and The Future Society did a study on bridging the AI trust gap.  We surveyed policy makers and those in industry and found significant differences in how the two groups think about AI, trust, ethics and governance.

For example, our survey found that the biggest absolute gap in ethical priorities between the private and public sectors concerns fairness and avoiding bias.  In seven of 12 use cases policy makers placed this among the three most important principles, but companies did not.

The second largest gap was around innovation.  Not surprisingly, industry prioritized innovation more strongly across the board.  This could suggest that companies may not be fully assessing ethical risks during their R and D process, or at least not in a way that meets the expectations of policy makers and the public.  Poor alignment diminishes public trust in AI, which in turn slows innovation and the adoption of critical applications.

Our view is that strong governance and appropriate, consistent regulations can build ethical trust in AI.  If AI is fair, transparent and explainable, this will increase public trust, facilitating further adoption.  Focused multi‑stakeholder discussions like the one we are having today will help further the debate on what governance could be implemented to strengthen trust and confidence in the use of AI.  Closing the AI trust gap is a top priority.  So let's get the discussion started.

We are going to begin by having panelists introduce themselves and make a few opening remarks, then we will move into a moderated discussion followed by audience Q and A.  So please send in questions as we go through the discussion at any point.

So first let me turn to Dr. Koene.  Dr. Koene is a member of our global public policy team and serves as our global AI ethics and regulatory leader.

>> ANSGAR KOENE:  It's great to be back at IGF.  Let me talk about what my role is as global AI ethics leader.  Part of this is engaging with the global public policy team to help support the way in which public policy in this space is being developed, by providing insights from the kinds of information that EY is getting through its internal work and also through engagement with clients.  Another important aspect of my work is working internally to help shape our own governance frameworks around AI, to ensure that any types of AI projects we are developing internally adhere to best practices around the use of AI, to make sure they do not cause unintended harms or run afoul of any kinds of regulatory requirements, but also thinking about the ethical and societal implications of the kinds of projects we are working on.  A third aspect of my work within EY is to support the teams that are developing our trusted AI framework, which is one of the offerings within our consulting work that helps the business community think through the implications of introducing AI systems.

My thinking and my engagement with the work around trust in AI actually predates my joining EY.  It was part of my work at the University of Nottingham in the academic space, on two projects that were really focusing on the impact of digital technologies on society, engaging with how people experience these kinds of technologies and what issues they feel are problematic.

One of these issues, for instance, being questions around a fundamental understanding of how these systems work.  Why do they collect certain types of data about me in order to do this kind of processing?  Am I retaining sufficient control over the direction in which these things are going?

Part of the work we were doing in the academic space was that we really wanted to take this out of the academic bubble and translate it into the technical, business and policy space, which is one of the ways in which I started to engage with the IEEE.  As part of my work with the IEEE, I'm chairing one of their standards Working Groups to develop a standard on algorithmic bias considerations.  I'm also engaging with NGOs and foundations around the rights of young people online, so really trying to look from a broad perspective at how these technologies are being used, how they are impacting society, and how we can make sure that there is justified trust in these systems.

>> MODERATOR:  Thanks, Ansgar.  Next is Dr. Clara Neppel.

>> CLARA NEPPEL:  Thank you for the invitation.  It is a pleasure being here, and thank you, Ansgar, for the collaboration through all of these years.  My name is Clara Neppel and I'm the Senior Director of IEEE Europe.  Ansgar mentioned it briefly.  We are, if you want, the world's largest technical professional organisation.  We have around 400,000 members in almost every country in the world, and our members, just like Ansgar, are involved in different phases of technology development and standardization, from research and development to deployment.

And also, I think what is interesting when talking about technology is that our scope ranges from electrical engineering, which is in our name, two of the Es, up to quantum computing, Blockchain and obviously AI.

So the reason why I'm here is that IEEE, as a technical community, started quite early on, I would say, with this effort of recognizing the responsibility of the technical community for the impact of the technology it is creating.  That was in 2015, with an initiative called the global initiative on autonomous and intelligent systems.  Since then this initiative has created quite a lot of outcomes, maybe the most important one being the report on ethically aligned design, where engineers together with different stakeholders from civil society tried to identify the issues around these autonomous and intelligent systems and also to come up with recommendations on how to address them.

Some of these recommendations are for us, the technical community, but a lot of them are also addressed to policy makers, and since then we have been engaging with a lot of international organisations.  We are part of the OECD AI group, the Council of Europe, similar initiatives of the European Union and so on.

We are also engaging in standard setting, since we are a standard setting organisation.  Probably all of you now use our most popular standard, the WiFi standard, but we are also now engaging in other kinds of standards that I'm going to talk about in a minute.

Thank you.

>> MODERATOR:  Thanks, Clara.  Our next panelist is Dr. Yohko Hatada, founder and CEO of the Evolution of Mind, Life, Society Research Institute.

>> YOHKO HATADA:  Thank you very much for inviting me to this exciting and timely event and discussion.  As Catherine introduced, I am founder, Director and CEO of EMLSRI.  Before that I was in academia, working in particular on the question of the evolution and development of life as a whole.

So I was working on a rather longer time span: 4.5 billion years of how the global ecosystem evolved.  Then, from 2010, I was a little bit involved in Japanese politics.  The opposition party came to power and I was excited at the time, and I spent two years there in Japan.  I actually experienced the Fukushima nuclear disaster, and saw how the products of democracy could easily come to a disaster moment if democracy is not founded very deep in society.

And I saw that moment happen almost overnight.  I fought there, but I left to go to Paris, and then came back to Britain and founded EMLSRI.  The integrity of society is dependent on the democratic system, and we are actually seeing a transformation on the American side.  It is very much related to this AI system issue as well.

And also there has been a gradual degradation, from Snowden to Brexit in Britain and the American election.  So I am very much looking at this technology issue in terms of the societal and democratic domain and civilizational development, the struggle towards developing a global civilization.  My background in academics was developmental neuroscience and neuropsychology, so I'm very interested in the brain: how we process information in relation to external input, how we take in information, how we create and construct our mind, how we become or develop as agents and then act back on society itself, and how we create information.

At this moment, I think, as a whole civilization, we are trying to construct the global society, because the Internet already connects the world, but we are still in the process, and hopefully we can bring it to the positive side.  We know Stephen Hawking was very worried about how technology could destroy human civilization; we could be the last generation, he was saying.  I hope we can avoid that and instead build a more constructive, positive, flourishing civilization.

And this meeting, I think, we hope we can contribute something.  Thank you very much.

>> MODERATOR:  I think folks will hear throughout the session today that this is such an important aspect when it comes to trust in AI, so thanks, Yohko.  Next we have Parminder Jeet Singh, Executive Director of IT for Change.

>> PARMINDER JEET SINGH:  Thank you for inviting me to this panel on maybe the most important question, perhaps the most important content question: how we regulate and ensure the proper behavior of what is the most powerful force today, not only economic but social, and of course the culture around that.  I come from IT for Change, which is an organisation that has been around for about 15 years and deals with the intersection of the digital with social change.

We do practical work in the field, but we have also been involved a lot with regulatory questions about what goes on on the Internet and what a just Internet is.  We have a global coalition called the Just Net Coalition, and now we have to look at the just AI aspect.  What are the concepts of justice?  We try to bring these concepts from traditional theory and the basis of practice, because justice does not change.  Equity, these concepts don't change, but they now have to be applied to a completely different arena.

So how that application takes place is the kind of question we raise and try to deal with.  For example, much before AI came, there was this element that ICTs, or technical and digital systems, can get laws and rules embedded in them.  And once you have embedded rules into a technical architecture, you never, perhaps, need to separately enforce those rules to make actors comply.

And now those rules and laws will get embedded into AI systems.  What their implications are, and how societies retain control over their democratic rule‑making processes, these are the kinds of questions we bring to AI governance.  Thank you.

>> MODERATOR:  Thanks, Parminder.  Last but not least we have Abdul‑Hakeem Ajijola, Chair of the African Union Cybersecurity Expert Group.

>> ABDUL-HAKEEM AJIJOLA:  Thank you very much for inviting the African Union.  I happen to be a stand‑in, so I can just do my best.  I will start with a disclaimer: the views that I may express are mine and not necessarily those of the African Union, so that people in the African Union won't be saying, oh, my God, what has he said.  But let me just mention a couple of things.  First of all, there is an African proverb that says tomorrow belongs to those who prepare today.

And interestingly, many of us ‑‑ thank you.  So many of us are knowingly using AI systems in our daily lives.  For example, our smartphones with their smart assistants, like Google Assistant, Alexa, Siri or Bixby; sometimes we use our phone cameras in portrait mode, like I noticed Parminder was doing.

We also use AI systems in, you know, smart cars and drones.  We all know about Tesla cars, but maybe not as many of us know about robotics in South Africa that is helping farmers with drones and data powered by AI.  Even in Kenya, the wildlife service is using drones with AI to catch wildlife poachers.  There is a company in Rwanda called Zipline, which is using AI and drones to deliver blood to remote locations.  Indeed, in my country, AI‑supported drones are used in Nollywood to make movies.

We also see the use of AI extensively across the continent in social media feeds.  We are beginning to see media streaming services.  We have our Spotify and Netflix and YouTube, Africa Magic, Euro TV.  They are all beginning to rely on these things.

A lot of Africans are getting into online video gaming.  In fact, I have been asked to be a patron of the e‑sports association, which is getting registered.  Certainly the online ads we see on our social networks are all driven by AI.  Some of us across Africa use navigation and travel tools such as Google or Apple Maps, for calling an Uber or booking a flight ticket.  Certainly African banking and finance are using AI‑driven mechanisms for customer service, fraud protection, investment and more.

Some of us across Africa may be fortunate enough to have smart home devices.  It's a growing area, and many devices, including smart electricity meters, are somehow tied into AI, and certainly the areas of security and surveillance: object recognition, facial recognition, including the door openers that we are beginning to see across the continent.

Sadly, AI, especially in the developed world, is also being used to develop killing machines and new machines of war.  Frankly, almost all advanced nations are working on some kind of autonomous systems, from smart missiles and drones to tanks, guns, and even a new generation of smart bullets that can actually go around corners.

So, you know, we must ask ourselves who bears responsibility for the mistakes of AI developed in one part of the world but applied in another place that may have a very different environment, and I mean different environment in the broad sense: social, economic, not just physical.  It is interesting to note that only about 28 out of 157 countries of the UN are currently seeking to prohibit fully autonomous weapons under the Convention on Certain Conventional Weapons, and sadly, notably, countries like the United States, Russia, Israel, Australia and South Korea are not willing to support such negotiations for a legally binding set of instruments that would ensure meaningful human control over the critical functions of these weapon systems.

So I really ask myself, and I hope we will touch on it as we go through: where does Africa feature in some of these discussions, at least in these conversations?  Where do we feature in moving some of these things positively and constructively forward?  Because at the end of the day, once these systems are in place, all Africans, the underserved, the unserved, the unborn, are going to have to live with the precedents, the results, and the effects, positive or otherwise, of Artificial Intelligence‑driven systems.

Thank you.

>> MODERATOR:  Thanks, Abdul‑Hakeem Ajijola.  You had warned us that your connection is not great where you are, so we do appreciate you joining us.  We lost video, but at least we can still hear you, so thanks so much.


>> MODERATOR:  As the audience has now heard, we have a great panel, so please feel free to send in questions.  Like I said, we will start with a moderated discussion, but I'm happy to address any audience questions as well.

So I wanted to start out with this: trust and trustworthiness are so fundamental to a functioning society, including the adoption of innovative tools and services such as those for which AI is being used.  So from your perspectives, what are the key attributes of trusted AI systems and outcomes?  Parminder, why don't we start with you?

>> PARMINDER JEET SINGH:  Okay.  So it is twofold.  There are a few attributes of what we call AI systems which we have to keep in mind.  One is that they are ubiquitous, and if they are not yet, they will be.  Getting an appointment with your barber will have an AI element in it; as Abdul was saying, weapons will have AI.  So it's a kind of systemic brain which is running all aspects of our life, and all aspects means all aspects.

There will be Artificial Intelligence in home equipment, everywhere.  So one attribute is that it's everywhere.  Second, it is extremely powerful.  It is easy to see that in any system, it's the brain, the controlling and coordinating part, which is most powerful, and AI will be the brain of weapon systems, of your organisation and all.  So it is most powerful and ubiquitous.  We have not seen a thing like this before.  So this is the first part.

And, therefore, when we look at its accountability or trustworthiness, we need to be able to look at many levels.  It's at the level of an individual, at the level of a community, at the level of a national system.  Within organisations that produce AI, there is the ethical behavior of the person, the worker; there is that of the organisation; there are industrywide self‑regulation rules; and then there are the larger social, regulatory, governance and political levels.

So our governance and ethical systems apply at all of these levels, all across.  We can focus on one or another, and that will need to be done: organizational ethics, community ethics, politics dealing with powerful systems.  They need to be seen separately, but they also need to be synced.

So the systems need to have enough governance and ethical mechanisms at all levels, from personal ethics right up to the political system.  When I talk today, and this is the work IT for Change does, I will largely focus on the Government systems, the political and regulatory level, and the system design level, which comes in the form of regulation.

And I think two principles are key.  The first is the political aspect of it.  These systems are so strong, and they can control your life to such an extent, that it's the trust but verify principle.  People may be good, organisations may be good, but this is the kind of thing where I need to know who governs the system, who owns the system, and what kind of actual control I have over them through somebody I may trust: whoever represents me, a representative of the democratic system and of the organizational representative system.

So those political systems become very important, and I will come back to this later, because while governing these powerful systems we will be making tradeoffs between efficiency and their trustworthiness, accountability and explainability.  The tradeoff is going to stare us in the face again and again.  We can say we are willing to give up a certain degree of efficiency and make the system more complex, but it should be explainable.  Political tradeoffs of this kind are done at a political level.

We get together as a community, we have democratic systems, but first of all we need to understand that it is at a democratic and political level that the largest decisions will be taken.  It will not be enough to trust some people's goodness.  I will later discuss the OpenAI organisation, which did start in a certain manner but has faced criticism, because you can't really just go by a set of do‑gooders, and I myself am from an NGO and a do‑gooder, so I am not running them down.  But people need to be able to trust people, and whom you trust is generally the people who represent you.

The second part of it is the basic system design which should be behind how AI develops after the first level.  I think we are in the first phase, a demo phase.  We have seen what AI is, what its power is, how it can govern systems, and in a few years we will have seen it all.  But before things go out of hand and systems become fixed by default, as code is law and architecture is policy, we need to intervene to give some direction about the systemic manner in which AI systems are developed.

For me the example of financial systems comes to mind.  You know, finance is something which is very, very powerful, and there is extreme accountability.  That is why in our day‑to‑day life, organizational financial systems are extremely complicated.  Everything has to produce a receipt, everything has a double entry, you know.  Everything is copied over a few times.

Why?  It could have been made much, much more efficient: people just go and do something, there is a transaction, somebody gives money, the other guy takes money and is richer.  But things are copied many times over.  I think that's a good example: AI systems should have these kinds of copying and record‑keeping mechanisms systematically integrated across all systems, a generic design which would be fine‑tuned to specific contexts.  So one principle of political regulation and another principle of architectural regulation.  At this point I will stop.

>> MODERATOR:  Yohko, your thoughts on some of the key attributes.

>> YOHKO HATADA:  When I saw this title, about how we can gain confidence and trust in the system, I wanted to slightly change the question about this relationship between confidence and trust.  First of all, I'm trying to speak from the regulatory aspect, even politically, about how we can systematically try to consolidate and build it up as a function.  And that is really important.  And in the end, the policy maker or leader, each individual, and citizens also would have to participate.

It's not that some are victims and some are governors.  It's not.  Actually everybody has to create this, because we don't have it yet, I think.  And when I think about this relation between confidence and trust, first of all we have to understand who we are.  Quite often we are asking who we are; in the end, using this system, we have to answer this question.

But I would like to ask that question first: who are we?  Unless we actually understand the self, we respond to the system wrongly, and sometimes someone can actually abuse the system.  So we have to have self‑understanding.  And if you understand deeply, then of course you have to understand the world, and when you understand the world, then you can have the capacity for confidence within yourself.

And then with confidence, many kinds are fake, and which one to trust, which one is not trustworthy, that kind of judgment is made moment to moment by the individual.  And for the policy maker, how should we make a system?  Should we pursue integration when we are not actually promoting everybody's discussion?  All of those kinds of questions.  The confidence, I think, only has meaning when we really understand it in relation to the self.  Without proper understanding, even if you have confidence, it's just confidence, and it doesn't make any positive development.

So you have to examine how much you are understanding in your judgment of the self: is the confidence right or not?  When we interact, we see the consequences.  And sometimes overconfidence is corrected by the response from what has happened in the world.

So with the right level of confidence, if your understanding is at the level of what needs to be done, then regardless of the environment, confidence is built into the system also.  Then we know where to tackle.  Then we can become users of a trustworthy system.  But trust is, first of all, self‑understanding; built on self‑understanding we have confidence, and with proper confidence, we make a trustworthy system.

So basically I think about what is the right kind of question to ask and what kind of system to build.  Because if society itself is not yet a proper democratic society, then when we are trying to build a democratic system, the judgments from different sectors, from multi‑stakeholders, are very difficult to bring to the same conclusion, and then it is difficult to go ahead.  For example, in the world, with different regimes, we are trying to make some kind of development together; how can we make it work?

Ideally, we have to understand what kind of future society we are trying to make.  Because if you have to compromise to, let's say, make some kind of a deal, then what are we losing by that?  And if there is something we cannot do at all, then how can we make international, global kinds of rules?

Actually we are now in this corona crisis, and the corona crisis is giving us some kind of window.  Talking about finance, many corporations are struggling and Government is really bringing in a lot of money.  Before, even taxation was so difficult to implement, but now suddenly something halfway transformational is required, otherwise we cannot tackle the corona pandemic, because it's global.

>> MODERATOR:  Yes.  Great.

>> YOHKO HATADA:  I think all together, it's the confidence we have to build from our own self‑understanding.

>> MODERATOR:  Yes.  Great points.  Thanks so much, Yohko.  Abdul‑Hakeem Ajijola, your thoughts?

>> ABDUL-HAKEEM AJIJOLA:  I think there was a question or a comment by somebody, forgive me for butchering the name, but the key takeaway from their observation, which I will reiterate here, is the need for a high level of transparency.  What you find is that in too much of the AI development value chain there is a lot of opacity, and so it's back to the issues that Parminder mentioned about governance.

Who really determines, at the end of the day, what the boundaries should be?  Bearing in mind that even though Parminder indicated Government, and yes, Government must have a strong say, what you usually find is that Governments, and especially legislation, are too inflexible to keep up with rapidly evolving sectors such as Artificial Intelligence and the like.

So we must have mechanisms that are a bit more responsive, and probably we could look at situations where Governments devolve some of their powers to regulators, to the technology community, probably to some kind of multi‑stakeholder platform, so that those boundaries can be adjusted as the technology evolves.  Thank you.

>> MODERATOR:  Thanks, Abdul‑Hakeem Ajijola.  On to the next question, and if I can ask the panelists: there is so much to talk about in these areas, I know we could have a half‑day session on this, but let's try to keep responses to about a minute.  The first question about key attributes clearly covers all sorts of realms, so we did cover lots of territory there, which was great.  The remaining questions get a little more granular, so if people can give a minute or so of their thoughts, that would be great.

One thing we wanted to talk about, and we touched on it a little bit: lack of accountability is cited as a cause of irresponsible behavior, with clear allocation of accountability in organisations seen as providing the backbone for high quality outcomes.  So who do you think is accountable when something goes wrong in an AI system?  Ansgar, why don't we turn to you.

>> ANSGAR KOENE:  Yes, as you pointed out, there is a strong connection between the question of accountability and the key attributes around trust.  We see accountability frequently mentioned among the high level principles for AI.  For instance, if we are thinking about how to build trustworthy systems within an organisation, it is important to have clear lines of accountability: to know who within the team is taking responsibility and will be held responsible for the levels of performance, be it the accuracy, the reliability or the robustness of the system, but also for understanding and anticipating the way in which the system is going to impact different kinds of stakeholders.

So accountability is a very important aspect.  Accountability, of course, is always a key aspect of trust, be it in AI or other kinds of technologies.  The new challenge that arises when we are using AI, especially systems that use machine learning, is that it can blur the lines as to who really is responsible for the ultimate behavior of the system.

Can the party building the tool hold complete responsibility for performance if that performance is going to be impacted by the kinds of data the system experiences throughout its time of use?  There is also the widespread practice of using third party tools and libraries to build these systems, which can make it more difficult to understand exactly which component has contributed to something.

But in order to be able to establish clear trust in the system, it is important that we establish clear lines of accountability.  That may include requirements around, for instance, the use of standards, or contractual requirements that if you are going to use a third party system, the third party has to provide certain levels of continued monitoring.  So accountability is a key element, and we will need clear guidelines around how to allocate and clearly communicate who is taking responsibility for which component of the system's performance.

>> MODERATOR:  Thanks, Ansgar.  And Clara, we haven't heard from you yet; what are your thoughts on accountability?

>> CLARA NEPPEL:  Thank you.  I agree with Ansgar that accountability is closely linked to responsibility, and often also to liability.  Liability is important, of course, but in order to arrive at trustworthy systems, which are the basis for having trust in these systems, it would be important to take the discussion out of this corner of liability and make it a positive challenge, in the sense that if you are engaging in responsible AI development, there is also an incentive, a social and maybe also an economic incentive, to embark on this journey.

And of course, traditionally, when we talk about products and services, accountability lies with the entity that puts the product or service on the market.  But as Ansgar mentioned, this is more difficult with self‑learning systems, because they might change their output over time, and, I think this is also important, they are also very specific to the context in which they are being used.

For instance, you cannot use a companion robot which was designed for elderly people for child care.  So the question is who is liable, not only for putting it on the market but also for how the systems are used.  I know I don't have too much time, but I think there are two things we need to take into account, namely the line of responsibility inside the organisation, which was mentioned, and also what we need to do to create an ecosystem of trust, because we now have systems that interact with each other and are integrated.  On the first, within the organisation, I would just like to echo the position of the engineer.  I am myself a computer scientist, and we had several sessions with our members to see where they see their responsibilities as the developers of these systems.

And I think that one outcome is that although engineers clearly see an individual responsibility for ethical dilemmas, it is also clear that they sit within a hierarchy of power in an organisation.  So there is always a tradeoff, and part of the answer is creating a no‑blame culture, so that engineers are encouraged to raise ethical dilemmas without facing negative consequences.

It's important to have, as you just mentioned, the structures and hierarchy so that they know with whom to discuss ethical dilemmas if they encounter them.  That is what I wanted to say about issues within organisations.  But as regards the ecosystem of trust that we are starting to build, we now have object recognition systems which are embedded into transport and used in manufacturing, or public services that are using speech generation, GPT‑3 for instance, which was mentioned before.

So here, citizens will require, and already require, these public services to be fair and transparent, and it will be important for organisations to adopt measures that can inform trust.  One way to do this is to make your internal measures public.  If you have only internal measures, like an ethics code, that may not be sufficient, because then others don't know what you actually mean by being transparent.

So as was mentioned, ways to do this would be through audit, certification and standards.  Thank you.

>> MODERATOR:  Great.  Thanks.

Abdul‑Hakeem Ajijola, you addressed this lack‑of‑accountability question previously, but is there anything you would add?

>> ABDUL-HAKEEM AJIJOLA:  Yes.  My understanding is that the major cause of AI mistakes is actually bias.  So the quick answer is that it depends.  For example, if you have flaws in the models and algorithms arising from the limitations of the developers, then arguably you would hold the developers accountable.

However, if the data used to train the AI systems is either flawed or partial, due to a lack of, for example, African developers and African‑oriented data, then there is an onus on the developers and the AI trainers.  But if you have faults arising from misapplication, from endeavoring to adapt AI systems to processes they were not designed for, then arguably you could lay the blame at the feet of the original equipment manufacturers and particularly the vendors.

And, of course, there are errors that arise from misuse of these AI platforms by the end user, and in that case you would lay it at the feet of the end users.  Thank you.

>> MODERATOR:  Thanks.  Good point.  I want to go to some audience questions now.

The first one I think, Ansgar, it would be good for you to address.  The question is: isn't it helpful to assess and determine human bias against AI systems while trying to implement those systems in a healthy way?  In different geographies, different requirements and principles might be adopted, reflecting each society's approach to those AI‑supported systems as a whole.  Your thoughts?

>> ANSGAR KOENE:  I think this is a very good question because it touches on a number of aspects of bias which are the reasons why it is such a difficult issue to ultimately resolve.  In a sense, Abdul‑Hakeem touched on this: there is the question of cultural differences, and the need for developers to understand the cultural context in which the system is being used.

I think Yohko also already touched on an important aspect regarding self‑understanding amongst the people who are developing the system.  So within, for instance, the algorithmic bias consideration standard we are developing within IEEE, a guiding concept of our work is the need for the people who are developing and deploying the systems to really consciously engage with the kinds of decisions that are being made when developing the system.

Why did we choose this particular optimization target?  Why did we select this particular data set to train and test the system?  And not just internally, why did we do this, but do we have a clear set of justifications that we would be able to present to the stakeholders who will be impacted by the system, and would we in fact be willing to engage in a dialogue to explain why we consider these to be the appropriate justifications for the choices that went into the system?

There is also a potential flip side to the way the question was formulated, around bias within people and the way in which they engage with AI systems.  That is an interesting question which particularly gets at, for instance, the potential loss of benefits from not using an AI system, simply because of a lack of trust in it.

So, for instance, automated medical diagnostic systems could potentially be better than the available medical personnel at certain radiological image assessments and those kinds of things.  I think that gets back to one of the things Yohko was mentioning around confidence: the need for people to be able to gain sufficient confidence in how these systems operate in order to make that kind of judgment as to whether we should be trusting these systems.  So that's where training, not just of the technical people involved in the system but of the wider population, becomes an important aspect.

>> MODERATOR:  Great.  Thanks, Ansgar.  We have another audience question that, Parminder, I think it would be great for you to address.  The question is: how do neutrals navigate trust in AI from the perspective of the China‑U.S. digital cold war?  Really, who on which side of the divide do we trust between these juggernauts?

>> PARMINDER JEET SINGH:  That's a high‑level question, and it goes back to what I was saying earlier about the relativity of trust.  With all of the flaws of our democracies and nation‑based global systems, we tend to trust the people we have put in power, and we do not trust colonial masters, which is what a bipolar world in which China and the U.S. control much of the world's intelligence would amount to.

And I go back to the example I was talking about earlier: even your appointment with your barber will be mediated through AI located in one of these two centers, and they become like the brains of everything which happens in the rest of the world.  So this is actually a challenge, and we need to go back to the principle of subsidiarity in discourses of democracy and government: something which can be decided at a lower level in an effective manner should not be decided at an upper level.

So the intelligence systems of our world should be made in a manner where I have a handle over the things which affect me, and generally people have a handle over things within the circles they can influence.  Intelligence has this tendency to hyper‑concentrate, and that is why we have a bipolar world.  This hyper‑concentration, with its technical and economic propensity, has to be countered by human effort to deconcentrate decision making.  That is the kind of system design we have to be consciously thinking about.

>> MODERATOR:  Thanks, Parminder.  I think we have time for one more audience question, and there was a specific question for Clara.  The audience member asked if you could share more about the interaction between technical engineers and ethical and moral frameworks during the R&D or assessment phase, and how that relationship can be made more solid.

>> CLARA NEPPEL:  Thank you for that question.  Coming back to how to establish trust in this ecosystem, I agree that one important thing is the design.  I will give you the example of the contact tracing apps that we are all talking about, and that we so desperately need in this second wave of the Coronavirus.  Still, they are not used much, and the question is why.

Definitely it's a question of trust, of individual trust, and in my view stakeholders were not sufficiently involved.  The designers took into account one parameter, namely privacy, and that was very much respected.

However, if you think about the contact tracing apps, we would probably also need them for public health, so that authorities can identify clusters and prevent lockdowns.  That is not possible with these contact tracing apps, first because they were not widely adopted, and second because it would not even be possible to share this data with healthcare officials.

So my point is, if you take into account the end users and also the larger stakeholder groups, then you are able to make these tradeoffs between privacy and freedom, freedom of movement versus lockdown, and then you can make the technical decisions about how best to design a system which will bring the most benefit, let's say the best tradeoff, for the stakeholders.

So the technical community, through the standards that we are now developing but also through certification and so on, has to interact with the end users and the stakeholders, and take that into account in the design and also in the deployment of the systems.  I think that is the key.

>> MODERATOR:  Thanks, Clara.  We are down to the last few minutes, so I wanted to give all of the panelists an opportunity for a lightning round: what you think is the critical point to drive future action or dialogue, or any other key point you wanted to make.  We have about 30 seconds per panelist, and we will go in the reverse order of introduction.  Abdul‑Hakeem Ajijola, do you want to go first?

>> ABDUL-HAKEEM AJIJOLA:  Yes.  Okay.  Very quickly, from an African perspective, we must thoughtfully consider articulating our own Artificial Intelligence philosophy, and I will give you a couple of examples.  The U.S. and Western European AI philosophy on the battlefield, for example, is to eliminate false positives, i.e., you let some people, even some bad actors, through while avoiding killing innocents.  Whereas arguably some of the eastern powers, China and Russia for example, have a philosophy of eliminating false negatives, which means you just don't let anybody through, innocent or otherwise.

So it's important for us in Africa to develop our own.  And just to reinforce: trust is the key.  Without trust, no one will use the platform, and we need a series of confidence‑building measures: some level of predictability, the ability to seek clarification, creating understanding, and the like.  And back to the African situation, I must emphasize that Africa and the rest of the world, when looking at these AI and related issues, must factor in the underserved and the unborn.

Like I said, because they must live with the precedents that we set today.  And frankly, they will not know a non‑AI world, so once it is done, it is done from here on forward.  Thank you.

>> MODERATOR:  Great.  Thanks, Abdul‑Hakeem.  Parminder.

>> PARMINDER JEET SINGH:  So very quickly, I think the central concept is trust, and it's not the first time civilization or human systems have faced this problem.  Trust is the basic problem of collective living, starting with the social contract, which is itself a trust mechanism.

So I think trust directly translates into social systems called institutions.  Institutions are what give us predictable, rule‑based behaviors, and this AI‑based world is going to be a very new world, so we need to develop corresponding rule‑based institutions.  Institutions are what trust really translates into as social structures, and institutions are products not of technologists but of social scientists, politics and democracy, and I think the two should be matched.  Thank you.

>> MODERATOR:  I think we have time for one or two sentences from the remaining three, so Yohko, what would you say is key?

>> YOHKO HATADA:  I think we are now at the point where we must build a foundation for a long‑lasting system.  As Parminder said, this is the most important thing for all of us, citizens, industries, organisations, internationally.  I think we really have to build an international governing system.

>> MODERATOR:  Great.  Thanks.  Clara?

>> CLARA NEPPEL:  Yes.  I think it is important to have open, consensus‑based processes involving different stakeholders, in order to define the principles and the tradeoffs that we need, but also how to implement and validate them.  That works towards what I mentioned before: having this informed trust.  Thank you.

>> MODERATOR:  Ansgar, you get the last word.

>> ANSGAR KOENE:  I would like to reiterate the point that we need engagement by all stakeholders, and by that I mean not just the different regions of the world, though it is very important to include both Global North and Global South, but also making sure that we involve not just the policy makers and the technical community, but the communities who are going to be impacted by this use, the various societal groups.  So really a broad multistakeholder approach to this.  Thank you.

>> MODERATOR:  Thank you, Ansgar, and I want to thank all the panelists for a great discussion.  I want to thank the Internet Governance Forum for hosting this event, and all of you for participating.  Have a great day, everyone.
