IGF 2018 - Day 2 - Salle IV - WS170 Accountability for Human Rights: Mitigate Unfair Bias in AI

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> BERNARD SHEN:  My apologies.  We're going to give it a few more minutes because I hear that the entry security check has a bit of a long line and we're still missing one of our five speakers.  I can see the audience is still a little bit sparse.  I assume people are caught in line to enter.  We'll delay at least till our final speaker arrives so that we have the full panel of five speakers.  Thank you.

>> BERNARD SHEN:  Welcome to Accountability for Human Rights: Mitigate Unfair Bias in AI.  I am Bernard Shen from Microsoft.  My work includes the human rights dimensions of Artificial Intelligence.  I will serve as the moderator to facilitate the discussion.

The core of Microsoft's DNA is to provide tools to every person and organization around the world so that everyone is empowered to achieve more.

Yes, we use AI technology ourselves in our own services, but increasingly, we are also providing AI to customers so that they can infuse and transform their own operations with AI.

Through that experience, we learned that leveraging the best of human intelligence and artificial intelligence can bring enormous benefits to society.  But whether we're using artificial intelligence or human intelligence, there is always the risk of unfair bias, even though the bias may be unintended.

We also understand that working on these issues within the company is essential, but it is not enough.  AI is being used in so many fields of human endeavor, and both the benefits and issues in each field are unique and contextual.  There is a lot to learn and a lot to figure out, and we commit ourselves to reach out and engage with customers, governments, Civil Society, academia and other companies to learn together and make progress together.  This session is an example of that.

We have a panel of speakers to discuss and look at the issues from a broad range of perspectives and expertise.  Let me introduce them.

To my left, Wafa Ben-Hassine.  To her left is Scott Campbell, senior human rights officer at the United Nations human rights office, where he leads their work on technology and human rights in the Silicon Valley area.

Further to his left is Sana Khareghani, from DCMS and the Department for Business, Energy and Industrial Strategy in Her Majesty's Government.  And to her left is David Reichel of the FRA.  David is responsible for managing the work on Artificial Intelligence, big data and fundamental rights.  And you notice a missing chair.  My colleague from Microsoft Research, Montreal has not been able to arrive.  I heard the line through security is half a block long.  Hopefully she's just caught up in that and not some other emergency.

So let me briefly introduce her, and hopefully she'll be able to join us soon.  My colleague is a data scientist and research manager at Microsoft Research Montreal.  Her research focuses on language-based AI technologies such as conversational agents.

I also want to introduce, to my right, my colleague Camille.  She's going to manage and facilitate the online attendees in the session.

So please join me in welcoming our panelists.

[APPLAUSE]

In order to get to an interactive conversation as quickly as possible, we talked in advance and decided that we're going to forego some formalities.  We're not going to have formal opening statements by each panelist.  We will launch right into the discussion.  We're starting a little late because of some technical difficulty trying to make the online participation work, so we had a 10 to 15 minute late start.  Ten minutes after this session, there is another session, so I will take a little of that time to make up for the lost time.

The session will have two parts, and there will be opportunity for the audience to participate in the room and online.  When we get to that, we're going to alternate.  Hopefully there will be questions in the room and online, and we'll alternate between in-room and online participants.

For part 1, first, we're going to have a brief explanation of what we mean by AI.  Layla was going to cover that; maybe I can substitute.  The point is to provide a common understanding and context for the discussion of specific issues to follow.

Second, we're going to discuss the benefits and opportunities of AI to advance human rights and the Sustainable Development Goals.  Third, we're going to explore human rights questions and concerns on the responsible use of AI.

Then part 2 has two parts.  The first is to discuss government laws, regulations, policies and actions that would promote innovation and responsible use of AI, particularly to address concerns of unfair bias and the opportunities to advance human rights and SDGs, for example, gender equality and reduced inequalities.

Second and last, we will explore ways and opportunities for different stakeholders to collaborate, share learnings and good practices, and find out how we can work together beyond IGF to make progress together.

With that, we will launch into part 1.  Layla, if she were here, was going to give a brief explanation of AI.  I don't know the background of audience members, so I will keep it very brief because I'm not a data scientist.  I will try to live up to what Layla would have been able to do in a couple of minutes.

Basically, AI can mean different things, but we generally think of it as using a lot of data, feeding that through mathematical models to achieve machine learning.  There are many different types of mathematical and statistical models; based on the purpose you're trying to achieve, you feed relevant data through them, and that can provide you with recommendations and predictions.  So as you're trying to do whatever you're trying to do, new data comes in, the model puts out a suggestion or a prediction, and you can take that into account and make your final decisions as humans.  That's the best I can do as a non-data scientist.  If she were here, she would be able to do much better than I can.
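As a concrete sketch of the loop Bernard describes, here is a tiny logistic-regression model trained by gradient descent on invented loan data: historical examples go in, and the trained model then scores a new case.  All feature names and numbers are hypothetical, and the model only suggests; a human makes the final call.

```python
import math

def train(examples, labels, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression model with gradient descent."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of repayment
            err = p - y                      # gradient of the log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    """Return the model's confidence that the outcome is 1 (repaid)."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical past applicants: (income in tens of thousands, debt ratio)
past_applicants = [(3.0, 0.9), (8.0, 0.1), (2.0, 0.8), (9.0, 0.2)]
repaid = [0, 1, 0, 1]

model = train(past_applicants, repaid)
score = predict(model, (7.5, 0.2))   # new applicant: the model suggests, a human decides
```

The point of the sketch is only the division of labor: the model emits a score from patterns in past data, and the human decision-maker takes that score into account.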

Let's go to the policies and implications of all this.  The first question I will start with is the more constructive, positive picture.  What are the benefits and the opportunities for good that we can use AI for?  With that, just to kick off the conversation, I am going to turn it over to Sana.

>> SANA KHAREGHANI:  There we go.  Thank you.  Thank you, Bernard.  There are quite a few benefits to AI, not least of which is increasing productivity in the things that we do.  There are two big areas that we are looking at from the UK government.  One major one is productivity in the public sector: how do we use automation, machine learning and other algorithms to make ourselves more productive.

The second is to look broadly across the sectors at how we use Artificial Intelligence technologies to make public services better for society.

Finally, we're looking at some of the technologies, how they are being used within the sectors, and the effects that they have.  This is where, when you start talking about decision-making algorithms, targeting and other capabilities that these technologies have, you start to run into some of the questions we'll be getting into later: the double-edged nature of the benefits of AI, where on one side it's a wonderful thing to have, and on the other side it's really important to think about all the consequences that might fall out of something quite beneficial.  For example, targeting and profiling done online is quite lovely in a way: it makes your online experience much better, allows you to see ads that are relevant to you, and makes sure you don't waste your time looking at a bunch of stuff that isn't relevant.  Similarly, automated decision making helps in a lot of ways to remove biases that we inherently have as humans and introduce into systems.  We have seen lots of evidence of this, where certain people may make different types of decisions before lunch versus after lunch; decision making by algorithm takes out some of the biases we have ourselves and allows us to be more productive.  But these are examples of areas where we need to be very careful about what the parameters are and how we make sure the benefits are gained equally across society, rather than seeing unfair bias or other things play a role.

>> BERNARD SHEN:  Thank you, Sana.  Scott, I know your office is focused on advancing human rights.  What are some of the current works or future opportunities where the office may be thinking AI could be used and applied to help with that mission?

>> SCOTT CAMPBELL:  Thanks, Bernard.  There are seemingly endless opportunities to use artificial intelligence, really on a wide range of fronts that encompass the full spectrum of human rights.  So it's a very long list of opportunities, actually, that we struggle with in terms of prioritizing and determining where we should focus our energies.  In parallel with that, there are seemingly endless opportunities for how AI can help reach the SDGs.  One quick example, looking at civil and political rights, which is an area perhaps less looked at in terms of how human rights can be enhanced through the use of Artificial Intelligence: if we look at freedom of expression and, building on what Sana was saying, access to information, there is the possibility both to reach specific kinds of information and to reach specific kinds of people, constituencies, industries or stakeholders that you may want to reach as a human rights advocate.  Using Artificial Intelligence, you can greatly increase your impact in promoting a particular cause or human rights issue.

Similarly, access to information empowers people very broadly, linking that with the SDGs.  There are many examples of how Artificial Intelligence can be used, again a very long list.  We'll focus primarily on SDG 5, gender equality, and SDG 10, reduced inequalities.  AI is, as Sana also said, a double-edged sword here, but AI can be used to reduce inequalities and to detect discrimination on a number of grounds including gender, but also race and a wide range of factors.  Maybe finally, just to note how AI, linked with the SDGs, can be leveraged to provide such a broad range of social and economic opportunities, helping realize the human rights that are intertwined with all of the SDGs, including social and economic rights.  I'll leave it at that for now.

>> BERNARD SHEN:  Thank you, Scott.  As we progress, any panelist should feel free to chime in with comments; short of that, I will move through all of you.  Next I want to talk to you, Wafa.  I know that Civil Society human rights organizations have rightfully had a lot of questions and concerns with the use of AI.  But in terms of opportunities to use it for positive purposes to help advance human rights, what's the Civil Society perspective, or particularly your perspective, on how we could use it?  We're going to get to the risks and problems soon enough in the conversation, but what is your perspective on how we can use it to propel and advance human rights?

>> WAFA BEN-HASSINE:  I would echo what Scott just said: AI or machine learning facilitates, primarily at least from my perspective, the freedom of expression and the right to access information.  These kinds of processes will help citizens all around the world be more active in their societies and get the documents or the information they need in order to be active participants as citizens and residents of the places they live in.  And again, the potential of these kinds of machine learning processes is endless; you can use them for an endless horizon of types of issues.  But I think whatever it is used for, and we'll get into this later, it still needs to respect very basic principles, not just of human rights, but also of holding private actors accountable and holding states to a higher standard as well.

>> BERNARD SHEN:  Thank you, Wafa.  David, you're with the FRA and you look at these issues extensively.  From your perspective, what are some of the positive, constructive ways we can apply AI to help advance human rights?

>> DAVID REICHEL:  Thank you.  For those who don't know the agency, we're one of the over 40 specialized EU agencies carrying out specific tasks.  Our mandate is to collect comparable data and provide expertise on fundamental rights issues in the Member States.  Our focus is the EU, and the Charter of Fundamental Rights is more or less the basic document we work on.  It has the same legal value as the EU Treaties, and that's our starting point.

As many others, of course, we realize the importance of technological developments in the area of AI and big data, and whatever other terms are being used to describe these new technological developments.  Since the impact on all areas of life, in one way or the other, was mentioned before, we also see the impact on all fundamental rights, and you can read that as human rights.  For those of you who don't know, fundamental rights go beyond human rights in some respects, but they mirror all the core human rights as well.  And all of them are impacted one way or the other, in positive and in negative ways.  It is impacting the fundamental human rights framework in a broad way.

In our work, we started looking into the positive and negative impacts of new technologies.  We try, of course, to gather some evidence in Europe.  A lot of the discussions are in the U.S. at the moment, and we try to see what is going on in Europe.  Generally, two years ago 25% of large companies in Europe said they do some sort of big data analysis.  You can view this from two sides: this is quite a lot, but you can also say it is only every fourth company.  When you look at all companies, it is 10% in Europe.  So it is interesting to see how these new technologies are picked up, and then, when people use new technologies, whether they are aware of potential human rights implications.  That's what we're going to look at in our project, which is currently kicking off.

Speaking about the positives, I just agree with what was said previously.  New technologies can be used to detect human rights violations.  We have all this new data and these new technologies, which make it easier to structure problems and issues.  In this way, of course, we should embrace it, apart from the technological innovation itself.

And I think we'll discuss discrimination more.  Discrimination is the right that is picked up most often, although other rights are impacted as well.  But detecting unfair treatment and discrimination, and mitigating and reducing it, is one of the advantages of technological developments as well.

>> BERNARD SHEN:  Thank you, David.  So you and others talk about detecting discrimination.  I wonder if we can unpack that a little bit.  AI technology is starting to go into many, many fields, not just the tech companies; a lot of industries are trying to explore using it.  So a couple of examples I'm thinking about: employers are starting to use it to help make hiring decisions, which in the past was usually human decision making.  Honestly, it is hard to know whether human hiring decisions are really fair or biased.  Or take applying for a loan from a bank: how do they make those decisions?  If no AIs are involved, how do they make those decisions?  I invite you to comment.  Specifically, how would we approach these historically human decision-making mechanisms and find ways to see if AI technology can help?  What does that mean?  How do we go about doing that in the fields where decisions are made by people?  Anyone?

>> SANA KHAREGHANI:  It's interesting, because I think it's important first to distinguish that an AI system is not thinking as a human.  It is an algorithm running on data sets, and we need to remember that it is us who are feeding the data sets, and the algorithm is learning from those data sets.  For all sorts of good reasons, it can still come out with biased solutions, and the importance is in us finding what those are and why they happen.  Very recently, within the last few weeks, Amazon found that its hiring algorithm was very biased.  They had trained the algorithm on data sets from their own people within Amazon over the last 10 years, and those just happened to be mostly men.  And so the algorithm was unfairly biased against women in this area.  It is literally making decisions based on a data set that was fed into it by the company.  The company then realized what the problem was and have fixed it.  But I think it's also really important to realize that we ourselves can be very biased in what we see as the right choices.  So in understanding and identifying the shortcomings, I think it gives us the ability to move beyond our own biases and allows a decision supplementary to our own.  I don't think at the moment we're at a place where it can make a decision without any input, but as a supplementary decision point to our own, it gives us the ability to move beyond our own biases with some help.
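A minimal sketch of how that failure mode arises, with invented résumé data: the "model" below simply scores a candidate by how much they resemble past hires, so a history of mostly male hires makes it prefer an otherwise identical male candidate.  This illustrates the mechanism only, not Amazon's actual system.

```python
# Hypothetical hiring history: mostly men, by construction.
past_hires = [
    {"gender": "m", "skill": "python"},
    {"gender": "m", "skill": "java"},
    {"gender": "m", "skill": "python"},
    {"gender": "f", "skill": "python"},
]

def similarity_score(candidate):
    """Average share of past hires matching each attribute of the candidate."""
    shares = []
    for key, value in candidate.items():
        matching = sum(1 for h in past_hires if h[key] == value)
        shares.append(matching / len(past_hires))
    return sum(shares) / len(shares)

# Two candidates identical except for gender...
alice = {"gender": "f", "skill": "python"}
bob = {"gender": "m", "skill": "python"}

# ...yet the skewed history makes the model prefer Bob.
assert similarity_score(bob) > similarity_score(alice)
```

Nothing in the scoring function mentions gender explicitly; the preference comes entirely from the composition of the training data, which is exactly the point of the Amazon example.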

>> BERNARD SHEN:  Thank you, Sana.  I have a follow-up.  I love that Amazon example because I read about it recently, and I have no intent to pick on a fellow American company, but it's an apt example.  I remember reading that their existing workforce is probably predominantly male, so the model was biased against women candidates.  What I read was that they were so concerned, and I don't think they were able to fix it; they were able to stop or suspend it.  It occurred to me that if that's the case, if a company's existing employees and data are predominantly male, that suggests that up till now there might be some issues in the hiring decisions, in that it lacked female employees.  So, you know, to their credit, they tried to be proactive in creating this model, and they realized it was not working and stopped using it.  But the problem remains, because most of the employees are male.  So how would you solve the problem when the pre-existing human decision making is probably biased?

>> SANA KHAREGHANI:  Agreed.  In that specific case, I think there's a more fundamental issue in getting more female students into STEM courses and through the educational system.  So there is a fundamental question there, and something for us to work on together across the world, to ensure, especially on the western side of the world, that more women are taking these classes and therefore the pool to choose from is more equal.  I think it is a symptom of the situation rather than an inherent "I am only going to hire males to do this job" attitude at fault.

>> BERNARD SHEN:  Thank you.  Wafa, you have a comment?

>> WAFA BEN-HASSINE:  Yeah.  Also to follow up on what Sana was saying, and to answer your question as well: I think the use of an AI system by a human being does not necessarily remove the standard requirements that we had in terms of responsibility and accountability in that human decision-making process.  So I think the question remains whether we're using machine learning to inform decisions or whether we're using the machine learning process to actually make the decision itself.  I think there should always be a human in the loop, especially for high-risk areas like criminal justice, getting access to healthcare, border control, et cetera.  For those sectors, I think human oversight is especially important and necessary.

>> SCOTT CAMPBELL:  I think it's a really important point that Sana makes, that essentially we shouldn't expect the machines to be better than the humans behind them in terms of their ability to produce outcomes that are not discriminatory.

On the other hand, I think we can use the machines to try to detect and identify bias and in some ways fix it, or point the humans towards how to fix it, in what is commonly known as an AI audit.  So you can use the machines to try to fix their own biases to a certain degree.  The other crucial point is the humans: it will all depend on how much gender bias, or other types of biases, the humans have when trying to recognize and fix a problem.  Diversity in the workplace was also mentioned, and the lack of gender diversity has been pointed out as a serious problem.  The STEM point is an excellent one: promoting girls' and young women's access to STEM, but also having a diverse presence, in terms of age, race, national origin and gender, in the rooms where both data sets and algorithms are being discussed.  I think that, combined with AI audits, can do quite a bit to reduce bias.

>> BERNARD SHEN:  You have a ‑‑

>> DAVID REICHEL:  I like the point that the machines are not better than the humans.  The gender discrimination in recruitment example is a good one to show that new technologies can reveal that something went wrong previously, and that it can be detected and also repaired in the future.  So machine learning or AI can perpetuate and reinforce biases, and create new ones.  I think there are different sources of how this can happen, and what was discussed before is one of the reasons: we use data that mirrors human behavior, and we use this to formalize decisions and formalize this behavior.  It is different because it's a formalization of a procedure, and it can also help to mitigate the biases.

I want to point out a few sources of error.  When looking into European companies, almost half of them say they use social media data or location data for their big data analysis.  If you look into what information people are processing, there's quite a high likelihood of having one or several protected attributes included in the data.  Here I would like to read out Article 21 of the Charter of Fundamental Rights.  It forbids discrimination on grounds of sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation.  That's an important, long list.  If you think about new and diverse data sources, it is quite likely that one or another variable is related to one of these attributes, and this is very important to discuss when machine learning or AI is used.

I would like to make one more point.  I think there are different ways discrimination can happen.  One is that we have data with discriminatory behavior mirrored or measured in it.  Another is general data quality: people might use unrepresentative data.  There are several examples.  Voice recognition, or the gender detection that was widely tested, where it was shown that it works much worse for women and especially for black women.  And this is quite a plain problem: using data that is not representative of the target group for which you want to use the system, which can lead to discrimination.
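That representativeness problem shows up directly in evaluation: overall accuracy can look acceptable while one subgroup fares much worse, which is why a fairness check should break results down per group.  The numbers below are fabricated for illustration, loosely echoing the pattern reported in published gender-classification studies.

```python
def accuracy(pairs):
    """pairs: list of (prediction, ground_truth) -> fraction correct."""
    return sum(1 for pred, truth in pairs if pred == truth) / len(pairs)

# Fabricated (prediction, ground truth) results, keyed by subgroup.
results = {
    "lighter-skinned men":  [(1, 1)] * 19 + [(0, 1)],       # 19/20 correct
    "darker-skinned women": [(1, 1)] * 13 + [(0, 1)] * 7,   # 13/20 correct
}

overall = accuracy([p for group in results.values() for p in group])
per_group = {g: accuracy(pairs) for g, pairs in results.items()}
# overall is 80%, yet the two groups are served very unequally.
```

The single "overall" number hides exactly the disparity the per-group breakdown exposes, which is the practical argument for disaggregated evaluation.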

Another important discussion is when you have good quality data, but there is still a difference according to some protected attributes.  Looking into insurance, you will find a difference when you look at car accidents: men have many more accidents than women.  Gender is a proxy for risky driving behavior.  If you don't have any information on risky driving, you will have a difference by gender.  That difference is in the data, and the data is of good quality.  We need a value-based discussion about when it is allowed to treat the groups differently.  We know that if you have the difference in the data, it is mathematically not possible to treat the groups in the same way.  So here is an important discussion and tradeoff to be made.
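David's insurance point can be reduced to plain arithmetic: if the only available signal is group membership and the groups' accident rates genuinely differ, a premium that tracks expected cost cannot also be identical across groups.  The rates and costs below are invented for illustration.

```python
# Hypothetical accident counts per 1,000 insured drivers, and claim cost.
accidents_per_1000 = {"men": 100, "women": 60}
claim_cost = 10_000

def expected_cost(group):
    """Actuarial expected payout for one member of the group."""
    return accidents_per_1000[group] * claim_cost / 1000

cost_men = expected_cost("men")       # differs from...
cost_women = expected_cost("women")   # ...this, because the base rates differ

# Cost-based pricing treats the groups differently; a single unisex premium
# (as EU law requires for gender since the Test-Achats ruling) must sit
# between the two, mispricing both groups relative to expected cost.
unisex_premium = (cost_men + cost_women) / 2   # assuming equal group sizes
```

This is the tradeoff in miniature: the differentiated premiums are statistically "accurate" but legally barred, while the unisex premium is value-driven rather than cost-driven.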

>> BERNARD SHEN:  That last part is interesting.  You pointed to the fact that there's bias and there's bias.  There's accurate bias that's not unfair, and there's unfair bias.  If men get into more car accidents, a prediction model probably does warrant a different risk assessment for a male driver applying for insurance than for a female driver.  Is that the gist of your comment?  In that case, it is not unfair because statistically it is accurate.

>> DAVID REICHEL:  For every situation, it needs to be specifically assessed whether this is illegal discrimination or not.  On that question, it was ruled that there can't be any differences in insurance premiums by gender.  So it's a very important discussion of our values compared to what we find in the data, and where we draw the lines.  It is, however, a very open and generic discussion.  In Europe, we have quite a good policy framework, but at the moment there is also an absence of case law.  We don't have many of these cases, only some litigation.  So I think this will be something for the future, to see how this will be decided by the courts and where the law fits in.

>> BERNARD SHEN:  Anyone else have an intervention on this first part?  I was about to transition.  So let us welcome Layla El Asri.

I will transition to what is evident to me: even as we talk about benefits, a lot of the comments from you really touch on the risks as well.  It occurs to me that the benefits and risks are almost two sides of a coin.  There are lots of things we can use AI for that, if used correctly, can bring great benefits.  I think the key word there is correctly.  And there are lots of ways where, if you don't use it correctly, the good intention turns into a lot of negative impact.  So that's a good transition to the second part of this.  Part one raised the concerns about unfair treatment, and, David, you mentioned many of the protected grounds and concerns with unfair treatment in the Charter.

So in this next part, I suggest we take a deeper dive into that.  How do we look for that?  What are the actual techniques, policies and steps we can take to look for those types of unfairness or incorrect application?  There's a lot of discussion in this field about transparency and accountability.  What does it mean for the technology provider, and for the organizations that use this technology, to be transparent?  What do we have to be transparent about?  What do we have to explain?  And beyond explanation, what does it mean to be accountable?  How can organizations or their technology suppliers be accountable for these risks and potential problems?  Does anyone want to start?  Please, Wafa.

>> WAFA BEN-HASSINE:  So, I think this is a great transition point.  We can talk about either state actors or non-state actors/private actors.  But the tech companies that use AI also have a responsibility to protect human rights, and this responsibility exists independent of state obligations.  As part of fulfilling this responsibility, they need to take ongoing proactive and reactive steps to ensure they do not cause or contribute to human rights abuses, and they can do human rights impact assessments as per the UN Guiding Principles.  They also need to make sure that when they develop and deploy these kinds of machine learning systems, they follow a due diligence framework.  What does that really mean?  It means, as you mentioned, Bernard, they include explainability and intelligibility in the use of the systems, so that the impact on the affected individuals and groups can be scrutinized by independent entities and by the individuals that are impacted themselves.  It also means that the responsibilities of who is doing what in implementing the machine learning system are well established, and it's also important that actors are held to account.

So in that framework, there are three core steps that really compose the due diligence framework here.

First, they need to identify potential discriminatory outcomes of the process.  Second, private actors or non-state actors using these types of technologies can take action to prevent and mitigate discrimination, and examine how it plays into the outcomes of the various models used in the machine learning system.

Finally, this includes being transparent about the first two efforts, to identify, prevent and mitigate against discrimination in machine learning.  There are a lot of solutions, and I think it's on all of us to take this framework back to where we work and how we use the technologies, to make sure they're not negatively impacting any one particular group over another.

>> BERNARD SHEN:  You mentioned at the beginning of your comments government users of these technologies versus commercial enterprises.  Do your comments apply equally to both, or is there any distinction between the two groups of organizations?

>> WAFA BEN-HASSINE:  There is a distinction regarding the use of artificial intelligence, because states do bear the responsibility to promote, protect, respect and fulfill human rights under international law.  So not only can they not engage in or support practices that violate rights, whether in designing or implementing an artificial intelligence system, but they're also required to protect against human rights abuses carried out by other actors.

>> BERNARD SHEN:  Sana, you want to ‑‑

>> SANA KHAREGHANI:  Two things on this.  To your point, Wafa, I totally agree.  I think the onus is on both sets of parties.  The state has a duty to make sure it creates parameters by which innovation can happen well, and to allow those parameters to be used by non-state actors to play within and develop within, and allow that innovation to happen.  There are a number of measures that the government has taken along these lines.  We have introduced the Centre for Data Ethics and Innovation, which looks at reviewing and understanding AI algorithms and data technologies, what biases may exist in them, and what that double-edged side of the sword is.  The CDEI itself is not a regulator, but it does influence policy and regulation in terms of how we create a safe playground, so that you don't stifle innovation, but you do have the right principles and parameters by which innovation can happen.  I think there is an absolute responsibility on the state to have some parameters around that.

The other thing I wanted to touch on quickly is the transparency and accountability point.  I think there are a lot of benefits, and we talked about a few of these.  The benefits of machine learning really come to life when you think about the impact use cases, whether you are talking about healthcare, targeted agriculture or saving the rainforests; it ranges so widely across the different scenarios and sectors.  But to benefit from the technologies, and the actual productivity they introduce, the technology itself needs to be diffused widely, not just within the sectors and the companies themselves, but by people.  People need to be able to use them; they need to use them, and it becomes a part of the everyday.  For that to happen, you absolutely need trust.  If people don't trust the technology, if they don't trust the way their data is being used, they won't use it.  To build that trust, people need to understand how the algorithm is working.  I think that is where we start to get into this need for transparency and accountability and understandability, and all of the questions or statements that you hear around this.  All of it, for me, stems from trust.  If I want to use something, I need to understand it.  If I don't understand it, then I won't buy it.  If I am a minister or a head of state and I don't understand the way a decision-making algorithm is working, then I don't want to be accountable for it.  These are all very interrelated.  I think we need to be really clear about the tradeoffs of what we mean when we say these things.  We want to increase productivity, and yet we want to understand exactly how it is working.  We need to be very clear about what we mean and what we want, and how in the system of decision making this all comes together.

So in terms of accountability, explainability, understandability, all of those '-ity' words, I think it is important for us to understand what underpins all of them, which is trust.  How do we gain public trust and public confidence?  How do we ensure that we're not trying to drive adoption just for the sake of productivity while forsaking public confidence and trust, because we will then undo all the benefits we're trying to create.

>> BERNARD SHEN:  Thank you, Sana.  I think that's a really insightful observation and I would love to pursue it further.  How do we gain that trust, and what do we need to explain?  There are so many people in the world, and we can't expect every one of them to be data scientists.  We need to explain it in a way that is digestible.  Is that fair?  What do we need to explain, and how do we explain it?  I think it is kind of hard to answer that out of context.  Does anyone want to chime in on what we need to explain and how we explain it?  Anyone?  Layla?

>> Layla:  Let me apologize for being late.  In terms of explainability and working with machine learning systems, there are several things to take into account.

The first one is, what do we need to communicate?  The most basic thing we need to communicate is uncertainty.  If you work with a machine learning model, you know when it is confident in its prediction and you know when it is not, and you can make an educated decision based on that knowledge.  Then, when it comes to explainability and interpretability, it is very difficult, because the machine learning models that are working best right now are neural networks.  They are loosely inspired by the brain: basically neurons linked together, and these mathematical operations get adjusted based on the data and the predictions that they make.  If you look at this structure, I can show you which neurons fire together when you show the network this input, but that wouldn't tell you much about what the algorithm is really doing and thinking.
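The point about communicating uncertainty can be sketched in a few lines.  This is a hypothetical illustration, not any particular production system: a classifier reports its confidence alongside its prediction and abstains when it is unsure, deferring the decision to a human.

```python
def classify_with_confidence(score, threshold=0.75):
    """Return (label, confidence), abstaining when the model is unsure.

    `score` is the model's estimated probability of the positive class.
    When confidence falls below `threshold`, the model defers to a human
    instead of forcing a prediction.
    """
    confidence = max(score, 1.0 - score)
    if confidence < threshold:
        # "You shouldn't really trust me on this one" - defer to a human.
        return ("abstain", confidence)
    label = "approve" if score >= 0.5 else "deny"
    return (label, confidence)

print(classify_with_confidence(0.92))  # confident enough to decide
print(classify_with_confidence(0.55))  # too uncertain, defers to a human
```

The threshold value here is an assumption for illustration; in practice it would be set according to the cost of errors in the specific context.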

There is one approach that I think is really promising, which is building a relationship with the model through time.  That's what we do with human beings.  When you get to know somebody, you build a model of them: you try to predict what they're going to do and say, and that's how you build a relationship.  That's an approach we're taking with machine learning models.  If you look at the inputs you give a model, and you can probe it with certain inputs just to see what happens, then after some time you start to understand it, to build a model of it, and you have a better sense of what your model does, where it fails and where it succeeds.  So that's an approach based on relationship building between humans, and it's a step we're taking for interpretability.  There are studies showing that people feel they can trust a model more if they can predict to some extent what it is going to do for certain inputs.  I think that's the most promising approach, because there are other approaches where you try to build another model that is going to explain in words what the model of interest is doing.  But then you are building another model that adds its own noise to your prediction; it is noise on top of noise, and not a very tangible approach.  The first approach is the one of building a relationship through time between the model and the decision maker.  That requires being able to probe the model, to look at inputs and what it predicts, and also the model being able to tell you: I am not really comfortable about this input; you should make the decision; you shouldn't really trust me on this one.  That's how we're going to build a trust relationship, if models are capable of communicating in certain ways given certain inputs.  Those are the two aspects being explored in terms of interpretability and how we can make models collaborate efficiently with humans.
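The probing approach described here can be illustrated with a toy sketch.  The model and its coefficients below are invented for the example; the point is the technique: vary one input at a time, hold the rest fixed, and watch how the output moves, building a working mental model of the black box.

```python
def opaque_model(income, debt):
    # Stand-in for a black-box predictor: probability of loan default.
    # The coefficients are arbitrary illustration values.
    return max(0.0, min(1.0, 0.5 - 0.004 * income + 0.01 * debt))

def probe(model, base, field, values):
    """Vary one input at a time, holding the others fixed."""
    results = []
    for v in values:
        inputs = dict(base, **{field: v})
        results.append((v, round(model(**inputs), 3)))
    return results

base = {"income": 50, "debt": 20}
# Watching default risk fall as income rises is how a decision maker
# builds a relationship with the model over time.
print(probe(opaque_model, base, "income", [30, 50, 80]))
```

The same loop can be run over any field, or over whole groups of inputs, to map out where the model behaves sensibly and where it fails.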

>> BERNARD SHEN:  Thanks.  If I may, I have a couple of follow-up questions.  You mentioned uncertainty, and my question applies to the average user who is going to use this technology.  A lot of what you explained is for data scientists and people trying to advance the science and the art of it.  Once something is ready and you put it in actual hands, let's say it's a bank that has to decide whether to make loans to people who apply.  And let's say that when a candidate comes in and fills out the forms, the model comes out with something that says there's a 30% chance this person will default and not pay.  If I'm a loan officer and I don't really understand any of the science, how do I use that information to make my decision whether to approve the loan?  And is there any effort in data science to look at how we help the users of these models thoughtfully use the predictions, so that it is appropriate use of the recommendation as opposed to blind faith in whatever the machine says?

>> SANA KHAREGHANI:  Yeah.  The person that comes to mind is ‑‑

>> Layla:  You can train your model to decide if you should give someone a loan or not, and you can also try to explain why it makes that decision.  One concrete example is recommender systems.  When you look at Netflix, it will tell you: I am proposing this movie to you because recently you watched other superhero movies.  It tries to explain the prediction, why it is showing you those movies in particular.  That's done through learning characteristics about your data.  In the same way, you can say there is a 30% probability that this person will not pay back the loan because this person has the following characteristics.  You can build explanations by saying what the model is looking at when it looks at this individual in particular.  The technical term for this is multi-task learning.  You can train models to make predictions, but also to extract some properties of the input data you feed them, so that they can tell you what they are focusing on to make the decision.  That gives you the beginning of an explanation, for you to understand why the model gives you that number in particular and what you should do with it.
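As a simplified stand-in for this idea (a plain linear model rather than a trained multi-task network, with feature names and weights invented for illustration), a prediction can be returned together with the features that drove it, giving the loan officer the beginning of an explanation:

```python
import math

# Illustrative weights only - not from any real credit model.
WEIGHTS = {"missed_payments": 0.9, "debt_ratio": 1.2, "years_employed": -0.3}
BIAS = -1.0

def predict_with_explanation(applicant):
    """Return (default probability, features ranked by influence)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    p_default = 1.0 / (1.0 + math.exp(-score))  # logistic link
    # Rank the features driving this particular decision.
    drivers = sorted(contributions, key=lambda f: abs(contributions[f]),
                     reverse=True)
    return p_default, drivers

p, drivers = predict_with_explanation(
    {"missed_payments": 2, "debt_ratio": 0.8, "years_employed": 1})
print(f"Estimated default risk: {p:.0%}, driven mainly by: {drivers[0]}")
```

With a deep network the contributions are not a simple product of weight and input, but the interface is the same: a number plus a ranked account of what the model focused on.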

>> BERNARD SHEN:  Wafa, you have a comment?

>> WAFA BEN-HASSINE:  Yeah.  It's important to remember that these types of algorithms recalibrate themselves as they identify so many patterns.  They're usually too complex for humans to understand, or to trace the decisions or recommendations made.  However, I want to go back to a point I made earlier about keeping the human in the loop for every decision that AI makes, especially contentious ones that somebody appeals or thinks are discriminatory.

The growing use of AI in vulnerable decision categories, such as issues of criminal justice, really risks interfering with rights of personal liberty.  For example, governments are essentially handing over decision making to private vendors, and the engineers at these companies are not elected officials.  They use data analytics and design choices to code policy in ways that are often unseen by the people being impacted.  Again, I think it is important not only to keep the human in the loop, but also to do risk assessments.  This is crucial.  We need to see what possible outcomes come out of these algorithms and how they can be remedied.  And again, these machine learning systems are often very complex, so how do we translate that in a way that makes it, as you mentioned, intelligible and understandable by the people who are being impacted?  I just want to underscore that as we continue this conversation.

>> BERNARD SHEN:  Thanks, Wafa.  David or Scott.

>> DAVID REICHEL:  I would like to make two points, and I agree with what was said before.  One thing is that a lot of initiatives are going on that work towards building trust and towards how to go about these developments.  Just to name one at the European level, we have the European Commission's High-Level Expert Group on Artificial Intelligence, which is working towards ethics guidelines on how to deal with these topics.  There are also a lot of national initiatives and regulations going on, and we need to see what comes out of them.  There are also several expert committees working towards standard setting and recommendations, and I think the outcomes of these will help to build trust and show how to go about it.

The second thing is that we also have the data protection legislation and data protection framework, at least in Europe, which is quite strong.  This is also a good starting point, as mentioned, for addressing the challenges.  It was mentioned that a human rights impact assessment is really the way to go: if you plan to deploy something, look at the impact it could have not just once but throughout.  But this is quite challenging, because first of all there is a lot of balancing involved.  I would say transparency and accountability versus property rights, for instance, in detecting discrimination and opening up.  Transparency doesn't mean you can share all the data, because of data protection issues.  Then there is also data protection versus detecting discrimination: to detect discrimination you may need information you should not collect.  This is a sensitive area where the discussion needs to be had for each case separately; the specific context always matters.  In the GDPR we also have this right to explanation, and there is discussion going on about what this means.  As a last point, I want to underline that we probably need to reduce our expectations of what it means to understand something.  When I think back to what happened a long time ago, if you look into academic journals, people misinterpret results.  It is not so straightforward to interpret, and it is very much agreed that it takes time to learn how complex algorithms work.

At the same time, there are several ways to show what has an impact on the predictions of the different algorithms being used, whether neural networks or others that are easier to interpret, but all still difficult to interpret.

In addition to understanding the algorithms, there is also the input data, which is a very important point, and then looking into the outcome at the last stage.  So looking into the predictions: we should ask what it means to have a false positive, what it means in the specific context, and how to understand this.  A lot of work is being done, and we will learn how to go about these issues.

>> SCOTT CAMPBELL:  Thanks.  I will give an example of how companies have tried to grapple with this dilemma of accountability and transparency.  The outcome also circles us back to the initial discussion on how artificial intelligence can be a tool in addressing human rights violations, but also a double-edged sword.  I will use this example in part because Wafa took all of my talking points on the UN Guiding Principles and how we use them.  It is always annoying when a non-UN person does it in a more articulate fashion.  Hats off to you.

My example is the recent case involving Facebook's role in Myanmar, and the reaction to very strong criticism from the United Nations and from governmental and non-governmental actors.  The UN referred to the situation as textbook ethnic cleansing, and the UN fact-finding mission published a report on the situation in Myanmar that referred to Facebook 1,079 times in the main body.

In terms of holding Facebook to account, the company had an interesting, somewhat novel response, which was to do a human rights impact assessment.  They went to an independent body to carry out the assessment and make recommendations.  We thought it was an interesting way of doing this, in part under their obligations as a member of the Global Network Initiative, the GNI.  They were transparent: they published the human rights impact assessment, which was done over the last couple of months, and they also published their own reaction to the report, putting it in a very positive frame.  I will leave the process for you all to comment on.  What strikes me as interesting is that in their reaction to a very serious human rights problem of ethnic cleansing, mass displacement and mass rape, and the role of Facebook as a social media platform, Facebook has employed humans, but in particular more artificial intelligence, to try to detect things like hatred, discrimination or violence.  They're trying to detect and either reduce the circulation of certain types of images or speech, or eliminate them altogether, which leads us back to the initial dilemma of risk to freedom of expression: whether the algorithms being used to pull down speech and images are fine-tuned, with enough human involvement, so that they're not unduly restricting expression.

>> BERNARD SHEN:  Layla, I want to turn to you.  You shared something with me earlier.  We talked about existing human institutions and human decision making, for example deciding whether somebody gets a job when they apply for it, where there can be discrimination against younger or older candidates.  I recall from our conversations that you described a very useful AI technique to test that.  When you have a set of data on human decision making, how do you test and verify whether it is biased or unbiased?  Using that as an example, how would one do that?

>> Layla:  One good thing that has come out of the fact that our machine learning algorithms are working so well now is that they can learn to imitate human decision making.  What you can do is give a data set of human decisions to a model and train it to reproduce those decisions.  The nice thing is that afterward you can probe this model, look at its errors, look at its predictions, and see whether it is biased against certain groups.  You can see, for instance, that my model makes more errors for people of a certain age; that means there was bias in the human decisions upstream.  So one way of doing this: you train a model on human data, and then you can test this model on any data.  You can give it data for different groups and check whether it makes fair predictions for those groups, and that tells you about the data you gave it in the first place.  Look at the errors the model makes, and say it is fair if it makes the same amount of errors for all the different groups: the same errors for women and men, for different ages, for different races.  Then I know my model is fair, because it is not treating one group better than another; it makes an equal amount of errors across all the groups I'm looking at.  So that's one way of looking for bias in human-generated data.  And it is not only human decisions; bias is everywhere.  There was a study on representations.  When we want to train a neural network to deal with language, we give it a vector that represents each word: the vector is a representation of the word, something the network can understand.  Those representations are learned from massive amounts of data, from the Internet for instance.  There was a study that looked at representations learned on professional news articles, which you would think would be unbiased.
The representations were biased, and the interesting thing was how it showed.  With these representations you can do arithmetic and solve analogies: man is to king as woman is to queen.  The way we saw that this data was biased against women was by trying other analogies.  We tried "man is to computer scientist as woman is to..." and the main response was "homemaker."  Society is biased, so it was the data that produced bias in the models.  But we can test the models, see if they're biased, and correct for this.
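The analogy probe described here can be reconstructed in miniature.  The vectors below are tiny hand-made stand-ins, not real trained embeddings, but the arithmetic is the same as in the studies mentioned: compute king - man + woman, then find the nearest word by cosine similarity.

```python
import math

# Toy 3-dimensional "embeddings" invented for illustration.
emb = {
    "man":   [1.0, 0.0, 0.2],
    "woman": [0.0, 1.0, 0.2],
    "king":  [1.0, 0.0, 0.9],
    "queen": [0.0, 1.0, 0.9],
    "apple": [0.5, 0.5, 0.1],
}

def nearest(vec, exclude):
    """Return the word whose embedding is closest to `vec` by cosine similarity."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(vec, emb[w]))

# "man is to king as woman is to ?"  ->  king - man + woman
analogy = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
print(nearest(analogy, exclude={"king", "man", "woman"}))
```

In the real studies, running the same arithmetic over embeddings trained on news text is what surfaced biased completions like "homemaker": the probe is neutral, the data is not.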

>> BERNARD SHEN:  Thanks, Layla.  I want to stop for a moment to see if there's any audience participation, whether in the room or online.  I see the online speaking queue is empty.  Is there anyone in the room who wants to comment or ask a question about part one?  In the interest of time, please keep your comment to one minute and no more.

>> AUDIENCE:  Hello.  Thank you very much.  It was very interesting to hear what you were saying.  My name is Nicolas.  I work on AI applications and privacy.  You talked about different stages of development: data collection, testing and training.  Have you come up with a standard for each of these particular activities, and what is being done to manage that?

>> Layla:  As you said, problems can occur anywhere in the pipeline, from data collection to metrics to model release.  So how do we standardize the whole pipeline to make sure what we release is safe and unbiased?  There's a lot of work on that, and there is recent work from some of my colleagues called "datasheets for datasets."  The idea was that whenever you want to release a data set that other researchers can train on, you should document how it was collected and whether it was collected in an ethical manner.  Sometimes you see a data set and you think training on it will be good for everybody, but that's not clear.  You need to say which groups are represented, so that you know, and other people who want to use this data know, that it can't be used for everybody.  That's one way of standardizing things, and the research community is more and more buying into those standards.  There is work to be done on the models, and on reporting about those models, as well.  We also put a lot of models out there in an open-source manner; we need to say what they can do and be used for, and what our intention was when we built them.  It is not formalized yet, but there's a lot of movement going on towards that.  More researchers and companies are trying to adopt those emerging standards.
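The datasheet idea can be sketched as a structured record that ships with a data set.  The field names below are a simplified assumption for illustration, not the official "datasheets for datasets" template; the point is that release can be gated on the documentation being complete.

```python
# A hypothetical datasheet for a hypothetical data set.
DATASHEET = {
    "name": "loan-applications-sample",
    "collection_method": "opt-in survey, 2017",
    "consent_obtained": True,
    "groups_represented": ["age 25-40", "urban residents"],
    "known_gaps": ["rural applicants under-represented"],
    "intended_uses": ["research on credit-scoring fairness"],
}

REQUIRED = {"collection_method", "consent_obtained",
            "groups_represented", "intended_uses"}

def validate(sheet):
    """Refuse to release a data set whose datasheet is incomplete."""
    missing = REQUIRED - sheet.keys()
    return (len(missing) == 0, sorted(missing))

ok, missing = validate(DATASHEET)
print("release allowed" if ok else f"missing fields: {missing}")
```

A downstream user reading `groups_represented` and `known_gaps` can judge whether the data fits their population before training on it, which is exactly the failure mode the datasheet is meant to prevent.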

>> BERNARD SHEN:  The gentleman in the corner first, and next will be that lady.

>> AUDIENCE:  Let's hope this makes sense.  You mentioned the Guiding Principles.  I work at the Danish Institute for Human Rights, in our human rights and business department.  We do impact assessments, but what we're seeing when engaging with companies whose core business is already, in a sense, doing good, is that they seem to forget some of the human rights implications of the work they do.  So I'm wondering, do you see these hurdles as well in your own work, and how do you think we can help these companies engage with human rights, and not just ethics?  Ethics is much more abstract than human rights, which has processes and interpretation.

>> BERNARD SHEN:  Thanks for making the point on the value added of using the human rights framework in addition to an ethical or values-based one.  We fully agree, and it is one of the points we make: human rights law and the human rights framework give you a good tool to help you do quite a bit of work in terms of managing your risk and your exposure as a business.

I think there are a number of companies, perhaps most companies, that argue they are generally doing good, either for their shareholders or for the world at large, so your point, or your question, is very well taken.  One of the methodologies we have found useful in addressing this is peer-to-peer learning.  I wasn't expecting to give a plug, but we have been doing that with respect to the Guiding Principles on Business and Human Rights, which can be discussed with a very strong injection of human rights principles.  That is one very concrete way; there are certainly many others.  Wafa, do you want to jump in?

>> WAFA BEN-HASSINE:  I think that's great.  I hadn't heard about the peer-to-peer program before.  It's a great way to share knowledge and ways to be more compliant with human rights.  I also think that states should put in place regulation that, I don't want to say forces, but makes the process easier in a way that helps human rights.  And I wanted to say that sometimes technical standards are important as a complement to human rights regulation.  When I say human rights regulation, I mean regulation that deals with non-discrimination and other types of outcomes at the national level, and that expands upon and reinforces human rights.  I don't know if anyone else wants to add anything?

>> AUDIENCE:  How can programs in the criminal justice system, such as the COMPAS program, take into consideration that the criminal justice system itself is racist?  65% of incarcerated people are black, and AI reproduces this racism in the model.  So how can they mitigate bias that is already there?

>> Layla:  I would say that's an unsolved problem right now.  There's been work on it, but there's one thing we must be aware of: when there's bias in the data, a machine learning model can not only reflect it, but amplify it.  At the very least, we don't want to amplify the bias in the data; if a pattern is at 60% in the data, we shouldn't push it further.  In terms of training models on biased data, my point earlier is that when you train a model, you can at least test it, and you can see that it is biased.  You can see that it gives you a higher error rate for one group than another, and you can say there's a problem at the beginning, in this data, in the decision making.  I attended a tutorial by a group doing ethical language processing.  I can't remember which state it was, but the police gave them recordings from when they were stopping cars, and what they found by training models was that the police were taking more time, on average, to tell the driver the reason they were stopped when the driver was African American.  So they identified the bias in the decision making thanks to a model, and the thing they could do was work with the police to tell them: this is what is happening; you need to be aware of it and rectify your own behavior.  So that's one way to use models to correct the data-generating process itself.  Then the data becomes less biased, and we can use it for training models.  You should rectify it at the source.
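The bias test described in this exchange, comparing a model's error rates across groups, can be sketched minimally.  The records below are synthetic; in practice the `actual` labels would be past human decisions the model was trained to imitate.

```python
def error_rate(records):
    """Fraction of records where the model disagreed with the label."""
    errors = sum(1 for r in records if r["predicted"] != r["actual"])
    return errors / len(records)

def audit_by_group(records, group_key):
    """Split records by group and compare error rates across the groups."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {g: round(error_rate(rs), 2) for g, rs in groups.items()}

records = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 0},
    {"group": "B", "predicted": 0, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 1},
    {"group": "B", "predicted": 0, "actual": 0},
]
rates = audit_by_group(records, "group")
print(rates)  # a gap between groups flags bias inherited from the data
```

A large gap between the per-group rates does not say where the bias came from, but, as in the traffic-stop example, it gives auditors a concrete finding to take back to the people generating the data.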

>> WAFA BEN-HASSINE:  A closing comment.  The difference between regular decision making by humans and decision making by machines is that machines act at scale: a machine can impact thousands of lives.  In the criminal justice system, as you mentioned, that means people will be falsely identified, incarcerated and detained.  So we go back to the question of effective remedy and appeal mechanisms for individuals who have had their rights violated and their personal freedoms infringed upon.  It's a component of the whole framework that we cannot forget or set aside.

>> BERNARD SHEN:  So in the remaining five minutes, and thank you for the questions, I want to ask each of the panelists one question.  The session organizers and the IGF have impressed upon us that they want the session to be outcome-oriented.  So my question to each of the panelists is this: if you were to name the top idea for continuing collaboration across different stakeholders to work on any particular issue, what is your suggested next step and follow-up?  What kind of collaboration, who needs to work with whom, and what should we work on?  Anyone want to jump in first?

>> DAVID REICHEL:  Standard setting and policy initiatives are on their way.  There are many initiatives ongoing, and as a follow-up I think it's important to continue working together on those topics, to sit down together and use the existing structures.  What is challenging is that the topic is so broad, and we need interdisciplinarity to understand it and move forward.  I have a statistics background, and when I talked with computer scientists, it took me a while to realize we were talking about the same things.  Even at this level, we should also not forget to involve lawyers and the judiciary, because it is at the level of the law, and the courts will need to make important decisions.  It is really important for both sides, the technological side and the legal side, to have a good understanding of what is going on.  There is not one rule or percentage that can work or cannot work, so it needs to be decided in each context separately.  So interdisciplinarity and the ongoing initiatives are important going forward.

>> SANA KHAREGHANI:  Great.  I would add that in the UK, we have created the AI Council, which is an executive membership of experts from the community, from government, the private sector and academia, who help advise government on our priorities, on how to take things forward and on what we should look at.

As part of this, we have expert advisory groups, and they look at different things: ethics, international issues, human rights, data, sharing models, skills, et cetera.  As a concrete point, it would be great to have involvement in that from civil society, from the UN, and from others, to make sure we carry these conversations forward and keep them prevalent, front and center, in the minds of state actors and in the decisions we make and take, but also so that we can influence back in the other direction, to other countries as well.

>> SCOTT CAMPBELL:  On the question of collaboration going forward, the UN has launched a High-level Panel on Digital Cooperation.  They're looking at exactly this question in a broader way: not just collaboration or cooperation on AI, but around the planet on new technologies.  They're encouraging feedback, in a transparent way, on their website, and they will look for feedback on their recommendations.  So I think there's a process that may be of interest to contribute to.  On the how, I think having a multi-stakeholder approach is essential; I won't repeat what was said, I will fully endorse it.  But I also think it is crucial going forward to bring in the parts of the world and the sectors of society that are at greatest risk of becoming further marginalized from the potential gains of artificial intelligence, which often reflects a general lack of access to technology and the Internet.  It will be crucial to find those sectors, the most marginalized, including those who don't have access to the Internet, which is a major challenge, and to engage those stakeholders in the conversations going forward.  Thanks.

>> Layla:  I'm going to agree with everything that was said on this.  I think it is very important to have more gatherings where you have the technical community, the research community, policy makers, law makers, et cetera.  I have had the opportunity to go both to events that are fully technical and to events mostly about policy making, and I'm always a bit sad that there is not more cross-pollination.  I know Canada has been doing this, and the government of France has been consulting with AI leaders, companies and researchers to build their AI strategy; they have Nobel laureates, mathematicians and Fields Medalists.  They decided it by working with others from the sector, and I think we need more of this.  There's a lot of work on ethical AI, accountability and transparency; there are lots of technical solutions being proposed for these issues, and policy makers need to be aware of them and take them into account as much as possible, or guide our technical design towards what is necessary for their needs.  So that's what I would call for: mainly cross-domain gatherings where people from the different ecosystems can meet, and maybe, out of those, some of the standards that we need for unbiased AI.

>> WAFA BEN-HASSINE:  There are civil society organizations that advocate on these issues, especially when it comes to more sensitive topics such as criminal justice.  There is also a role for professional organizations to look at where AI performs well and is well suited, and to address those biases as well.  And it is upon industry to look into these questions too.

>> BERNARD SHEN:  Thank you, Wafa.  That effort was started primarily by data scientists, but it is turning into a multi-stakeholder effort, with lots of different organizations participating, to look at what good and responsible practice is.  There is no lack of opportunities and organizations that one can join and contribute to in making good progress, to realize the benefits of AI and mitigate the risks.

I thank you all for coming.  I want us to take a moment to thank all our panelists for their contributions and comments.  Thank you very much.
