IGF 2019 – Day 3 – Raum I – WS #175 Beyond Ethics Councils: How to really do AI governance

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> CATH CORINNE: This seems to be working. Good morning, everyone. Thanks so much for coming to this Round Table on how to really do AI governance beyond voluntary ethics frameworks. I want to extend a special welcome to the three panelists for the Round Table: Levesque Maroussia, Bernard Shen and Vidushi Marda. My name is Cath Corinne. The session is going to be structured as follows: We are going to start off with a short primer, contextualizing the contention around AI ethics and having a look at the debate on how to govern and regulate AI systems. After that, each speaker will take about ten minutes to give their views. This is followed by a short panel discussion between the panelists, and then we will open up for Q&A.

To give a bit of a sense of the current discussion, a recent quote: the relevant discussions of ethics are based on almost entirely open-ended notions that are not necessarily grounded in legal or even philosophical arguments and that can be shaped to suit the needs of industry. These are choice words coming from the most recent report of the UN Special Rapporteur on extreme poverty, who questions the efficacy of ethics as a framework for AI governance. He is obviously not alone in voicing these kinds of criticisms about what is often dubbed the turn to ethics in the debate about which normative and legal frameworks are best for AI governance.

AI ethics is undergoing its own tech-lash moment, and some of the critics question whether these frameworks and councils ever lead to real accountability or are even a sufficient preamble to regulation. They also argue that ethical frameworks and councils can be vague, easy to co-opt, and fail to foster actual corporate accountability. At the same time there are many others who argue that we should not throw out the baby with the bath water, and that when done right these kinds of frameworks do provide a very solid base.

Now it is clear that, if anything, this is a very contentious discussion, and that's why we are all here today. The discussion is particularly timely given the increased use of automated systems in society-critical spheres like health care, leasing and the judiciary. We are going to try to focus on three things. The first is that we will discuss the recent surge in ethical frameworks and self-regulatory councils for AI governance and talk about some of their promises and pitfalls; we are also going to discuss some other strategies and other frameworks. What I will do now is ask each speaker to introduce themselves.

>> BERNARD SHEN: Good morning, everyone. Thank you; I'm glad to have this conversation with all of you here at IGF. My name is Bernard Shen. I'm Assistant General Counsel at Microsoft. Turning to the subject at hand, when we consider how we should govern AI, one question that occurs to me is: do we have to choose between ethical frameworks on the one hand and Human Rights law on the other? Are ethics inherently voluntary? Are they optional? For example, unfair discrimination is unethical but also often against the law. Are Human Rights only respected and protected by laws? Antidiscrimination laws prohibit unfair discrimination, but are those laws alone enough to help advance and protect the rights of a vulnerable population?

Perhaps one way to think about this is that ethics refer to our conduct, what we do and how we do it. Human Rights refer to the consequences of that conduct, the consequences to people and their rights. They are as connected as two sides of the same coin. I ask: can conduct that harms Human Rights be ethical conduct, or does ethical conduct inherently mean conduct that respects and protects Human Rights? Let's consider a simple example: someone driving down the street at the maximum speed limit allowed. The driver is obeying the law. But then the driver sees children playing up ahead on the street. She immediately slows down. Is that a matter of law, or is that also an act of ethical, responsible self-governance?

AI is a tool, and as a tool, whether it does harm or good depends less on the tool itself and more on the human hand that wields it. And as a tool, AI can be, and is being, used in almost anything and everything that we do. Why is that? That's because today's AI, modern AI, almost always involves machine learning: using math to look at an incredible amount of data, too much data for the human mind to comprehend, to study it and to deduce the hidden patterns and insights in that large quantity of data. Computer science and math can help us see those patterns and, with varying degrees of mathematical confidence, generate predictions, mathematical predictions. And we take those learnings and predictions that the machine and the math provide to help us humans do tasks that we as humans define, or make decisions on questions that we as humans pose. For example, in farming, how can we be more efficient and less wasteful? In medicine, how do we find cures more quickly or make more accurate diagnoses? For the environment, how do we better preserve the natural resources that we have?

So it really can be used in any field where there is an increasing amount of data, to help us gain insights and make better decisions. But the challenge is that because there are so many possible uses, each type of use presents a different context. And the ways to use AI in each context in a way that's ethical, responsible, and rights respecting may be very different. Because it is so contextual, we need to think about these issues with context in mind. I'm going to use one context to walk through it a little bit, and that's the use of facial recognition by Government authorities.

Imagine five scenarios. Scenario No. 1: you are participating in a protest march, and the Government is using cameras and facial recognition to identify you during the march and to track you wherever you go afterwards. How do you feel about that?

Scenario 2: if you have a driver's license, you have gone down to the driver's license office and had your photo taken, and they have it on file. Now let's say a crime is committed by someone. Video cameras capture the image of the suspect, but they don't know who it is. Should law enforcement use facial recognition technology to search for a match, including against your photo?

Scenario 3: a Government has lots of sensitive Government buildings and sensitive data. Should it require all the employees who use Smartphones and computers to protect those devices with facial recognition, not just a password, so that you have to use facial recognition to unlock your devices? Or when you enter a building as a Government employee, should they say, we have all your photos on file, you can't enter the front door until you have your picture taken and matched to the photo on file, and if it is not a match, you don't get in?

Or passport control: in the good old days, when you entered a country you talked to a person and handed over your passport, and they looked at you and looked at your passport photo to see if it was really you before you could enter. Now, more and more, in many countries you don't get to talk to a person. You stand in front of a terminal, your passport is scanned, your face is scanned, and there is a match. You probably don't talk to a person during that entire process.

Scenario 4, and this is a bit heavy: let's say you have a family member who went missing and there is reasonable suspicion of a kidnapping, maybe human trafficking involved. You gave photos of your loved one to the police and they have them. They are trying to help find this person, save this person. Should they be able to use video cameras in all public places, airports, train stations, capture images of people going in and out, and compare them to the photo of your friend or family member to help find and save him or her?

Last scenario, scenario 5: you are a music lover going to a concert in a big stadium, but police authorities have reliable intelligence that a terrorist group is trying to target this event with a bombing. They have photos of members of this terrorist cell. Should they be able to use cameras at the stadium entryways, look at everyone who comes in, and compare everyone to the photos of the members of the terrorist cell to try to stop them from entering the stadium?

So just a quick note: if you think about these scenarios from a technical standpoint, from a technology standpoint, really two things are going on here, two different types of facial recognition. One is verification. One is identification. Verification is a one-to-one comparison. We already have the photo of a known person. Someone presents himself or herself as that person, and you want to make sure it is really him, really her, before they may unlock their device, enter a building, or enter a country.

The second technological use is identification. You have a captured image but you don't know who that is, and you are trying to identify that person. Somewhere you have a database of photos of known persons, and you are trying to compare that unknown image to all the known images to see if you can find a match, so that you can identify that person.
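To make the distinction concrete, here is a minimal, hypothetical sketch in Python of the two modes described above, one-to-one verification versus one-to-many identification. The embedding function, the similarity threshold and the gallery structure are illustrative assumptions for the sake of the example, not any vendor's actual system.

```python
# Hypothetical sketch of the two facial-recognition modes described above:
# one-to-one verification versus one-to-many identification. The embedding
# function, similarity threshold and gallery are illustrative placeholders.
from typing import Dict, Optional
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model: maps an image to a unit vector."""
    vec = face_image.astype(float).flatten()
    return vec / (np.linalg.norm(vec) + 1e-9)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings (both are unit vectors)."""
    return float(np.dot(a, b))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """Verification: one-to-one check that the probe image matches one known person."""
    return similarity(embed(probe), embed(enrolled)) >= threshold

def identify(probe: np.ndarray, gallery: Dict[str, np.ndarray],
             threshold: float = 0.8) -> Optional[str]:
    """Identification: one-to-many search of an unknown face against a database."""
    probe_vec = embed(probe)
    best_name, best_score = None, threshold
    for name, enrolled in gallery.items():
        score = similarity(probe_vec, embed(enrolled))
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None means no match above the threshold
```

The design point the sketch illustrates is that verification compares a probe against a single enrolled record, while identification searches a whole gallery, which is why the two modes raise different governance questions.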

But regardless of the technological difference in what the comparison is, the question comes back to, in each of these and many other scenarios, how should Governments use this technology? Should we rely on self-governance by the tech companies that provide this technology to Governments to help prevent Governments from misusing it? Tech companies have a role because we understand the technology, we know how it works. We can help the Government understand what it can or cannot do. We can help steer them away from inappropriate use. In fact, Microsoft has gone on record, without identifying the police authority, acknowledging that we have turned down opportunities because we felt that the proposed use was not appropriate given the state of the technology and the circumstances involved.

So yes, tech companies certainly have a role, but the problem is that even if some companies try to act responsibly, if you have some companies that do not, then you still have a problem, because they would still be ready and willing to provide the technology to Governments for uses that we, the public, do not find acceptable. So we also need Government to regulate this technology with thoughtful regulation on all of these use cases. But, as I mentioned, the tech companies also have a role: we also need to engage in self-governance, to have policies and guidelines. And the two kind of work hand in hand, because laws cannot be enacted quickly and they get outdated quickly, because the technology develops very quickly. If we develop a law that covers today's technology and today's scenarios, the technology moves too fast and the law would fall behind. So it is also important that both Governments and the tech companies have policies and guidelines to think about new and evolving scenarios and how to address them, and longer term more new laws may be needed.

So let me close with something that Microsoft's CEO wrote in an article in June of 2016. He talked about the partnership between humans and AI. He said the most productive debate isn't whether AI is good or evil, but rather, and I quote, "it is about the values instilled in the people and institutions creating this technology".

So when societies enact laws, or when we have international laws, those laws often reflect the values of us human beings. But values are more than laws. They also inspire and guide us to self-govern, to engage in responsible, ethical conduct, conduct that respects and protects Human Rights. So what we need is all of us in this room and beyond, all of society, to be involved in these conversations: conversations to figure out thoughtful laws to regulate the use of AI, and self-governance policies and guidelines for responsible conduct by Governments, by the tech companies that develop the technology, and by all the institutions, whether government, private, non-profit or any other institutions, that implement and use the technology in so many different contexts. We need everyone at the table to have these conversations because ultimately these are conversations about our values, about how we connect the ways we use the technology and tools that we have to our values. I look forward to having these conversations with you today. Thank you.

>> VIDUSHI MARDA: Thanks. For my initial intervention I would like to throw out, well, first I have to introduce myself. Hi. I'm Vidushi Marda. I work at Article 19, where I work with technical communities and also in policy discussions and try to bridge the gap between the language and assumptions that underpin both of these stakeholder groups. So for my initial intervention I would like to throw out four provocations to the group, to add context to what Bernard said and also for us to have a more critical understanding of the space in general.

So the first is that machine learning is not always appropriate for social purposes. For instance, there is a lot of talk about how data is really effective, how machine learning can look at large amounts of data that no human can. But maybe we shouldn't be using machine learning in many, many instances, where the system oversimplifies socio-technical problems and tries to reduce them to mathematical formulas. I think ethical frameworks at the moment don't fully engage with this complication, which is what we are finding: you can say we have transparency and equality and we respect privacy, but at the same time you can be undermining a lot of social goals and making discrimination and social problems worse. That's the first provocation. The second is that I think there is a false dichotomy between ethical frameworks and regulation, because one is not necessarily a replacement for the other, and neither is it, I think, constructive to think about ethics as a preamble to regulation. Ethics affords us an idea of where we want to be, of what our conduct should look like. It has no bearing whatsoever on what happens when we don't behave the way we should. The part that addresses what happens when we don't is regulation. And it doesn't make sense to have ethical frameworks in the absence of regulation, because there is no incentive to effectively follow these ethical frameworks. Ethical frameworks don't have teeth, which means there is no consequence to not following them.

If we want ethical frameworks to be effective, then having regulation is a prerequisite. It is not an either/or situation. It is not a before or after. They must exist in tandem, if ethical frameworks are to exist at all.

The third provocation is that ethics affords an exceptionalism to machine learning. What I mean by that is that ethical frameworks assume machine learning, or Artificial Intelligence more broadly, should or shouldn't do something, but we are not going back to first principles. A lot of the answers can already be found in constitutional law or consumer protection or data protection, but because there is this new "really complicated technology" we go back to the drawing board without engaging with existing regulations that are already in place. And the problem with ethical frameworks is also that they are built mostly in opaque, closed rooms by the people who design and develop these systems, but not necessarily the people who are subject to their deployment.

So what happens is that you are subject to a system and not fully sure how you can appeal it. The only verifiable public statement that you have is an ethical framework which you can't review and can't fully understand. There is no one meaning of privacy, no one meaning of accountability. And the last provocation is that I think only having ethical frameworks is more harmful than not having them at all, because they also offer a shield of objectivity when there is none. So a company, it can be any company, can say, you know, we have an ethical framework where we believe in transparency and accountability and privacy and we respect, you know, nondiscrimination, for instance. And it almost gives the company the right to move fast and break things and see how systems function without engaging with the actual social cost of these systems, because there is an ethical commitment in place.

In the absence of this ethical commitment we would have regulation and actual verifiable accountability mechanisms that any system should satisfy. And I think ethical frameworks buy time, which is extremely harmful. I think it is important to remember that Human Rights are both an ethical and a legal framework. So the false dichotomy is particularly interesting because it discounts the ethical, normative importance of Human Rights frameworks and rights-based frameworks in general. And it would be more helpful to think about it not in terms of whether ethical frameworks are enough, but whether they invoke the right kinds of regulation, existing rights and first principles that we already have. I will stop there. And I look forward to the conversation. Thank you.

>> LEVESQUE MAROUSSIA: Thank you. Hi. I'm Levesque Maroussia and I am part of the Data Justice Lab. If you are Tweeting, please use the Data Justice Lab hashtag, because university staff are currently on strike over austerity measures in UK higher education. I do research into how police in Western Europe are using data and technology and what the wider social questions around that are. Before this I spent maybe ten years working as a practitioner on issues around data privacy, digital security and data protection. I think I will piggyback on a lot of things that Vidushi said and ground them in the context of Europe and public institutions in Europe, because I think there are so many things we should uncover.

Having spent a lot of time in the tech scene talking to technologists, there is this presumption that AI is already here, and people use it as a blanket term for everything from basic statistical modelling to machine learning, and this is creating massive problems. On the one hand it is not here yet, but it also creates this idea that we can't do anything about it, that it is sort of already happening and we just have to roll over and die, and maybe we can mitigate harmful effects by creating ethical frameworks. I feel this is a false narrative that's being created. The other is the way people talk about public institutions, as if they are stupid and there is no knowledge inside of them, and as if they have no way of regulating this. I feel this overlooks the fact that we have a lot of laws in place that also apply to AI, and this idea that law is slower than AI is, I feel, a fallacy that is coming from the people who are creating these systems. So I'm trying to unpack these things with a few examples. One of the issues is that we look at AI and then look to society, whereas we could also look at society first and then try to figure out solutions, which are probably not AI.

So I'll go through some examples that I am seeing on the ground in practice. I think one really interesting case has been DeepMind in the UK and the access they had to the health care information of NHS patients. They had access to a few million records. And according to the actual regulation that governs access by companies to this data, the NHS made the right decision, because this data is given out for innovation and R&D projects all the time, whether it is Philips developing a robot arm to assist in operating rooms or someone trying to figure out a treatment. Maybe there is nothing wrong with a regulation that says companies should get access to pseudonymized information of patients to develop new tools and technologies that help us solve a problem, and they were within their rights. But the problem is that they didn't look at Google, or Alphabet, the owner of DeepMind, whose business model is to actually analyze that data and sell it for commercial purposes.

So I think we have to open up these frames more, to take into account the context and business models of the companies who are getting access to this data to train algorithms and figure out solutions. Once it became public and the Guardian started reporting on the fact that the NHS gave data to Alphabet, there was a massive backlash. DeepMind pulled out because of the controversy.

Do we want big tech companies to have access to our private information? So this is a point where there is regulation; we just have to revisit it. Then in my case studies on policing, where I talked to police about facial recognition among other things, the interesting thing is that what some of them do themselves is invert the process. When you look at risk assessment, or all the examples that were given about whether facial recognition should be applied in this or that context, these are relatively easy ethical questions, because nobody is standing up on behalf of the people who get targeted.

One of the police officers I was talking to asked: what if we apply this differently? What if we use it to identify perpetrators and victims of sexual misconduct; how would we feel about police intervening in people's lives? Can we preemptively go into a potential victim's house and say there is a high likelihood that you will be harassed or raped? Inverting it to another problem suddenly makes these questions far more pronounced. And they also said, we don't know if we actually should be the actors doing this. If we talk about the same approach with high impact crime, so burglary or robbery, there is a sense where people say yes, we can do it; but inverting it to a different problem all of a sudden shows the issues that also apply in the case of terrorism and high impact crime.

So in these debates I think we sometimes have to challenge it by inverting it. I think we also have to unpack where this entire ethical debate is coming from. If I look, for instance, at ethics discussions in Europe, a lot of it is funded and supported by the companies who are creating AI. I'm not saying that the things that are coming out of it are influenced, I don't say the content is influenced by these companies, but by putting money behind it they are setting the agenda that we have to look at ethics instead of regulation, and we should be critical about this. Is it bad that they are spending money on this? Maybe not, but then why are Governments not spending money and figuring out whether the regulatory framework should be different?

So when we look at all of these talks about ethics we have to unpack them. I see it in policing as well: there is a lot of money made available after an incident happens in society. And when we ask the police what they think about this money being made available, for instance to implement facial recognition or something else, they say, oh, we just have to do something to show the public that we care, but we actually don't know if it is going to work.

So I think we have to unpack what the drivers are behind applying this technology to social problems. Who are the creators, and what are the values of the people who are creating AI? If you look at all the tech companies, it is quite a homogeneous crowd who are creating this. Are their values the values that are shared across the world, the values that we all hold each other to?

So we also have to be critical about this. We normally don't take context into account: things have come from one place and we assume they can apply everywhere else. Ethical guidelines of the EU would not apply in other contexts. It is not one size fits all. And I think we should also start having discussions about what the red lines are, the places where we don't want AI to be implemented. Maybe we don't want it implemented in fraud detection in welfare schemes, for instance. There are certain areas where we don't want the technology to be implemented if we can't be sure what the drivers are for doing it. In the end, what we are seeing is that a lot of these AI systems are penalizing the poor, the marginalized and other groups. So I'll leave it at that.

>> CATH CORINNE: Thank you, all three, for these excellent provocations. There seem to be quite a number of topics that keep coming up: the limits but also the possibilities that these frameworks provide, and the question of context.

So I think one of the first questions that comes to mind is that there are currently many of these ethical frameworks, 70 at my last count. Some of their principles contradict each other. Some of their principles overlap. Considering this mushrooming of ethical frameworks, and the importance all three of you stressed of context, how do you make sense of these principles and their contradictions from your respective sectors?

>> VIDUSHI MARDA: I think the current ethical frameworks, and we touched on this a little bit, have certain deficiencies, but I want to pick up on something that Levesque said about their being built by only certain sections of society and being particularly dangerous for vulnerable communities, whether it relates to gender or race. I think ethical frameworks are kind of a microcosm of the AI field itself. On the one hand we are in a room saying these systems are discriminatory, that they work if you are a white man and will never work as well if you have darker skin or are a woman. But the same is true for ethical frameworks: they work in the contexts in which they are built, and they are harmful in contexts that haven't been considered in the room. And no matter how many ethical frameworks we have seen, I have yet to see one that meaningfully engages with differences in context and the harm they can create.

To give you an example, a lot of credit scoring algorithms around the world look at things like how many times you leave your house and whether you go to the same place every day. That wouldn't work in a country like India, because a lot of people, a lot of women in many classes of India, are not allowed to work. They don't get to leave home. And that fundamental proxy is inconsistent with the context in which it functions. But that is not engaged with at the level of ethical frameworks. So a system could be systematically discriminatory against a vulnerable section of a particular society, but ethical frameworks do not engage with the complexity that comes with that. So while there may be 70 at this moment, I don't feel the need to make sense of all of them, to be honest, because I think they say the same thing in different ways and in different permutations and combinations, but they don't actually meaningfully change how the systems are designed or developed or even deployed.

>> LEVESQUE MAROUSSIA: So on the 70 ethical frameworks, what I'm seeing when you talk to the people who have to implement parts of them, or think through them, is that one thing they obscure is the question of whether we should actually implement something to begin with.

So with the question of facial recognition, if you see how police are experimenting with it in the UK, for me all of these things that were suggested, like can you identify whether the right person is walking into the building and can you see whether a terrorist is walking into a football stadium, most of these things can be done by other police work. For me there is this drive for innovation; first we have to see what the technology is actually substituting for. Because we are having ethical discussions about how to maneuver it in the best way we see fit, we never ask the question why to begin with.

The other thing is what we have also seen with Human Rights impact assessments and privacy impact assessments. When it comes down to the ground, where people are implementing them, I have seen some of these privacy impact assessments where people just Googled how to do them, because nobody is getting training on how to do a proper impact assessment. So then it becomes: how do we implement it, how do we get it through the bureaucracy and make it compliant?

So one thing is having these ethical frameworks, and another is how we implement them at the lower levels of Government and public institutions. And then what is quite difficult about these ethical frameworks, and what I have seen for a long time in the tech scene, is that there is no responsibility; there is quite a lot of impunity when things go wrong. We have seen it so many times, also when Google launched their facial recognition algorithm and it identified African people as gorillas and they went, oops, sorry. When you then apply this in a context of policing, where does the responsibility lie when things go wrong? And I think these are the discussions that are not being had, because they can say, oh, we went through all the right processes and procedures. It is also a question about responsibility.

>> BERNARD SHEN: Let me make two overall points. The first is: do we need these ethical frameworks? I think we do, because, as I think was reflected in my opening comments, I don't think the law alone is enough. You are right, there are a lot of ethical frameworks, a lot of AI principles, and one could spend an infinite amount of time sorting through all of that, which is probably not the best use of time. But you do need some sort of ethical reference, because law alone is not enough. Why is law alone not enough? A couple of examples. I don't know how many of you have heard of this law about motor vehicles in the UK, back in 1865 I believe, the Locomotive Act, which said that as a motor vehicle moves down the street, you have to have a person walking in front of it waving a red flag to warn everybody else that the motor vehicle is approaching. It was a safety measure. I think that law lasted thirty-some years before it was repealed. It wouldn't make any sense today. And if you look at cars today, do they only strictly adhere to the law?

They have the safety features that the law requires, but many cars also have safety features that go beyond the law. Why do they do that? Maybe because the manufacturers understand that people desire safety and the minimum requirement of the law maybe doesn't go far enough. So in order to earn the trust of customers and meet the need for safety, they provide features that go beyond what's required by law. And thinking about Microsoft, just as humans we don't conduct ourselves only to the minimum of what's required by law. We have ethics and morals. And the people, the company colleagues that I work with, come to work every day and they don't park their values at the door. They bring them with them.

So they, as humans, care about doing the right thing, not just the minimum required by law. And, as I said, it is also a matter of trust. Nobody is going to use our technology if they don't trust us. People don't want to use things they don't trust. To earn that trust it is not enough to do what is required by law. We have to figure out what it is that we are doing, in what scenario and context, what's reasonable and responsible, and try to meet that expectation. Otherwise we don't earn that trust and keep that trust.

About data, I want to make a comment about data and discrimination, because that's an important topic to talk about. Modern AI often involves machine learning, and like any science, any technology, it is not perfect, but it does make important contributions. As to whether AI is here, I think that's a reasonable debate, but from where I'm sitting, from what we are seeing, AI is already here, being used in many fields; machine learning is being used in many fields. It is absolutely correct that there is a risk of discrimination. And with machine learning, it is critical what data we use. There was a study where they found, after crunching the data, that people with asthma are actually less likely to die of pneumonia. And the reason, they found out after consulting with medical experts, is that if you are an asthma sufferer you are probably going to get much more immediate medical intervention and be checked into a hospital when you are sick.

So the chance that you actually die from pneumonia is much lower. And then the question is: should they take out that data point about people having asthma when deciding the risk of dying from pneumonia? But when you remove a data point, all the other data fields are already affected by the fact that those people have asthma; the data is already kind of polluted and the effect is just hidden. It is better to include more data so that you can account for that abnormality. So it is really critical that you have good, thorough data. If there are blind spots that you don't see, the predictions that machine learning gives you are bad, not accurate, possibly less accurate than flipping a coin, if you omit a lot of important externalities.

But then I also hear the concerns about privacy. That's the conundrum: in order for machine learning to be high quality and produce highly accurate predictions, you need different types of data, and a lot of it. If a bank trying to make loan decisions only includes data from past applicants, the people it granted loans to, and they are all, you know, Caucasian, male, et cetera, then that prediction model is going to skew towards favoring people who are also Caucasian, and people of other races probably won't get as good a chance of being granted a loan. It is important to test your data to see if it is representative, so that the machine learning is fair and reasonable, and to account for and address those biases. But if you want more data, you have to address privacy concerns, because people are understandably concerned when you use a lot of data.
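As a concrete illustration of the kind of testing described here, the following is a minimal sketch, assuming a simple tabular dataset with hypothetical "group" and "approved" columns; the 0.8 ratio used to flag skew is an illustrative threshold for the example, not a standard any regulator or company is said to apply.

```python
# Minimal sketch of checking whether historical loan data is representative
# and whether past approval rates are skewed across groups. The column names
# and the 0.8 flagging ratio are illustrative assumptions only.
import pandas as pd

def group_shares(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of each demographic group in the training data."""
    return df[group_col].value_counts(normalize=True)

def relative_approval_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Approval rate per group, relative to the most-approved group."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates / rates.max()

if __name__ == "__main__":
    # Toy historical data: a model trained on it would tend to reproduce this skew.
    past_loans = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B"],
        "approved": [1,   1,   1,   0,   0,   1],
    })
    print(group_shares(past_loans, "group"))                   # is any group underrepresented?
    ratios = relative_approval_rates(past_loans, "group", "approved")
    print(ratios)
    print("possible skew:", list(ratios[ratios < 0.8].index))  # groups well below the top rate
```

The point of the sketch is simply that representativeness and outcome skew can be measured before a model is trained, which is the kind of testing the speaker argues for.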

So we need to address people's concerns about data protection and privacy, and again not only comply with laws such as GDPR but also think about what truly responsible practices are, so that people can trust that their data is being used in a responsible way, for their benefit, without violating data protection laws or invading their privacy.

>> VIDUSHI MARDA: Just to pick up on some of the things that were said: the example of the flag in front of a car, is that the Locomotive Act? It is fundamentally different in the case of machine learning. In the case of a car you see the car, and if a car hits you, you definitely know it. The problem with Artificial Intelligence is that it is more often than not intangible. You don't know you are being subjected to a system that has reduced you to a data point, and you don't know who to appeal to. Even in the case of a state-deployed Artificial Intelligence system, if you appeal to the state it is often said, well, the system said that, we didn't say it, and we don't know why the system said it. You can come and appeal every time you are denied a service, but we can't tell you why you were hit by that metaphorical car. It is different in the case of systems that you cannot peer into, that you cannot see and that you cannot control, and that are subjectively and selectively made and built by certain stakeholders only.

The second thing about machine learning is that I think it is a great tool if you want the future to look like the past. It is a fantastic tool if you want to replicate the past into the future and be efficient and quick while doing so. It is not an appropriate tool when societal complexities are in play. Because I think, regardless of where you stand politically or socially or whatever discipline you come from, it is safe to say that we don't necessarily want to repeat the social discrimination mistakes of the past. And the problem with machine learning is that it obscures these very complicated, systemic human problems into a simple data point, which then becomes the reflection of what a good decision looks like in the future.

In the case of health there are enormous benefits to be had, but also a huge danger, because there is enough research showing that only certain types of people had access to health care in the past, which overlaps with certain types of genetic conditions and with how male versus female patients are treated, and all of these institutional realities and human pitfalls that data can never, I think, fully capture. And being mindful of that, regardless of how representative your data is, it is still a reflection of past human interaction that is almost always harmful for those who have been disadvantaged already.

>> LEVESQUE MAROUSSIA: I would like to pick up on two points. One is the point of trust, that ethics are a way to build trust. But if you look at a lot of the technology that we use, and actually think of the companies behind it, you can question for yourself whether you actually trust them. And the answer might be a bit in the middle; you might have a little bit of an uneasy feeling about it, because we all use these big services and we all know they are not that respectful of our privacy. So trust is not a one-dimensional thing. Trust is a multi-dimensional thing. Take Google, for example. Your Gmail is very secure, but they also look at your data. So it is not one way or one size fits all. It is a very complex question whether you trust the companies behind it.

And then the question is also what you can do as a user, or as an individual who is subjected to these systems. So I think trust is maybe not the right wording. And then, when we talk about accuracy, this is very common in the discussion: how accurate can we make these systems, can we make them less biased and less discriminatory? The principles behind these ethical guidelines are about privacy and things like this, like accuracy; but also, if you look at the European guidelines, the first thing is that it should be lawful, and you can question what is lawful, because it is always happening in a context. Even if you have the fairest system in the world, or the most accurate system in the world, it might not be applied very fairly.

So if you take a look at, for instance, facial recognition or fraud detection systems, in their piloting phase they target specific parts of cities, specific cities, and not the general population. There was just a lawsuit in the Netherlands against SyRI, which is a fraud detection algorithm for welfare fraud, and it was only tested in six areas, and they were very low income areas, because the Government has the most data on those people. It is only applied to one portion of the population and not to the others. We have to look at the situations in which these systems are being applied. And then, there is somebody in the room here who was involved with the creation of the EU guidelines on trustworthy AI, and from the Civil Society organizations who have been part of this process a lot of criticism has been raised about the process of how these guidelines were created and who was in the room.

And we have these very beautiful guidelines, and it is still questionable whether they are implemented in EU Horizon 2020 AI funding, whether they are being applied across the board by the EU. There are so many questions around these ethical guidelines. And there is a lot of knowledge in the room as well. But I think just looking at the technology itself is too limited.

>> CATH CORINNE: So that brings us to the last question that I would like to ask the panel before we open it up to the rest of the room. Clearly there are a lot of outstanding questions that need to be asked, and context, and how to make sure it is taken into account in any kind of discussion, is one of them. I want to ask all of you: what are some examples of frameworks, regulations, organizations or actors that you feel are getting it right? Can you speak a little to the debates that you see going in the right direction, why that is the case, and what we can learn from them today?

>> LEVESQUE MAROUSSIA: So I have been in the room a few times with Vidushi. Article 19 is raising some very critical issues. Access Now has been involved with a lot of the ethical debate, and they are pushing for Human Rights standards and doing critical work. I think it will be interesting to see how the San Francisco and California facial recognition bans go. There are a few actors like this that are very interesting to follow, and then we will see how things play out.

>> VIDUSHI MARDA: Yeah, I agree about the California ban on facial recognition. That was the first instance where we saw the supposed inevitability of these systems being questioned through a critical discussion, and that's where we need to go. By treating these systems as inevitable we necessarily give up some amount of critique, and I think that is very dangerous given how sensitive and how profound the implications are. I generally think that if we look at these systems and don't treat them like a silver bullet for social problems, but rather as socio-technical systems that must adhere to first principles of law, whether that's international Human Rights or constitutional law or consumer protection or whatever body of regulation is actually verifiable and actionable, there is a lot of space for that. I don't think enough of that is being done, which is a big gap in current discussions. But yes, I agree that we need to see how these bans and things play out; just questioning inevitability, looking back on what we already have, and not treating these systems as magic would be a great step.

>> BERNARD SHEN: Instead of citing any particular conversation, I would say it is important that we are having them at all. We desperately need any and all of these conversations so that we can surface all of these concerns, issues and questions in society. There is no exception: every major new technology that comes on the scene through the ages creates fundamental challenges and changes to society, and we need to figure them out. It causes a lot of concerns and questions. It could also cause a lot of harm, because if you rush into it too fast, then you put your blinders on, you don't see the problems and you cause the problems. So when it comes to sensitive uses, uses that could cause harm, Microsoft would certainly advise caution and proceeding with caution, having conversations to figure out where the technology is and what the circumstances are, and using imagination to figure out how it could go wrong, who it could harm, and how we can mitigate and address that risk.

Certainly when Vidushi talked about whether people know they are being harmed or not, I think that goes to the point of transparency. I haven't had a chance to address that in my previous comments, but one of the AI principles at Microsoft is transparency. If the use of a technology is so opaque that you don't even know about it, and yet it affects you, the companies that develop the technology or the organizations that implement it should be transparent about it, so that you know how it is affecting you; you have a right to know how the operation of that technology is affecting your rights. But the most important thing is to have constructive conversations. When you talk about bans, sooner or later this technology advances and organizations use it. I hear the point that somebody may be using it just to demonstrate to the public that they are doing something, and maybe there are scenarios like that, but equally there are a lot of scenarios where organizations are using it because they believe it can help them make better decisions that benefit people, and they are experimenting with it, using it and finding that it can indeed help them. So this is happening.

The question is how fast it happens and how thoughtful we are about letting it happen. And, as I said in my opening comments, must we choose between ethical frameworks and Human Rights law? You need both, you need it all, because you need anything that can help you figure it out; you need to look at it and incorporate it. That's a responsible, constructive way to move forward as a society, so that you can take advantage of new technology that, after all, is the product of our human ingenuity. Data scientists come up with it and come up with ways to use it to benefit people, and people in Government figure out responsible, thoughtful regulation to mitigate the risks of these uses violating people's rights. So everyone has a role, and everyone has a constructive role, to make sure that it is being used in a responsible way that benefits society.

>> CATH CORINNE: Thank you. And on that note of the call for constructive conversations, I would like to open it up to the floor if there are any questions. I would also like to ask you to briefly introduce yourself and keep your question snappy, as there are many of you. I will start from the right-hand side.

>> I am Veronica Teal, advisor for Algorithm Watch. We have compiled a global inventory of AI ethics guidelines. We found 106. We are not really that bothered with looking at their exact content, because they are all broadly the same. What we found absolutely startling is that there is next to no evidence of any sort of self-regulating enforcement. A company may have a shiny, all-singing, all-dancing AI guideline, but there is very little information out there on whether anybody is adhering to it. I think we found roughly six who have anything on that. So, in other words, it seems more of a fig leaf.

The other thing, and the question I would pose to the panel, is how long we are going to talk about AI ethics as if it were something completely new. Corporate social responsibility has been around since the 1960s. We are still struggling to get companies to adhere to principles like no child labor, environmental care and all that sort of thing. At the moment the AI ethics discussion is conducted as if it were something completely new, whereas the basic principle of do no harm is really not that hard to understand, and Google after all started out by saying don't be evil. So for me it is not a discussion of whether or not we should have AI. For me it is a discussion of how we are going to get companies to adhere to standards that we all seem to agree are a good idea. Thank you.

>> LEVESQUE MAROUSSIA: I agree it is nothing new. We should also celebrate the things that people are actually doing that are not framed around ethics. So, for instance, even though they didn't completely win, it is important that Liberty took the South Wales Police to court over applying facial recognition where there is no legal framework to govern it. They are trying to push for the creation of new legal standards, and I think this should be celebrated. And we should acknowledge the fact that this is a very long process that Liberty is going through. I also think we should question where the money for the South Wales Police to do this is coming from. Like in any other sector that has been hit by central Government cuts, if you get a tech budget, you are going to use it and apply it. This is what we are seeing in the UK police landscape: they have a budget, and the Home Office has said, we have a police transformation fund you can use for tech innovation. As soon as there is money for AI and ethics, people will work on AI and ethics, because this is the way the world works. So while I think what Liberty is doing is very important, because they are creating an enabling environment, I do think there are a lot of other things happening.

>> CATH CORINNE:  Can we have two seconds for another response?

>> BERNARD SHEN: I can't speak for other companies. Microsoft has AI principles, and we have processes and procedures to review business scenarios and make a decision on whether we should proceed or not. And there have been, you know, I can't talk about private, confidential instances, but we have made public acknowledgement of a police request scenario where we turned down the request. Principles are not principles if they are readily abandoned, if we go after revenue at the sacrifice of our principles. We look to see whether something lives up to our principles and whether we should proceed or not. And we should hold ourselves accountable, and society should hold everyone accountable, in terms of Government regulation as well as the private sector companies that develop the technology and the companies that use it.

>> I think all the panelists agreed on the principle that a legal framework is essential. I think that everyone agreed on that. Even in the example of car safety, the features that go beyond the legal requirements do not substitute for the legal requirements. So I think that's a point. And on ethical frameworks, the coalition of ‑‑ came out with principles for legal framework creation. I will just read out five of those principles, which are very relevant for having legal frameworks that work for everyone.

Principle 1, data subjects must own their own data. The private sector collects it but doesn't own it, and who is able to derive value from and exploit the data is an important consideration. Two, our data requires protection from abuse. Three, we need tools to control our data. Four, data commons need appropriate governance frameworks, and data production and sharing require new institutions. Legal and institutional frameworks are absolutely essential if we want AI to work for us in an ethical manner. Ethical frameworks are only part of the picture. I have the principles with me, and if anybody is interested they can pick them up from me afterwards.

>> I am from the Centre for Policy Studies. I am essentially an AI policy researcher. First I would like to thank the panel for at least acknowledging that regulation is at the core of this debate; that's new. But there is one word I was hoping to hear, because it has been a very critical panel, and that word has not yet been heard. It is capital. Rights include the right to life and the right to equality, and machine learning systems are ultimately systems which amplify and make efficient current modes of social relationships. So when are we going to start talking about material power, intelligence, influence, control and, most importantly, wealth, and about who these systems are helping?

An analogy was made to cars. Many people who know this economic history know that the prevalence of cars, for example, in the United States of America was not due to them organically being better than, say, trains. These were very contested political choices. Lobbying was involved, and ultimately you have a reality where people who buy cars are privileged over people who require public transport. I am not saying that's exactly what the AI situation is, but we are seeing a lot of function creep in the country where I come from, India. You often see code precede policy. You have artifacts being made, things like the facial recognition which was mentioned, and then those things become the de facto standard, and policy catches up later and tries to justify the world that already exists. When are we going to start talking about influence and material wealth? Because I think that's central to the question of any rights-based framework. Thank you.

>> VIDUSHI MARDA: I fully agree. I think when we think about Artificial Intelligence systems there is always a focus on the stage of deployment, when the system already exists, when someone is denied a loan, and if we are going to reserve our critique for that stage we are always going to be too late. Thinking about conceptualization and design, and then development, testing and deployment, I think you are right: following the money and the incentives is a much more effective way. Thank you.

>> Okay. I am wearing two hats. One is as a researcher from Brazil, since 2011, but also as a representative of the European Data Protection Supervisor. So my question is to Microsoft. First of all, I feel very relieved to note that a tech giant like Microsoft has decided to adopt such principles. In particular, I was pleased to see some public statements from the company, such as the commitment to honor California's privacy law. I would like to know what Microsoft's approach is in other commercial relationships. For example, take facial recognition: if it is being provided by Microsoft in any B2B or B2C relationship, what are the conditions, clauses or whatever that are being proposed by Microsoft to ensure that its service will provide minimal safeguards to protect the privacy of its customers? Thank you.

>> BERNARD SHEN: Yes. With regard to facial recognition, in addition to general AI principles, we also have principles on the use of facial recognition that include fairness, accountability, transparency, and specifically also law enforcement surveillance use. So those are the principles we go by: is the facial recognition fair? For example, if the data is biased and unrepresentative, as was suggested earlier, and has a high error rate for people of color or for a particular gender, then it is not fair, and it would not be appropriate to use that technology. And also transparency: for example, if you are using the technology to scan everyone in public, should there be some notice so that people know the technology is being used?

So those are the questions that not just Microsoft internally needs to consider; the public should have the conversation, because it affects everyone. And it informs all of us, if we have those conversations, to arrive at a norm of what we in society expect. Because on the one hand law enforcement is acting in good faith; they are trying to protect all of us, and public safety is a human right. We all want to be safe and protected from harm. In these conversations we should not forget about that. But at the same time we don't want to sacrifice civil liberties and other rights as we pursue the protection of public safety.

So we need to have those conversations not only within companies but as a society, as to what is reasonable for the police to do, because in the absence of that, a Government is left on its own to figure out when it can engage and use this technology to pursue law enforcement and the protection of public safety. If those conversations are had and we go through all those scenarios, not just the ones I mention, that becomes law at some point. Maybe it is not a ban; maybe it is some permitted use, where you can use it in these cases, but in those cases it is not allowed. Even where the technology could be used, if society believes it is not the right balance between public safety and people's civil liberties, we would not allow it to proceed.

>> Hi. I am a data scientist working in Sri Lanka. I would like to thank Levesque for pointing out that perhaps people have watched too much Terminator II and that there are different modes and granularities. Going by the discussion from some others as well, there seem to be a few problems clashing at the table. The first is the demand for explainability, the need to understand the black boxes that we live under. The second, conflicting with that, is competition law, where some countries let their corporations say this is our secret sauce and we can't reveal it. And the third is the issue of bias, which exists in every system, machine or human, and it is mathematically impossible to engineer a system that does not have some error rate in making a decision between any two given groups unless it is under conditions of perfect prediction.

What I would like to ask, because you have been studying the laws and the conversation around this: why is nobody talking about accreditation of systems?  If you take a machine learning system apart, we should be able to examine the datasets, interrogate the bias in the categories, and discuss whether those data categories reveal information on protected classes.  And even if we do not expose the model itself, we should be able to feed it enough inputs, synthetic instances, since feeding it actual personal data would be unethical, and examine the outputs across enough different distributions of data.  We could then look at what human error rate we consider acceptable, test the system against that accepted error rate, and make a judgment on whether we use the system or not.  Is there a conversation on doing this?  Because this is practically possible with the level of technology and the legal structures we have today, whereas explainability, on a mathematical level, is still a pipe dream.  I would put that to anyone who would like to take it.
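A minimal sketch, in Python, of the kind of black-box accreditation test described above: probe the system with synthetic inputs, measure its error rate per group, and compare against an agreed human benchmark.  The stand-in system, the probe generator and the 5 percent benchmark are illustrative assumptions, not part of any existing accreditation scheme.

```python
# Sketch: accredit a black-box system without opening it.
# Synthetic probes (no real personal data) with known ground truth are fed in,
# and per-group error rates are compared against an accepted human benchmark.
# The stand-in system, probe generator and 5% benchmark are assumptions.
import random

ACCEPTED_HUMAN_ERROR_RATE = 0.05  # assumed benchmark agreed in advance


def black_box_system(features):
    """Stand-in for the vendor's opaque system; in practice an API call."""
    return features["score"] > 0.5


def synthetic_probe(group):
    """Synthetic input with known ground truth for the given group."""
    true_score = random.random()
    observed = true_score + random.gauss(0, 0.1)  # noisy signal the system sees
    return {"score": observed, "group": group}, true_score > 0.5


def audit(groups, n_probes=10_000):
    """Return the observed error rate of the black box for each group."""
    rates = {}
    for group in groups:
        errors = sum(
            black_box_system(features) != truth
            for features, truth in (synthetic_probe(group) for _ in range(n_probes))
        )
        rates[group] = errors / n_probes
    return rates


if __name__ == "__main__":
    for group, rate in audit(["group_1", "group_2"]).items():
        verdict = "pass" if rate <= ACCEPTED_HUMAN_ERROR_RATE else "fail"
        print(f"{group}: error rate {rate:.3f} -> {verdict}")
```

The design choice here is the one the questioner describes: no access to the model's internals is needed, only the ability to send inputs and observe outputs, plus an agreed benchmark to test against.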

The second is more of a general comment addressed to Bernard from Microsoft.  The libertarian ideal that companies should be able to regulate themselves, and have done so, is perhaps a little naive.  I understand that Microsoft makes many good attempts and that you do not leave your morals at the door when you walk in to work, but you are not the only people doing this work.  As a case in point, right now there is a particular company, a Norwegian company doing AI, that has brought something to the Parliament in my country saying they have solved hate speech.  We asked what they are doing; they don't have the slightest clue.  That level of conversation, where you stop and say no, this is unethical, is missing in many instances.  And the California thing is hilarious: it was California law enforcement that requested facial recognition in the first place, thinking maybe I can put a system there.  I think that also needs to be addressed, particularly in the Global South, where a lot of these systems come into play without being discussed in fora like this.

>> BERNARD SHEN:  I absolutely agree with you.  In case my comments weren't clear, Microsoft absolutely believes that regulation has a place.  You cited an example: some companies act responsibly, but you can have the scenario where they don't.  I want to quickly address some of the other questions you raised.  In terms of the black box, I would just say that while we absolutely should be concerned about the transparency of machine learning and new technology, we should remember that in the good old days of pure human decision making, decisions were not necessarily that transparent either.

>> Absolutely, which is why the subject is being approached in certain circles by analogy to expert testimony: when an expert testifies in a court of law, his testimony may not be understood by the judge, but we look at accreditation and the history of his work to judge whether this person can be trusted.  Perhaps that process could be applied here.

>> BERNARD SHEN:  When you have a human decision maker making a decision, that decision could be biased, and unconsciously so; the decision maker may not know that he or she is being biased.  The point I want to make is that machine learning, if used wisely and constructively, could actually help address that problem because of the presence of data.

You mention error rates.  Companies that develop this technology and organizations that implement it have both the opportunity and the responsibility to address error rates, and to address the opaqueness of the pure human decision makers we used to live with, by doing testing, because machine learning is not just the development of a prediction model.  You test it.  Take loan decisions, for example: you don't want to repeat the mistakes of the past.

With machine learning, one approach is to first build a model based on historical data and then carefully create a test dataset covering a wide spectrum of loan applicants with varying backgrounds, income levels, ethnic groups, genders, et cetera.  You then stress test that model, which is based on the past, and see how it does.  If it turns out that it denies loans to minorities or to women most of the time, you have very strong empirical evidence that the historical pattern has a problem and needs to be addressed.
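A minimal sketch, in Python, of that kind of stress test: fit a model on historical decisions, then compare approval rates across groups on a broad test set.  The synthetic data, column names and the 80 percent parity rule of thumb are illustrative assumptions, not Microsoft's actual tooling or process.

```python
# Sketch: stress-test a loan model by comparing approval rates across groups.
# The data, column names and 80% parity threshold are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: features plus past approve/deny outcomes.
historical = pd.DataFrame({
    "income":   [30, 80, 45, 120, 25, 95, 60, 40],
    "debt":     [10,  5, 20,   2, 15,  4,  8, 12],
    "group":    ["A", "B", "A", "B", "A", "B", "A", "B"],  # protected attribute
    "approved": [0, 1, 0, 1, 0, 1, 1, 0],
})

features = ["income", "debt"]  # the model never sees "group" directly
model = LogisticRegression().fit(historical[features], historical["approved"])

# Test applicants drawn to reflect the real applicant pool, including a group
# that has historically had lower incomes and higher debt.
test = pd.DataFrame({
    "income": [35, 40, 55, 30, 90, 100, 75, 85],
    "debt":   [12, 14,  9, 15,  4,   3,  6,  5],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})
test["predicted_approval"] = model.predict(test[features])

# Approval rate per group: a large gap is empirical evidence of disparate impact.
rates = test.groupby("group")["predicted_approval"].mean()
print(rates)
if rates.min() < 0.8 * rates.max():  # "80% rule" of thumb, assumed here
    print("Warning: approval rates differ substantially across groups.")
```

The gap in this toy comes from income and debt patterns the model inherited from the historical data; the stress test does not say why the gap exists, only that it is there and needs human scrutiny before the model is used.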

So machine learning and AI can be a force for good if you use them responsibly and creatively.  And to close by addressing your point about accreditation: in connection with facial recognition, Microsoft has proposed that, in order to address this very sensitive use, tech companies that provide the technology make available a public API, an application programming interface, so that researchers can access the system and verify whether it is accurate, fair and unbiased.  We need to find ways to allow people to gain trust that the technology is being developed in a way that addresses error rate and bias issues.
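As a rough illustration of what such third-party verification through a public API could look like, the sketch below sends labeled probe pairs to a vendor endpoint and tallies error rates per demographic group.  The endpoint URL, request format, response field and file paths are all hypothetical; Microsoft's actual proposal is not specified at this level of detail.

```python
# Sketch: an outside researcher auditing face verification through a public API.
# The endpoint, request format and "match" response field are hypothetical.
import requests
from collections import defaultdict

API_URL = "https://vendor.example.com/v1/face/verify"  # hypothetical endpoint

# Labeled probe pairs the researcher brings: (image_a, image_b, same_person, group).
probes = [
    ("img/a1.jpg", "img/a2.jpg", True,  "group_1"),
    ("img/b1.jpg", "img/c1.jpg", False, "group_2"),
    # ... many more pairs, balanced across demographic groups
]

errors = defaultdict(int)
totals = defaultdict(int)

for path_a, path_b, same_person, group in probes:
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        resp = requests.post(API_URL, files={"image_a": fa, "image_b": fb})
    predicted_same = resp.json()["match"]  # hypothetical response field
    totals[group] += 1
    if predicted_same != same_person:
        errors[group] += 1

# Publishing per-group error rates is what would let the public judge fairness.
for group in sorted(totals):
    print(group, "error rate:", errors[group] / totals[group])
```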

>> VIDUSHI MARDA:  I think the argument that we can make machines less biased than humans because we can see where the bias comes from is an interesting academic exercise.  You cannot teach a machine how to feel, and you cannot teach a machine what discrimination looks like or what past discrimination rooted in systemic inequality is.  So I differ a little bit there.  On your question about understanding black boxes, I think Cynthia Rudin has done great work.  On an accreditation system, I don't know that I have seen something specifically like that, but impact assessments are becoming increasingly popular.  The problem, however, is that it ends up becoming a game of Whack-A-Mole: you assume there could be bias on the basis of gender and you fix that, but given the huge datasets involved you never know how a system will function or what it will pick up as a discriminating factor.

>> Actually you could because there is significant ‑‑

>> CATH CORINNE:  I want to be mindful of the number of questions in the room.  I ask you to take it up over coffee.

>> I want to counter.

>> CATH CORINNE:  I want to make sure that everyone gets heard.

>> VIDUSHI MARDA:  I will stop there so other people are heard.

>> LEVESQUE MAROUSSIA:  Can I add one more thing?  In the fairness, accountability and transparency in machine learning community, this is often the rebuttal: humans make mistakes too.  Who makes fewer mistakes, machines or humans, and how can we make machines and humans interact well together?  Usually the mistakes get added on top of each other.  And maybe a call out to the room: if anybody knows an implementation of AI that is actually for good, come find me afterwards.  I haven't found one yet; there are always a lot of issues around it.

While we are very critical about the use of AI, I do think it would be interesting to figure out the cases where it is actually used for good.  Maybe things like spam detection, or maybe something more infrastructure related.  I would like to see these examples and explore them to understand what is so good about them.  That's my comment on AI as a force for good: if you have examples, come talk to me.

>> CATH CORINNE:  I know that a lot of people are having to go to other panels.  We can entertain one more question before we slowly head off.

>> So hello.  My name is Mairana.  I come from Brazil, where I am a journalism student.  As a youth IGF participant I would like to ask all of us, as different multistakeholders, which world we want.  I see a lot of initiatives using AI, especially to discriminate and to reinforce racism over there; we are facing a huge problem with our public security policies.  I would like to suggest to all the stakeholders on this panel that maybe we should look for different strategies.  One I would like to suggest is intersectionality, built through the experiences of black women.  It is something that has already been used in the legal field but needs to be spread, because with intersectionality we do not look only at the profile of the person who has been discriminated against; we turn to look at the structure, and by looking at the structure we can address humanity as a whole.  I think that is something that needs to be reinforced, and intersectionality can help us see the blind spots in the thinking of companies and in Civil Society demands.  Because as a young black woman in Brazil, I am really afraid of the use of AI as a new kind of colonialism.  And that's it.

>> VIDUSHI MARDA:  Yeah, I think you are spot on, and you said very eloquently what it took people many years to say: that an intersectional approach is absolutely necessary.  It is hard to do, but I don't think that should stop the conversation from going there.  And I think the fairness, accountability and transparency in machine learning field has come a long way in the last three years, from coming up with technical definitions of fairness or transparency to looking at how these interact with different intersections of society.  So thank you for that.

>> BERNARD SHEN:  What you said really strikes a chord with me, because Microsoft believes this technology should be for everyone.  Inside the company the buzzword we use is democratizing AI.  This technology shouldn't be confined to the rich and powerful, to the biggest institutions.  We want to make the technology available so that any organization with an idea about how to improve people's lives can make responsible use of data to build a prediction model, not to have the machine blindly apply decisions.  Just as a sidebar, one of the things Microsoft emphasizes is that humans should be in the loop.  You really need both: a machine doesn't know what's ethical, but humans do.  So you test the model, you stress test it with good test data, and you look at the results.  And people acting in good faith see from the results whether minorities and women are being denied loans.

So the institution making those decisions can make changes so that they become more fair.  And we want all institutions around the world to have access to the technology so that they can apply it in good faith, for responsible and beneficial use.  That could address the concerns that you have.

In terms of good uses, there are so many examples.  One example I recall, and I want to cite it because it does not come from a huge institution, is an organization called Path; I remember reading about it some time ago.  One of their projects was to help address the problem of malaria in Africa.  They ran a project in an African country, it was a while ago and I don't remember all the details, where they used machine learning on medical supplies, treatment and patterns of disease to predict where to direct their efforts to be most effective.  I may be wrong about the precise figures because I don't have a perfect memory, but before they started, the infection rate in that region of the country was around 50 percent: one in two people got malaria.  After using machine learning to apply their efforts, medicine and treatment more effectively, I believe the infection rate went down to 1 or 2 percent, one or two people in a hundred.  I would say that for those 48 people it makes a difference; I think they appreciate the fact that they have been helped.  So I would suggest that this is technology that is beneficial.  Absolutely there could be problems, and we need to use laws, ethics, whatever we can come up with, to help us use it responsibly, because it can bring benefit when we do.

>> CATH CORINNE:  On that note, and that important call from the youth coalition here at the IGF to take an intersectional lens, I hope you will all join me in thanking our speakers.  And I hope this will result in many interesting conversations over lunch.

(Applause)