
IGF 2019 – Day 3 – Raum I – WS #175 Beyond Ethics Councils: How to really do AI governance - RAW

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> CATH CORINNE:  This seems to be working.  Good morning, everyone.  Thanks so much for coming to this Round Table on how to really do AI governance beyond voluntary ethics frameworks.  I want to extend a special welcome to the three panelists for the Round Table: Levesque Maroussia, Bernard Shen and Vidushi Marda.  My name is Cath Corinne and the session is going to be structured as follows.  We are going to start off with a short primer contextualizing the contention around AI ethics, having a bit of a look at the debate on how to govern and regulate AI systems, and after that each speaker will take about ten minutes to give their views.  This is followed by a short panel discussion between the panelists and then we open up for Q and A.  So to give a bit of a sense of the current discussion, a recent quote: the relevant discussions of ethics are based on almost entirely open‑ended notions that are not necessarily grounded in legal or even philosophical arguments and that can be shaped to suit the needs of industry.  These are choice words coming from the most recent report of the UN Special Rapporteur on extreme poverty, who questions the efficacy of ethics as a framework for AI governance.  He is obviously not alone in voicing these kinds of criticisms about what is often dubbed the turn to ethics in the debate about what normative and legal frameworks are best for AI governance.  AI ethics is undergoing its own tech lash moment, and one of the critiques is whether these frameworks and Councils ever lead to real accountability or are a sufficient preamble to regulation.  Critics also argue that ethical frameworks and councils can be fuzzy, lack shared understanding or be easy to co‑opt, and do not foster actual corporate accountability.  At the same time there are many others who argue we should not throw out the baby with the bath water and that, when done right, these kinds of frameworks do provide a very solid base.  Now it is clear that this is a very contentious discussion and that's why we are all here today.  And the discussion is particularly timely given the increased use of automated systems in very societally critical spheres like health care, leasing and the judiciary.  We are going to try to focus on three things.  The first is we discuss the recent surge in ethical frameworks and self‑regulatory Councils for AI governance and talk about some promises and pitfalls, and we are also going to discuss some other strategies and other frameworks.  What I will do is ask each speaker to introduce themselves. 

   >> BERNARD SHEN:  Good morning, everyone.  Thank you for having a conversation with all of you here at IGF.  My name is Bernard Shen.  I'm Assistant General Counsel at Microsoft.  Turning to the subject at hand, when we consider how we should govern AI, one question that occurs to me is do we have to choose between ethical frameworks on the one hand and Human Rights law on the other.  Are ethics inherently voluntary?  Are they optional?  For example, unfair discrimination is unethical but also often against the law.  Are Human Rights only respected and protected by laws?  Antidiscrimination laws prohibit unfair discrimination, but are those laws alone enough to help advance and protect the rights of a vulnerable population?  Perhaps one way to think about this is that ethics refer to our conduct.  What we do and how we do it.  Human Rights refer to the consequences of that conduct.  The consequences to the people and their rights.  They are connected as two sides of the same coin.  I ask can conduct that harms Human Rights be ethical conduct, or does ethical conduct inherently mean conduct that respects and protects Human Rights?  Let's consider a simple example, someone driving down the street at the maximum speed limit allowed.  The driver is obeying the law.  But then the driver sees some children playing up ahead on the street.  She immediately slows down.  Is that a matter of law or is that also an act of ethical, responsible self‑governance?  AI is a tool, and as a tool whether it does harm or good depends less on the tool itself and more on the human hand that wields it.  And as a tool AI can be and is used in almost anything and everything that we do.  Why is that?  That's because today's AI, modern AI, almost always involves machine learning.  Using math to look at an incredible amount of data, too much data for the human mind to comprehend, to study and to deduce the kind of hidden patterns and insights in that large quantity of data.  Computer science and math can help us see those patterns.  And with varying degrees of mathematical confidence, generate some predictions, mathematical predictions.  And we take those learnings and predictions that the machine and math provide us to help us humans do tasks that we as humans define, or make decisions on questions that we as humans pose.  For example, in farming, how do we be more efficient, less wasteful.  In medicine, how do we find cures more quickly or make more accurate diagnoses.  For the environment, how do we better preserve the natural resources that we have.  So it really can be used in any field where there is an increasing amount of data, to help us gain insights and make better decisions.  But the challenge is that because there are so many possible uses, each type of use presents a different context.  And the ways to use AI in each context in a way that's ethical, responsible, and rights respecting may be very different, and because it is so contextual we need to think about these issues with context in mind.  And I'm going to use one context to kind of walk through it a little bit.  And that's the use of facial recognition by Government authorities.  Imagine five scenarios.  Scenario No. 1, you are participating in a protest march and the Government is using cameras and facial recognition to identify you during the march and to track you wherever you go afterwards.  How do you feel about that?  
Scenario 2, if you have a driver's license you have gone down to the driver's license office to have your photo taken, and they have it on file.  Now let's say there are crimes being committed and some video cameras capture the image of the suspect, but they don't know who they are.  Should law enforcement use facial recognition technology to find a match, including against your photo?  Scenario 3, a government has lots of sensitive Government buildings, sensitive data.  Should it require all the employees who use Smartphones and computers to protect those devices with facial recognition, not just a password?  You have to use facial recognition to unlock your devices.  Or when you enter a building as a Government employee, should they say we have all your photos on file, you can't enter the front door until you show up and have your picture taken and matched to the photo on file.  If it is not you, you have to be further checked out.  Passport control: in the good old days when you entered a country you talked to a person and handed over your passport, and they looked at you and looked at your passport photo to see if it was really you before you could enter.  Now more and more in many countries you don't get to talk to a person.  You stand in front of a terminal and it scans your passport and scans your photo and there is a match.  You probably don't talk to a person during that entire process.  Scenario 4, this is a bit heavy, but let's say you have a family member who went missing and there is reasonable suspicion that there is kidnapping, maybe human trafficking, involved.  You gave the photos of your loved one to the police and they have them.  They are trying to help find this person, save this person.  Should they be able to use video cameras in all public places, airports, train stations, and capture images of people going in and out and compare them to the photo of your friend or family member to help find your friend or family member and save him or her?  Last scenario, scenario 5, you are a music lover and going to a concert in a big huge stadium, but police authorities have reliable intelligence that a terrorist group is trying to target this event with a bombing.  They have photos of members of this terrorist cell.  Should they be able to use cameras at the entryways of the stadium and look at everyone that comes in and compare everyone to the photos of the members of the terrorist cell to try to stop them from entering the stadium?  So just a quick note, if you think about these scenarios from a technical standpoint, from a technology standpoint, there are really two things going on here.  Two different types of facial recognition.  One is verification.  One is identification.  Verification means it is a one to one comparison.  We already have the photo of a known person.  Someone presents himself or herself, and you want to make sure it is really him, really her, before they may unlock their computer device, before they can enter a building or a country.  The second technological use is identification.  You have a captured image but you don't know who that is, and you are trying to identify that person.  And somewhere you have a database of photos of known persons, and you are trying to compare that unknown image to all the known images to see if you can find a match so that you can identify that person. 
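    To make the one-to-one versus one-to-many distinction concrete, here is a minimal sketch.  It assumes faces have already been turned into fixed-length embedding vectors by some model; the cosine-similarity metric and the 0.8 threshold are illustrative assumptions, not any vendor's actual pipeline.

```python
# Minimal sketch of the two facial recognition modes described above.
# Assumes precomputed face embeddings; metric and threshold are illustrative.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.8):
    """Verification: one-to-one. Is the person at the terminal the same
    person as the single enrolled photo (passport, device owner)?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """Identification: one-to-many. Compare an unknown captured image
    against a database of known persons; return the best match above
    the threshold, or None if nothing clears it."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

    The one-to-many case is the one most of the surveillance scenarios above depend on: as the gallery of known faces grows, so does the chance of a false match, which is part of why identification raises sharper concerns than verification.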
    But, you know, regardless of the technological difference in what the comparison is, the question comes back to, in each of these and many other scenarios, how should Governments use this technology.  Should we rely on self‑governance by the tech companies that provide this technology to Governments to help prevent Governments from misusing it?  Tech companies have a role because we understand the technology, we know how it works.  We can help the Government understand what it can or cannot do.  It helps steer them away from inappropriate use.  In fact, Microsoft has gone on record that we have turned down some ‑‑ without identifying the police authority, we have acknowledged that we have turned down opportunities because we felt that the proposed use was not appropriate given the state of the technology and the circumstances involved.  So yes, tech companies certainly have a role, but the problem is that even if some companies try to act responsibly, if you have some companies that do not, then you still have a problem, because they would still be ready and willing to provide the technology to Governments, who may use the technology in ways that we the public do not find acceptable.  So we also need government to regulate itself with thoughtful regulations on all of these use cases.  But as I mentioned, the tech companies also have a role.  We also need to engage in self‑governance.  To have policies and Guidelines.  And the two kind of work hand in hand, because laws take time to enact and they get outdated quickly, because the technology develops very quickly, and if we develop a law that covers today's technology and today's scenarios, the technology moves too fast and the law would fall behind.  And so it is also important that both governments and the tech companies have policies and Guidelines to think about new scenarios, evolving scenarios, and how to address them, and longer term more new laws may be needed. 
    So let me close with something that Microsoft's CEO wrote in an Article in June of 2016.  He talked about the partnership between humans and AI.  He said the most productive debate isn't whether AI is good or evil but, and I quote, "it is about the values instilled in the people and institutions creating this technology".  So when societies enact laws, or when we have international laws, those laws often reflect the values of us human beings.  But values are more than laws.  They also inspire and guide us to self‑govern, to engage in responsible ethical conduct.  Conduct that respects and protects Human Rights.  So what we need is all of us in this room and beyond, all of society, to be involved in these conversations.  Conversations to figure out thoughtful laws to regulate the use of AI, and self‑governance policies and Guidelines for responsible conduct by Governments, by tech companies that develop the technology, and by all the institutions, whether Government, private, non‑profit or any other institutions, that implement and use the technology in so many different contexts.  We need everyone at the table to have these conversations because ultimately these are conversations about our values.  How we connect the ways we use the technology tools that we have to our values.  I look forward to having these conversations with you today.  Thank you. 

   >> VIDUSHI MARDA:  Thanks.  For my initial intervention I would like to throw ‑‑ I have to introduce myself first.  Hi, I'm Vidushi Marda, I work at Article 19.  I work in technical and policy discussions and sort of try and bridge the gap between the language and assumptions that underpin both of these stakeholder groups.  So for my initial intervention I would like to throw out four provocations to the group, to add to what Bernard said and also for us to have a more critical understanding of the space in general.  So the first is machine learning is not always appropriate for social purposes.  For instance, there is a lot of talk about how data is really effective, how machine learning can look at a large amount of data that no human can.  But maybe we shouldn't be using machine learning in many, many instances where the system oversimplifies socio‑technical problems and tries to reduce them to a mathematical formula.  I think ethical frameworks at the moment don't fully engage with this complication, which is what we are finding, where you can say we want transparency and equality and respect privacy, but at the same time you can be undermining a lot of social problems and also making discrimination worse and social problems worse.  That's the first provocation.  The second is I think there is a false dichotomy between ethical frameworks and regulation.  Because one is not necessarily the replacement for the other.  And neither is it, I think, constructive to think about ethics as a preamble to regulation.  Ethics affords us an idea of this is where we want to be, this is what our conduct should look like.  It has no bearing whatsoever on this is what happens when we don't behave the way we should.  The path that says this is what we can do is regulation.  And it doesn't make sense to have ethical frameworks in the absence of regulation because there is no incentive to effectively follow these ethical frameworks.  Ethical frameworks don't have teeth, which means there is no consequence to not following them.  If we want to be effective with ethical frameworks then having regulation is a prerequisite to it.  It is not an either or situation.  It is not a before or after.  It must exist in tandem if it has to exist at all.  The third provocation is that ethics affords an exceptionalism to machine learning.  What I mean by that is ethical frameworks assume that machine learning should or shouldn't do something, or Artificial Intelligence more broadly should or shouldn't do something, but we are not going back to first principles.  A lot of what we need is already found in constitutional order or Consumer Protection or data protection, but because there is this new "really complicated technology" we go back to the drawing table without engaging with existing regulations that are already in place.  And the problem with ethical frameworks is also that they are built mostly in opaque closed rooms by the people who design and develop these systems.  But not necessarily the people who are subject to their deployment.  So what happens is you are subject to a system and not fully sure how you can appeal it.  The only verifiable public statement that you have is an ethical framework which you can't review and can't fully understand.  There is no one meaning of privacy, no one meaning of accountability.  And the last is I think only having ethical frameworks is more harmful than not having them at all.  Because they also offer a shield of objectivity when there is none.  
So a company, it can be any company, can say, you know, we have an ethical framework where we believe in transparency and accountability and privacy and we respect, you know, nondiscrimination, for instance.  And it almost gives a company the right to move fast and break things and see how systems function without engaging with the actual social cost of these systems, because there is an ethical commitment in place.  In the absence of this ethical commitment we would have regulation and actual verifiable accountability mechanisms that any system should satisfy, and I think ethical frameworks buy time, which is extremely harmful.  I think it is important to remember that Human Rights are an ethical and legal framework.  So I think the false dichotomy is particularly of interest because it discounts the ethical, normative importance of Human Rights frameworks and rights based frameworks in general.  And it would be more helpful to think about it not in terms of whether ethical frameworks are enough, but whether they invoke the right kinds of regulation, existing rights and first principles that we already have.  I will stop there and I look forward to the conversation.  Thank you. 

   >> LEVESQUE MAROUSSIA:  Thank you.  Hi.  I'm Levesque Maroussia and part of the Data Justice Lab.  If you are Tweeting, please use the Data Justice hashtag, because they are currently on strike due to austerity measures in UK higher education.  I do research into how police in Western Europe are using data and technology and what the ensuing social questions are.  And before this I spent maybe ten years working as a practitioner on issues around data privacy, digital security, data protection.  I think I will piggyback on a lot of things that Vidushi said and ground them in the context of Europe and public institutions in Europe, because I think there are so many things we should uncover.  Having spent a lot of time in the tech scene talking to technologists, there is this presumption that AI is already here, and people use it blanketly for everything from basic statistical modelling to machine learning, and this is creating massive problems.  On the one hand it is not here yet.  But it also creates this idea that we can't do anything about it, that it is sort of already happening and we just have to roll over and die.  And maybe we can, like, mitigate the harmful issues by creating ethical frameworks.  I feel this is a false narrative that's being created.  The other is the way that people talk about public institutions, as if they are stupid and there is no knowledge inside of them, and sort of they have no way of regulating this.  I feel like this is overlooking the fact that we have a lot of laws in place that also apply to AI, and this idea that law is slower than AI, I feel it is a fallacy that is coming from the people who are creating these systems.  So I'm trying to unpack these things with a few examples.  And I think one of the issues is that we look at AI and then look to society, whereas we also can look at society and then try to figure out solutions, which are probably not AI.  So I'll go through some examples that I am seeing on the ground in practice.  I think one really interesting case has been DeepMind in the UK, where they had access to health care information of patients of the NHS.  They had access to a few million records.  And according to the actual regulation that was governing access from companies to this data, the NHS made the right decision, because this data is given for innovation R&D projects all the time, whether Philips develops a robot arm to assist in operation rooms or whether it is for trying to figure out a treatment.  So maybe there is nothing wrong with the regulation where you say should companies get access to pseudonymized information of patients to develop new tools and technologies to help us solve a problem, and they were within their rights.  But the problem is that in this they didn't look at Google, or Alphabet, the owner of DeepMind, whose business model is to actually analyze that data and sell it for commercial purposes.  So I think we have to open up these frames more to take into account the context and business models of the companies who are getting access to this data to train algorithms and figure out solutions.  So once it became public and The Guardian started reporting on the fact that the NHS gave data to Alphabet, there was a massive backlash.  DeepMind pulled out because of the controversy.  Do we want big tech companies to have access to our private information?  So this is a point where there is regulation.  We just have to revisit it.  
Then with my case studies on policing, where I talked to police also about facial recognition, I think the interesting thing is what they do themselves as well, where some are inverting the process.  So when you look at risk assessment, or all the examples that were given about should facial recognition be applied in this context, they are sort of relatively easy ethical questions because nobody is standing up on behalf of the people who get targeted.  One of the police officers that I was talking to said, what if we apply this to, for instance, identifying perpetrators and victims of sexual misconduct?  How would we feel about police intervening in people's lives?  Can we preemptively go into a victim's house and say there is a high likelihood that you will be harassed or raped?  Inverting it to another problem than the standard problems all of a sudden makes these questions far more pronounced.  They also said we don't know if we should actually be the actors doing this.  Whereas if we talk about the same problem with high impact crime, so burglary, robbery, there is a sense where people say yes, we can do it, but inverting it to a different problem all of a sudden shows the issues that also apply in the case of terrorism and high impact crime.  So I think in these debates we sometimes have to challenge it by inverting it.  I think we also have to unpack where this entire ethical debate is coming from.  If I look, for instance, at ethics discussions in Europe, a lot of it is also funded and supported by the companies who are creating AI.  And I'm not saying that the things that are coming out of it are influenced, so I don't say the content is influenced by these companies.  But by putting money behind it we are setting the agenda that we have to look at ethics instead of regulation, and we have to be critical about this.  Is it bad that they are spending money on this?  Maybe not, but then why are governments not spending money on figuring out whether the regulatory framework should be different?  So when we look at all of these talks about ethics we have to unpack them.  I see it in policing as well, that there is a lot of money made available after an incident has happened in society, and when we ask the police what they think about this money being made available, for instance, to implement facial recognition or something else, they say oh, we just have to do something to show the public that we care.  But we actually don't know if it is going to work.  So I think we have to unpack what the drivers are that are driving the implementation of this technology on social problems.  Who are the creators?  Because yes, what are the values of the people who are creating AI?  If you look at all the tech companies, it is quite a homogeneous crowd who are creating this.  Are their values the values that are shared across the world and the values that we all hold ourselves to?  So we also have to be critical about this.  We normally don't take context into account.  Things have come from one place and we assume they apply everywhere else.  Ethical Guidelines of the EU would not apply in other contexts.  It is not one size fits all, and I think we also should start having discussions about what the red lines are.  What are the places we don't want AI to be implemented in?  So is it that we don't want it implemented in fraud detection in welfare schemes?  
There are certain areas where we don't want technology to be implemented if we can't be sure what the drivers are to do it.  In the end what we are seeing with a lot of these AI systems is that they are sort of penalizing the poor, marginalized and other groups.  So I'll leave it at that. 

   >> CATH CORINNE:  Thank you all three for these excellent provocations.  There seem to be quite a number of topics that keep coming up.  The limits but also the possibilities that these frameworks provide.  And the question of context.  So I think one of the first questions that comes to mind is that there are currently many of these ethical frameworks, 70 on my last count.  Some of their principles contradict.  Some of their principles overlap.  And considering this mushrooming of ethical frameworks and the importance all three of you stressed of context, how do you make sense of these principles, and their contradictions, from your respective sectors? 

   >> VIDUSHI MARDA:  I think current ethical frameworks ‑‑ we touched on this a little bit ‑‑ have certain deficiencies, but I want to pick up on something that Levesque said about them being built by only certain sections of society and being particularly dangerous for vulnerable communities, whether it relates to gender or race.  I think the ethical AI field kind of makes those systems a microcosm of what the field is.  On the one hand we are in a room saying these systems are discriminatory, they work if you are a white man and won't work if you have darker skin or are a woman.  But the same is true for ethical frameworks.  They work in the areas in which they are built and they are harmful in contexts that haven't been considered in the room.  And I think no matter how many ethical frameworks we have seen, I have yet to see one that meaningfully engages with differences in context and the harm they can create.  To give you an example, a lot of credit scoring algorithms around the world look at things like how many times you leave your house, do you go to the same place every day.  That wouldn't work in a country like India because a lot of women in many classes of India are not allowed to work.  They don't get to leave home.  And that fundamental proxy is inconsistent with the context in which it functions.  But that is not engaged with at the level of ethical frameworks.  So a system could be systematically discriminatory against a vulnerable section of a particular society, but ethical frameworks do not engage with the complexity that comes with it.  So while there may be 70 at this moment, I don't feel the need to make sense of all of them, to be honest, because I think they say the same thing in different ways and in different permutations and combinations, but they don't actually meaningfully change how these systems are designed or developed or even deployed. 

   >> LEVESQUE MAROUSSIA:  So on the 70 ethical frameworks, what I'm seeing when you talk to people who have to implement parts of them, or think through them, is that what they obscure is, for one thing, the question of should we actually implement something to begin with.  So with the question of facial recognition, if you see how police are experimenting with it in the UK, actually for me all of these things that were suggested, like can you identify if the right person is walking into the building and can you see if a terrorist is walking into a football stadium, most of these things can be done by other police work.  For me it is this drive for innovation: first we have to see what it is actually substituting, and because we are having ethical discussions about how we maneuver it in the best way we see fit, we are not asking the question why to begin with.  And then the other thing is what we have also seen with Human Rights impact assessments and privacy impact assessments.  When it comes to the ground where people are implementing them ‑‑ I have seen some of these privacy impact assessments and people Googled them, because nobody is getting training on how to do a proper impact assessment.  So then it is like how should we implement it, how do we get it through the bureaucracy and make it compliant.  So one thing is having these ethical frameworks, and another is how do we implement these at a lower level of Government and public institutions.  And then for me what is quite difficult about these ethical frameworks, and what I have seen for a long time in the tech scene, is that there is no responsibility.  There is quite an impunity when things go wrong.  We have seen it so many times, also when Google launched their facial recognition algorithm and it identified African people as gorillas, and they went oops, sorry.  When you apply it then in a context of policing, where does the responsibility lie when things go wrong?  And I think these are the discussions that are not being had, because they have said like oh, we went through all the right processes and procedures.  It is also a question about responsibility. 

   >> BERNARD SHEN:  Let me make two overall points.  The first one is do we need these ethical frameworks.  I think we do, because as I think was reflected in my opening comments, I don't think law alone is enough.  You are right, there are a lot of ethical frameworks, a lot of AI principles, and one could spend an infinite amount of time sorting through all of that.  That is probably not the best use of time.  You do need some sort of ethical reference because law alone ‑‑ why is law alone not enough?  A couple of examples.  I don't know how many of you have heard of this law about motor vehicles.  In the UK back in 1865, I could be wrong, the Locomotive Act, there was a law that said when a motor vehicle moves down the street you have to have a person walk in front of it waving a red flag to warn everybody else that the motor vehicle is approaching.  It is a safety measure.  I think that law lasted 30 some years before it was repealed.  It wouldn't make any sense today.  And if you look at cars today, do they strictly adhere to the law?  They have the safety features the law requires, but many cars also have safety features that go beyond the law.  Why do they do that?  Maybe because they understand that people desire safety and the minimum requirement of the law maybe doesn't go far enough.  So in order to earn the trust of customers, and meet the need for safety, they provide features that go beyond what's required by law.  And I think about Microsoft: just as humans we don't conduct ourselves only at the minimum of what's required by law.  We have ethics and morals.  And the people at the company, colleagues that I work with, come to work every day and they don't park their values at the door.  They bring them with them.  So they as humans care about doing the right thing, not just the minimum required by law.  And also, like I said, it is a matter of trust.  Nobody is going to use our technology if they don't trust us.  People don't want to use things they don't trust.  To earn that trust it is not enough to do what is required by law.  We have to figure out what it is that we are doing, in what scenario and context, what's reasonable and responsible, and try to meet that expectation.  Otherwise we don't earn that trust and keep that trust.  About data, I want to make a comment about data and discrimination, that's an important topic to talk about.  Modern AI often involves machine learning, and like any science, any technology, it is not perfect.  But it does make important contributions, and as to whether AI is here, I think that's a reasonable debate.  From where I'm sitting, from what we are seeing, AI is already here, being used in many fields; machine learning is being used in many fields.  It is absolutely correct that there is a risk of discrimination.  And with machine learning it is critical what data we use.  There was a study where they found that, after crunching the data, people with asthma were actually less likely to die of pneumonia, and the reason, they found out after consulting with medical experts, was that if you are an asthma sufferer you are probably going to get much more immediate medical intervention, being checked into hospital when you are sick.  So the chance you actually die from pneumonia is much lower.  And then the question there is should they take out that data point about people having asthma when deciding the risk of dying of pneumonia.  
When you remove a data point, all the other data fields are already affected by the fact that those people have asthma.  They are already kind of polluted and the effect is hidden.  It is better to include more data so that you can account for that abnormality.  So it is really critical that you have good, thorough data.  Otherwise there are blind spots that you don't see, and the predictions that the machine learning gives you are bad.  They are not accurate.  They are less accurate than flipping a coin if you omit a lot of important externalities.  But then, you know, I also hear the concerns about privacy.  I mean, that's the conundrum.  In order for machine learning to be high quality and produce predictions that are highly accurate, you need different types of data and a lot of them.  If a bank is trying to make loan decisions and they only include data from past applicants, people that they granted loans to, and they are all, you know, Caucasian, male, et cetera, then that prediction model is going to skew towards favoring people who are also Caucasian, and other races probably don't get as good a chance of being granted a loan.  It is important to test your data to see if it is representative, so that the machine learning is fair and reasonable, and to account for and address those biases.  But if you want more data you have to address privacy concerns, because people are understandably concerned when you use a lot of data.  So we need to address people's concerns about data protection and privacy, and again not only comply with laws such as GDPR but also think about what truly responsible practices are.  So people can trust that the data is being used in a responsible way for their benefit, without violating data protection laws or invading their privacy. 
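    The kind of representativeness check described here can be made concrete with a rough sketch: train on historical decisions, then compare predicted approval rates per group on a deliberately varied test set.  Everything in it is illustrative ‑‑ the file names, columns and the logistic regression model are assumptions, not a description of any bank's or Microsoft's actual practice.

```python
# Sketch: stress-test a loan model trained on historical data for
# group-level disparities. Files and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def approval_rates_by_group(model, test_df, features, group_col):
    """Predicted approval rate for each demographic group in the test set."""
    preds = model.predict(test_df[features])
    return pd.Series(preds, index=test_df.index).groupby(test_df[group_col]).mean()

# Historical decisions may already encode bias (assumed file).
history = pd.read_csv("historical_loans.csv")
features = ["income", "debt_ratio", "years_employed"]
model = LogisticRegression().fit(history[features], history["approved"])

# Balanced test set spanning incomes, genders, ethnicities (assumed file).
test = pd.read_csv("balanced_test_set.csv")
print(approval_rates_by_group(model, test, features, group_col="ethnicity"))
# A large gap between groups with similar financial profiles is the kind of
# empirical red flag described in the discussion above.
```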

   >> VIDUSHI MARDA:  Just to pick up on some of the things that were said, the example of the flag in front of the car, that is the Locomotive Act.  It is fundamentally different in the case of machine learning.  In the case of a car you see the car, and if a car hits you, you definitely know it.  The problem with Artificial Intelligence is that it is more often than not intangible.  You don't know you are being subjected to a system that has reduced you to a data point, and you don't know who to appeal to.  Even in the case of a state deployed Artificial Intelligence system, even if you appeal to the state it is often said, well, the system said this, we didn't say it and we don't know why the system said it.  You can come and appeal every time you are denied a service, but we can't tell you why you were hit by that metaphorical car.  It is different in the case of systems that you cannot peer into, that you cannot see and that you cannot control.  And they are subjectively and selectively made and built by certain stakeholders only.  The second thing about machine learning is that I think it is a great tool if you want the future to look like the past.  It is a fantastic tool if you want to replicate the past into the future and be efficient and quick while doing so.  It is not an efficient tool when societal complexities are in the fray.  Because I think, regardless of where you stand politically or socially or whatever discipline you come from, it is safe to say that we don't necessarily want to repeat the social discrimination mistakes of the past.  And the problem with machine learning is that it obscures these very complicated, systemic human problems into a simple data point.  That then becomes a reflection of what we want a good decision to look like in the future.  In the case of health there are enormous benefits to be had and also a huge danger, because there is enough research to show that only certain types of people had access to health care in the past, and that overlaps with certain types of genetic problems and overlaps with how male versus female patients are treated, and all of these institutional realities and human pitfalls that data can never, I think, fully capture.  And being mindful of that, regardless of how representative your data is, it is still a reflection of human interaction that is almost always harmful for those who have been disadvantaged already. 

   >> LEVESQUE MAROUSSIA:  I would like to pick up two points.  One is the point of trust.  That ethics are a way to build trust.  But if you look at a lot of the technology that we use, and we actually think of the companies behind it, you can question for yourself whether you actually trust them, and the answer might be a bit in the middle, you might have a little bit of an uneasy feeling, because we all use these big services and we all know they are not that respectful of our privacy.  So trust is not a one dimensional thing.  Trust is a multi‑dimensional thing.  Take Google, for example: your Gmail is very secure, but they also look at your data.  So it is not one way or one thing fits all.  It is a very complex question whether you trust the companies behind it.  And then the question is also what you can do as a user, or as an individual who then gets subjected to these systems.  So I think trust is maybe not the right wording.  And then also, when we talk about accuracy, this is very common in the discussion: how accurate can we make these systems, can we make them less biased and less discriminatory.  And if you look at the principles behind these ethical Guidelines, about privacy and things like this, like accuracy, but also if you look at the European guidelines, the first thing is it should be lawful, and you can question what is lawful, because it is always happening in a context.  Even if you have the fairest system in the world or the most accurate system in the world, it might not be applied in a fair way.  If you take a look at, for instance, facial recognition or fraud detection in welfare systems, in their piloting phase they target specific parts of cities, specific cities, and not the general population.  There was just a lawsuit in the Netherlands against SyRI, which is a fraud detection algorithm for welfare fraud, and it was only tested in six areas, and they were very low income areas, because the Government has the most data on them.  It is only applied to one portion of the population and not to the others.  We have to look at the situation in which these systems are being applied.  And then also, there is somebody in the room here who was involved with the creation of the EU Guidelines on trustworthy AI, and from the Civil Society perspective a lot of criticism has been raised by those who have been a part of this process about how these Guidelines were created and who was in the room.  And we have these very beautiful Guidelines, and it is still questionable if they are implemented in EU Horizon 2020 AI funding.  Are they being applied across the board by the EU?  There are so many questions around these ethical guidelines, and there is a lot of knowledge in the room as well.  But I think just looking at the technology itself is too limited. 

   >> CATH CORINNE:  So that brings us to the last question that I would like to ask of the panel before we open it up to the rest of the room, which is: clearly there are a lot of outstanding questions that need to be asked, and context, and how to make sure it is taken into account in any kind of discussion, is one of them.  And I want to ask all of you: what are some examples of frameworks, regulation, organizations or actors that you feel are getting it right?  Can you speak a little bit to the debates that you see that you feel are going in the right direction, why that is the case, and what we can learn from that today. 

   >> LEVESQUE MAROUSSIA:  So I have been in the room a few times with Vidushi.  Article 19 is raising some very critical issues.  Access Now has been involved with a lot of the ethical debate and they are pushing for Human Rights standards and doing critical work.  I think it would be interesting to see how the San Francisco or California facial recognition ban is going.  There are a few actors like this that are very interesting to follow.  And then we will see how it plays out. 

   >> VIDUSHI MARDA:  Yeah, I think I agree about the California ban on facial recognition.  That was the first instance where we saw this inevitability being questioned through a critical discussion, and that's where we need to go.  Treating these systems as inevitable, we necessarily give up some amount of critique, and I think that is very dangerous given how sensitive and how profound the implications are.  I generally think that if we are going to look at these systems and not treat them like a silver bullet for social problems, and look at them as socio‑technical systems that must adhere to first principles of law, whether that's international Human Rights or constitutional law or Consumer Protection or whatever body of regulation that is actually verifiable and actionable, I think there is a lot of space for that.  I don't think enough of that is being done, which is a big, you know, gap in current discussions.  But yeah, I agree that we need to see how these bans and things play out; just questioning inevitability and looking back on what we already have and not treating these systems as magic would be a great step. 

   >> BERNARD SHEN:  Instead of citing any particular conversation, I think it is important that we are having them.  We desperately need any and all of these conversations, so that we can surface all of these concerns and issues and questions in societies.  There is no exception: every new technology that comes upon the scene through the ages, if it is major, creates fundamental challenges and changes to society.  And we need to figure them out.  And it causes a lot of concerns and questions.  It could also cause a lot of harm, because if you rush into it too fast then you put your blinders on, you don't see the problems, and you cause the problems.  So when it comes to sensitive uses, uses that could cause harm, Microsoft would certainly advise caution, to proceed with caution, to have conversations to figure out where the technology is and what the circumstance is.  To use imagination to figure out how it could go wrong, who it could harm, and how we can mitigate and address that risk.  Certainly when Vidushi talked about whether people know they are being harmed or not, I think that goes to the point of transparency.  I didn't have a chance to address that in my previous comments, but one of the AI principles at Microsoft is transparency.  If the use of a technology is so opaque you don't even know about it and yet it affects you, the companies that develop the technology or the organizations that implement it should be transparent about it, so that you know how it is affecting you; you have a right to know how the operation of that technology is affecting your rights.  But, you know, the most important thing is to have constructive conversations.  When you talk about bans, sooner or later this technology is advancing and organizations are using it.  And they use it ‑‑ I hear the point that somebody may be trying to demonstrate to the public that they are doing something, to look good, maybe there are those scenarios.  But equally there are a lot of scenarios where organizations are using it because they believe it can help them make better decisions that benefit people, and they are experimenting with it and using it and finding that it can indeed help them.  So this is happening.  The question is how fast it happens and how thoughtful we are about letting it happen.  And as I said in my opening comments, must we choose between ethical frameworks and Human Rights law?  You need both, you need it all, because anything that can help you figure it out, you need to look at it and incorporate it.  That's a responsible, constructive way to move forward as a society.  So that you can take advantage of new technology that, after all, is the product of our human ingenuity: data scientists come up with it and come up with ways to use it to benefit people, and people in Government figure out responsible ways to do thoughtful regulation to mitigate the risks of these uses violating people's rights.  So everyone has a role, and everyone has a constructive role, to make sure that it is being used in a responsible way that benefits society. 

   >> CATH CORINNE:  Thank you.  And on that note of the call for constructive conversations I would like to open it to the floor if there are any questions.  I would also like to ask you to briefly introduce yourself and keep your question snappy as there are many.  I will start from the right‑hand side.

   >> I am Veronica Teal, advisor for AlgorithmWatch.  We have compiled a global inventory of AI ethics Guidelines.  We found 106.  We are not really that bothered with looking at the exact content of them because they are all broadly the same.  What we found absolutely startling is that there is next to no evidence of any sort of self‑regulatory enforcement.  Somebody within a company has a shiny, dancing, singing AI guideline but is not adhering to it.  There is very little information out there.  I think we found roughly six who have anything on that.  So in other words, it seems more of a fig leaf.  The other thing, and the question I would pose to the panel, is how long are we going to talk about AI ethics as if it were something completely new.  Corporate social responsibility has been around since the 1960s.  We are still struggling to get companies to adhere to certain principles like no child labor, environmental carefulness and all that sort of thing.  At the moment the AI ethics discussion is treated as if it were something completely new, whereas the basic principle of do no harm is really not that hard to understand, and Google after all started out by saying do no evil.  So yes, for me it is not a discussion of whether or not we should have AI ethics.  For me it is a discussion of how we are going to get companies to adhere to standards we all seem to agree are a good idea.  Thank you. 

   >> LEVESQUE MAROUSSIA:  I agree it is nothing new.  We should also celebrate the things that people are actually doing that are not related to ethics.  So, for instance, even though they didn't completely win, it is important that Liberty took the South Wales Police to court for applying facial recognition.  There is no legal framework to govern it.  They are trying to push for the creation of new legal standards.  And I think this should be celebrated.  And we should acknowledge the fact that this is a very long process that Liberty is going through.  I also think in this we should question where the money is coming from for the South Wales Police to do this.  In a sector that has been hit by cuts from the central Government, if you get a tech budget you are going to use it and apply it.  And this is what we are seeing in the UK police landscape, where they have been hit and then the Home Office has said we have a Police Transformation Fund you can use for tech innovation.  As soon as there is money for AI and ethics, people start working on AI and ethics, because this is the way the world works.  So while I think what Liberty is doing is very important, they are creating an enabling environment, I do think there are a lot of other things happening. 


   >> CATH CORINNE:  Can we have two seconds for another response? 

   >> BERNARD SHEN:  I can't speak for other companies.  Microsoft has AI principles and we have processes and procedures to review business scenarios and make a decision whether we should proceed or not.  I can't talk about private, confidential instances, but we have made public acknowledgement of a police request scenario where we turned down the request.  Principles are not principles if, whenever revenue is involved, we go after the revenue and sacrifice the principles.  We look at whether this is something that we feel lives up to our principles, and whether we should proceed or not.  And, you know, we should hold ourselves accountable, and society should hold each other accountable, in terms of Government regulations as well as the private sector companies that develop the technology and the companies that use it.

   >> I think all panelists agreed on the principle that a legal framework is essential.  I think everyone agreed on that.  Even in the example of car safety, additional capabilities go beyond the legal requirements, but they do not substitute the legal requirements.  So I think that's a point.  And on ethical frameworks, the coalition of ‑‑ came out with principles for legal framework creation, and I will just read out five of those principles which are very relevant for having legal frameworks that work for everyone.  Principle 1, data subjects must own their own data.  If the private sector collects it, does it own it, and who is able to derive value from and exploit the data, is an important consideration.  Two, our data requires protection from abuse.  Three, we need the tools to control our data.  Four, data commons need appropriate governance frameworks.  And five, data production and sharing require new institutions.  Legal frameworks and institutional frameworks are absolutely essential if we want AI to work in an ethical manner.  Ethical frameworks are only a part of it.  I have the principles with me and if anybody is interested they can pick them up from me afterwards. 

   >> I am from the Centre for Policy Studies.  I am essentially an AI policy researcher.  First I would like to thank the panel for at least acknowledging that regulation is at the core of this debate, and that's new.  But there is one word I was hoping to hear, because it has been a very critical panel, and that word has not yet been heard.  It is capital.  So rights include the right to life and the right to equality.  And machine learning systems are ultimately systems which amplify and make efficient current modes of social relationships.  So when are we going to start talking about material power, intelligence, influence, control and, most importantly, wealth, and who these systems are helping.  An analogy was made of cars.  Many people who are into economic history like me know that the prevalence of cars, for example, in the United States of America was not due to them organically being better than, say, trains.  These were very contested political things, lobbying was involved, and ultimately you have a reality where people who buy cars are privileged over people who would probably require public transport.  I am not saying that's what the AI situation is.  We are seeing a lot of function creep in the country where I come from, India.  You often see that code precedes policy.  You have artifacts being made, things like facial recognition which was mentioned, and then those things become the de facto standard, and then policy catches up later and tries to justify the world that already exists.  When are we going to start talking about influence and material wealth, because I think that's central to the question of any rights‑based framework.  Thank you. 

   >> VIDUSHI MARDA:  I fully agree.  I think when we think about Artificial Intelligence systems there is always a focus on the stage of deployment, when it already exists, when someone is denied a loan, and I think if we are going to reserve our critique for that system and that stage we are always going to be too late.  We should be thinking about conceptualization, design, and then the development and testing and deployment.  I think you are right that if we follow the money and incentives it is a much more effective way.  Thank you.

   >> Okay.  I am wearing two hats.  One is as a researcher from Brazil, since 2011.  But also as a representative of the European Data Protection Supervisor.  So my question is to Microsoft.  First of all I feel very relieved to note that a tech giant like Microsoft has decided to consider ad hoc principles.  Particularly I was pleased to see some public statements from the company, such as the commitment to honor California's privacy law.  I would like to know what Microsoft's approach is in other commercial relationships.  For example, take facial recognition: if it is being provided by Microsoft, I don't know, but if it is being provided by Microsoft in any B to B or B to C relationship, what are the conditions, the clauses or whatever, that are being proposed by Microsoft to ensure that its service will provide minimal safeguards to protect the privacy of its customers.  Thank you. 

   >> BERNARD SHEN:  Yep.  With regard to facial recognition, in addition to general AI principles, we also have principles on its use that include fairness, accountability, transparency, and specifically also law enforcement surveillance use.  So those are the principles we go by.  Is the facial recognition fair?  For example, if the data is biased and unrepresentative, as suggested earlier, and has a high error rate for people of color or for a different gender, then it is not fair and it would not be appropriate to use that technology.  And also transparency: for example, if you are using the technology to scan everyone in public, should there be some notice so that people know the technology is being used.  So those are the questions that not just Microsoft internally needs to consider; the public should have the conversation because it affects everyone.  And it informs all of us, if we have those conversations, to arrive at a norm: what is it that we in society expect.  Because on the one hand law enforcement, acting in good faith, are trying to protect all of us, and public safety is a human right.  We all want to be safe and protected from harm.  In these conversations we should not forget about that.  But at the same time we don't want to sacrifice civil liberties and other rights as we pursue the protection of public safety.  So we need to have those conversations not only within companies alone but as a society, as to what it is reasonable for the police to do, because in the absence of that, a Government is left on its own to figure out, you know, when it can engage and use this technology to pursue law enforcement and the protection of public safety.  If those conversations are had and we go through all those scenarios, not just the ones I mentioned, then that becomes law at some point.  Maybe it is not a ban.  Maybe it is some permitted use: you can use it in these cases, but in those cases it is not allowed.  Even if the technology could be used, because society believes it is not the right balance between public safety and people's civil liberties, we would not allow it to proceed. 

   >> Hi.  I am a data scientist working in Sri Lanka.  I would like to thank Levesque for pointing out that perhaps people have watched too much Terminator II and that there are different modes of granularity.  Going by the discussion of some of the others as well, there seem to be a few problems clashing at the table.  The first is the demand for explainability, the need to understand the black boxes that we live under.  The second, conflicting with that, is anticompetitive law, where some countries let their corporations say this is our secret sauce and we can't reveal it.  And the third is the issue of bias, which exists in every system, machine or human.  And it is mathematically impossible to engineer a system that does not have some error rate in making decisions between any two given groups, unless it is in a condition of perfect prediction.  What I would like to ask, because you have been studying the laws and the conversation on this, is why is nobody talking about accreditation of systems.  If you take a machine learning system apart, we should be able to examine the datasets and interrogate the bias and the categories, and discuss whether these data categories are revealing information on protected classes.  And even if we do not expose the model itself, with a machine learning system you should be able to feed it enough inputs ‑‑ instances, if feeding it actual data is unethical ‑‑ and examine the outputs, feeding it enough different distributions of enough data.  And potentially look at what human error rate we consider acceptable, test the system against that accepted error rate, and then make a judgment on whether we use this system or not.  Is there a conversation on doing this?  Because it is practically possible with the level of technology and with the legal structures we have today.  Explainability is still, theoretically, on a mathematical level, a pipe dream.  I would like anyone who wants to take that question to do so.  The second is a general comment addressed to Bernard from Microsoft.  The libertarian ideal that companies should be able to regulate themselves and have done so is perhaps a little naive.  And I understand that Microsoft has made many good attempts and that you do not leave your morals at the door when you walk in to work, but you are not the only people doing this stuff.  As a case in point, right now there is a particular company that has brought something to the Parliament in my country saying we have solved hate speech.  It is this Norwegian company doing AI.  We asked what they are doing.  They don't have the slightest clue, and this level of conversation, in many instances where you would potentially stop and say no, this is unethical ‑‑ and the California thing is hilarious, it was California law enforcement that requested facial recognition ‑‑ maybe I can put a system there.  I think that also needs to be addressed.  Particularly in the Global South, where a lot of these systems come into play without being discussed in fora like this. 

   >> BERNARD SHEN:  I absolutely agree with you.  In case my comments weren't clear, Microsoft absolutely believes that regulation has a place, and you cited the reason: even if some companies act responsibly, you can have the scenario where others don't.  I want to quickly address some of the other questions you raised.  In terms of the black box, I would just say that while we absolutely should be concerned about the transparency of machine learning and new technology, we need to remember that what came before it ‑‑ let's say the good old days of pure human decision making ‑‑ was not necessarily that transparent either. 

   >> Absolutely, which is why the subject is being approached in certain circles by analogy to an expert testifying in a court of law: his testimony may not be understood by the judge, but we look at accreditation and a history of work to understand whether this man is credible or not, and perhaps that process could be applied here. 

   >> BERNARD SHEN:  When you have a human decision maker making a decision, that decision could be biased, and unconsciously so; the decision maker may not know that he or she is being biased.  I want to note that machine learning, if used wisely and constructively, could actually help address that problem because of the presence of data.  You mention error rates, and the companies that develop and the organizations that implement this technology have both the opportunity and the responsibility to try to address error rates, and to address the opaqueness of the pure human decision making that we used to live with, by doing testing, because machine learning is not just the development of a prediction model.  You test it.  For example, if you are trying to make loan decisions, you don't want to repeat the mistakes of the past.  With machine learning, one approach is to first build a model based on historical data, then wisely create a test dataset with a wide spectrum of loan applicants of varying backgrounds, income levels, ethnic groups, genders, et cetera, and then stress test that model that is based on the past and see how it does.  And if it turns out that most of the time it denied loans to minorities, to women, et cetera, you have very strong empirical evidence that the historical model of the past has a problem and needs to be addressed.  So machine learning and AI can be a force for good if you use it responsibly and creatively.  And then, to close by addressing your point about accreditation: in connection with facial recognition, Microsoft has proposed, in order to address this very sensitive use, that the tech companies providing this technology make available a public API, an application programming interface, so that any researchers can access the system and verify whether it is accurate and unbiased.  We need to find ways to allow people to gain trust that the technology is being developed in a way that addresses error rate and bias issues. 
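As one way to picture the stress test being described, here is a minimal sketch under assumed names: historical_X, historical_y, test_X and test_groups are hypothetical variables, and logistic regression is an illustrative stand-in rather than a statement about any real lending system.  The idea is simply to fit a model on historical decisions and then compare its approval rates across groups on a deliberately balanced test set.

```python
# A sketch only: hypothetical data names; logistic regression is a stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression


def stress_test(historical_X, historical_y, test_X, test_groups):
    """Fit on past decisions, then report approval rates per group.

    test_groups: NumPy array of group labels aligned with the rows of
    test_X, which is a curated, demographically balanced test set.
    """
    model = LogisticRegression(max_iter=1000).fit(historical_X, historical_y)
    approvals = model.predict(test_X)  # 1 = approve, 0 = deny
    return {
        group: float(approvals[test_groups == group].mean())
        for group in np.unique(test_groups)
    }


# A large gap between groups is empirical evidence that the historical
# pattern the model learned from is itself biased and should be corrected
# before any deployment.
```

Exposing a comparison like this behind a public API, as proposed above, would let outside researchers run the same check without ever seeing the model internals.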

   >> VIDUSHI MARDA:  I think the argument that we can make machines less biased than humans because we can see where the bias comes from is an interesting academic exercise.  You cannot teach a machine how to feel, and you cannot teach the machine what discrimination looks like and what past discrimination is because of systemic inequality.  So I differ a little bit.  To your question about understanding black boxes, I think Cynthia Rudin has done great work there.  On the accreditation system, I don't know that I have seen something specifically like that, but impact assessments are becoming increasingly popular.  I think the problem, however, is that it ends up becoming a game of whack‑a‑mole.  You assume there could be bias on the basis of gender and then you fix that, but given the huge size of these datasets you never know how a system will function and what it will take up as a discriminating factor.

   >> Actually you could.  Because there is significant ‑‑

   >> CATH CORINNE:  I want to be mindful of the number of questions in the room.  I ask you to take it up over coffee. 

   >> I want to counter. 

   >> CATH CORINNE:  I want to make sure that everyone gets heard. 

   >> VIDUSHI MARDA:  I will stop there so other people are heard. 

   >> LEVESQUE MAROUSSIA:  Can I add one more thing?  What you said is often the rebuttal in the fairness, accountability and transparency in machine learning community: humans make mistakes too.  Who makes fewer mistakes, machines or humans, and how can we make machines and humans interact so that together they make fewer?  Usually the mistakes get added on top of each other.  And maybe a call‑out to the room: if anybody knows an implementation of AI that is actually for good, come find me afterwards.  I haven't found one yet; there are always a lot of issues around it.  While we are very critical about the use of AI, I do think it would be interesting to figure out the cases where it is actually used for good.  Maybe it is something like spam detection, or maybe it is more infrastructure related.  I would like to see these examples and explore them, to think about what is good about them.  That's my comment on a force for good: if you have examples, come talk to me. 

   >> CATH CORINNE:  In which case, as I know a lot of people have to go to other panels, we can entertain one more question before we slowly head off. 

   >> So hello.  My name is Mairana.  I come from Brazil, where I am a journalism student.  As a Youth IGF participant I would like to ask you all, as different stakeholders, which world we want to ‑‑ I see a lot of initiatives using AI, especially to discriminate and to reinforce racism over there; we are facing a huge problem with our public security policies.  And I would like to suggest to all the stakeholders on this panel that maybe we should look for different strategies, and the one I would like to suggest to you is called intersectionality, built through the experiences of black women in ‑‑ it is something that has already been used in the legal field but needs to be spread, because when we think about intersectionality we don't focus only on the profile of the person who has been discriminated against; we turn to look at the structure.  And looking into the structure, we can address humanity, and I think that is something that needs to be reinforced.  Intersectionality can help us to see the blind spots, whether you are thinking of companies or of Civil Society demands.  Because as a young black woman in Brazil I am really afraid of the use of AI as a new kind of colonialism.  And that's it. 

   >> VIDUSHI MARDA:  Yeah, I think you are spot on, and you said very eloquently what it took people many years to say: that an intersectional approach is absolutely necessary.  It is hard to do, but I don't think that should stop the conversation from going there.  And I think the fairness, accountability and transparency in machine learning field has, in the last three years, come a long way from seeing the solution as coming up with technical definitions of fairness or transparency, to looking at how these systems interact with different intersections of society.  So thank you for that. 

   >> BERNARD SHEN:  What you said really strikes a chord with me, because Microsoft believes this technology should be for everyone.  Inside the company the buzzword we use is democratizing AI.  This technology shouldn't be confined to the rich and powerful, the biggest institutions.  If we can make the technology available, then any organization with an idea for improving people's lives can make responsible use of data to build a prediction model, without having the machine blindly apply the decisions.  Just as a sidebar, one of the things that Microsoft emphasizes is that humans should be in the loop.  You really need both: the machine doesn't know what's ethical, but humans do.  That's why you test the model: you stress test it with good test data, you look at the results, and people acting in good faith see when minorities and women are being denied loans.  So the institution making those decisions would make changes so that the decisions become more fair, and we want all institutions around the world to have access to the technology so that they can put it to good faith, responsible and beneficial use.  And that could address the concerns that you have.  In terms of good uses, there are so many examples.  One example that I recall, and I want to cite it because it is not a huge institution using it, is an organization called PATH; I just remember reading about it some time ago.  One of their projects was to help address the problem of malaria in Africa.  They did a project in a country in Africa ‑‑ it was too long ago for me to remember all the details ‑‑ where they used machine learning on medical supplies, treatment and patterns of disease to predict where they needed to direct their efforts to be most effective.  I may be wrong about the precise figures because I don't have a perfect memory, but I believe the infection rate in that region or part of the country before they started was 50 percent: one in two people got malaria.  After using machine learning to be more effective in applying their efforts, medicine and treatment, I believe the infection rate went down to 1 or 2 percent ‑‑ one or two people in a hundred.  I would say that for those 48 people it makes a difference; I think they like the fact that they have been helped.  So I would suggest that this is technology that is beneficial.  Absolutely there could be problems, and we need to use laws, ethics, whatever we can come up with, to help us use it responsibly, because it can bring benefit when we do that. 

   >> CATH CORINNE:  On that note, and with that important call from the youth coalition here at the IGF to take an intersectional lens, I hope you will all join me in thanking our speakers, and I hope this will result in many interesting conversations over lunch. 
   (Applause.)

 
