IGF 2018 - Day 1 - Salle IX - WS421 Algorithmic Transparency and the Right to Explanation

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  Can everybody hear me?  Welcome to the session on algorithmic transparency and the right to explanation.

We had planned a breakout groups session.  I don't know how the audience feels about breakout groups, but I'm thinking rather than that we could do a session to get input from the audience.  Could we have a vote on that?  Should we break out, or are we just going to spend half of the session getting input from the audience?  So, break out?  Hands up.

>> Explanation for that.

>> MODERATOR:  You break out into groups, you discuss, and you come back.  The second option is we just get more audience participation in the event.  Who's voting for breakout?  And for audience participation?  Okay.  So I think it's a consensus.  We're looking forward to what you have to say.

AI is a 62-year-old phenomenon.  It's big now in the hype cycle and everybody's looking at it.  The reason is probably that the ecosystem, with the Internet of Things and big data, is there and it's converging in a way that makes money.  AI is also very readily accessible for anyone to use.  You can use machine learning on Amazon Web Services, on Google Cloud, or locally on your computer.  And the conceptualization of artificial intelligence is, I think, quite important to unpack, because we see artificial intelligence in terms of stereotypes, in terms of movies, and we often see the extreme version of artificial intelligence, which is machines that can supposedly think like humans.

A lot of what is referred to as artificial intelligence now is statistics and machine learning, not machines thinking independently.  I think science fiction has given us problems with conceptualization.  So what I want to gather from the audience is: how do we unpack AI, and how do we make AI understandable and systems explainable?

So the GDPR dictates that having your personal information subjected to automated decision making without consent is not allowed.  So there has to be a human there and you have to be able to explain it.  And the GDPR talks about a right to explanation.  It doesn't say it in exactly those terms, but a right to explanation would mean that we can understand why a system has made a decision.  And there's a term, algorithmic transparency, which is about understanding why algorithms have made a decision and making them transparent.  So we're going to discuss whether that's possible today.

My name is Alex Comninos.  We have Aman Bela, specializing in human rights and artificial intelligence.  Karen Reilly worked on that for quite a while, and she now works with startups, managing people, code, and projects.  Then we have Deborah Brown.

>> Good morning.  Thank you for being here.  I'm going to quickly do an introduction on the concept of algorithmic bias, which is obviously linked to algorithmic systems.

So algorithmic systems are computational and predictive.  They're under the umbrella which is AI.  That means automated decision-making processes used by computer programs to identify patterns that can then be useful for decision making.  So machine learning systems process large amounts of historical training data and can then be used in decision-making processes.  But if the historical training data is incomplete, or if it's not representative of a certain specified population, then sometimes biases can spread quickly and inexplicably across different AI systems, which can then further influence outcomes in people's lives.  So you will have a reproduction of culturally ingrained biases.
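To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the session): a model is trained on synthetic "historical decisions" that already encode a group bias, and the learned weights simply reproduce that bias as if it were signal.  The feature names, numbers, and data are all invented for illustration.

```python
# Hypothetical illustration: a model trained on biased historical
# decisions reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 0: a proxy for group membership (e.g. a postcode indicator).
# Feature 1: an attribute that is genuinely relevant to the decision.
group = rng.integers(0, 2, n)
merit = rng.normal(0, 1, n)

# Historical decisions: partly based on merit, partly on group --
# i.e. the training labels already encode discrimination.
past_decision = ((merit + 1.5 * group + rng.normal(0, 0.5, n)) > 1).astype(int)

X = np.column_stack([group, merit])
model = LogisticRegression().fit(X, past_decision)

# The learned model assigns a large weight to the group proxy:
# the bias in the historical data is now treated as if it were signal.
print("weight on group proxy:", model.coef_[0][0])
print("weight on relevant attribute:", model.coef_[0][1])
```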

What I really want to leave you guys with is the fact that algorithmic systems are intrinsically situated where they are deployed and developed.  So that means, first of all, that they're not magical.  They don't predict the future.  They're just taking the old and telling us something about the present, and that's where the provisions of the GDPR come into play.  They have to be understood as safeguards for human rights.  The human rights we have here are equality and non-discrimination, the right to information, et cetera.  So as Alex said, you find several articles: that would first be Article 15, which is the right of access, and Article 22, which sets out a general prohibition of solely automated decision-making processes except under certain exceptions.  And those exceptions are what is of interest to us today.

So this was my quick introduction.  If you have any questions, don't hesitate.  I can go deeper into why we have issues with historical training data systems and what it means in technical terms.  I will leave the floor to my other colleague.

>> SPEAKER:  So among the things that I've done in technology, I've managed bare‑metal technology.

At the core of my anxiety on this subject is the issue of AI and its application as a social problem, but I am not here just to talk about the divide between developers and sociologists.  That is an issue where technologists in general don't respect the social sciences.  They believe in moving fast and breaking things and thinking about the consequences later.  A lot of these teams are not diverse.  They don't represent the people they are having an impact on, and that is a problem with all technologies.  When it comes to algorithmic transparency from the technology planning perspective, the things that keep me up at night are technologies that are not planned.  What do we want to see in the world as a result of this code?  Where are these data stored?  Are they documented?  If you move fast and break things and just experiment, and just run machine learning over data sets, if you're surprised at what happens at the end of it and you don't document how you got there, then how can you explain it in the end?  And so a lot of the solutions to issues of transparency, of application of the GDPR, are really boring things like end-user needs assessments, documenting your code, knowing where things are in the infrastructure.  And I've seen things you wouldn't believe: 10-year-old servers with critical data that it is hard to migrate things off; getting somebody, a mechanical worker, to repair a fan because they don't manufacture them anymore.  And then you put this massive human rights problem in the middle of that and, um, I weep for what can become of this.  And so documenting your code, knowing where things are -- those are things that I don't hear in this discussion.  I would be happy to expand on this and tell more horror stories, but again, a boring subject.

>> SPEAKER:  Apologies for the delay.  I had to run from the other session.  Yes.  Since I wasn't here before ‑‑ what was the question?

[Laughter]

Introduce ‑‑

>> MODERATOR:  Introduce yourself and an area you think is problematic.  If anything, you are very limited to two minutes, but take some more if you need.

>> SPEAKER:  Good day, everyone.  My name is Lorena Jaume-Palasí.  When I look at these technologies, I look at the human side of the technology.  Right now, when we talk about algorithms and automation processes, we are very much concentrated on the mathematical relevance of the system.  We look at statistical bias and mathematical problems and formulation.  And that's the first step, in my opinion.  Because all those mathematically relevant problems and biases that we know from many, many years of research doing statistics on all this stuff, they become relative when it comes to how humans interact with the technology.

So just an example: you have probably heard about COMPAS.  It was this software that was used by many judges in specific states in the U.S. to make decisions about whether people should be granted parole or not.  So that software would create a risk calculation about those citizens wanting to be granted parole, by looking at the risk of them recidivating, performing some type of criminal activity afterwards.  It was problematized by ProPublica.  And then this discussion started, and it was about the mathematical and statistical bias in the software.  And this is an important conversation.  It's really important.  And it's a first step, in my opinion.

And the second step is: how are judges using that software, and under which circumstances are judges making a different decision from what is being suggested by the algorithm?  Because in the very end, it boils down to the fact that we are dealing with software that is not deciding, but is assisting judges in making a decision, and to understanding how this technology -- which is, by the way, human created -- is interacting with human beings, to what extent there is a path dependency and high authority being granted to the software, and under which circumstances quite the opposite is the case.  It's important to understand which factors of the technology out there manipulate people, and under which other factors people use this technology in both legitimate and illegitimate ways.  By the way, the interesting thing about this COMPAS software is that there is a lot of research done by a female scientist at Princeton that was ignored by ProPublica.  And she analyzed under which circumstances judges would not follow the suggestions made by the software.

In many cases, the judges wouldn't trust the software when it didn't fit with their own stereotypes.  So sometimes it is also interesting to understand that technology is a sort of excuse we use to explain things to ourselves and to hide our own biases.  So this is some of what I look at.

>> MODERATOR:  Since we're not going with the original plan to do breakout sessions, I want to pause here to see if there are any questions or interventions from the floor.  We can do a first round of questions now and then come back to the panelists for another round.  I see one, two, three.  I see three questions so far.  So if we can go from the front to the back with the questions and please introduce yourselves first.

>> Hello, everyone.  My name is Manuela.  What I would like to ask is about the COMPAS software, because I think the decision about this software went to a high court in the U.S., and the conflict was about intellectual property, because they couldn't reveal the black box because of this.  What do you think would solve this problem?  You have a private enterprise doing something that is public, and you don't know what you're being judged on, even if it is only a recommendation.  Thank you.

>> SPEAKER:  So my question is really related.  As a social scientist, I see we make assumptions -- some social scientists make the assumption that it makes people more polarized or less polarized.  The question is: if we were to inspect the relationship between the makers of the technology, the technology itself, and people's use of it, how can we tie all the relationships together?  Thank you.

>> MODERATOR:  Thank you, and we'll take one more before going to the panel.  The gentleman in the fourth row.  I'm sorry, I can't see very well.  Yes, go ahead.

>> I am currently studying technology in the Netherlands.  A very serious question about a matter of legitimacy in decision-making processes: usually, I think the idea is that providing information helps to build this legitimacy, so a court can rely on that.  But at the same time, we have algorithms that work through a machine learning process where, after some time, even the developer doesn't understand anymore how the machine reaches the solution.  So how can the information that should be provided make it clear enough for people to understand what is being provided?  I think there's a big challenge in the legitimacy process.

>> MODERATOR:  Thank you.  I don't think any question was directed toward any specific panelist.  One went around intellectual property, one around the relationships between actors in the system, and a third around legitimacy in decision-making processes and ensuring trust and transparency there.

>> SPEAKER:  So, the question about the cycle between the sociology and the technology.  I was in a train reading a German business magazine, and there was an executive from a big company saying, oh, AI for hiring is going to be great because there's no bias.  And no.  Recently, Amazon scrapped their hiring AI because it showed a preference for white guys.  If your company is full of white, cisgender, heterosexual men and you put machine learning on top of that and say, okay, of the people that we hire, look at their resumes -- the people that we hire, that's the definition of success -- then you're going to get a machine that prefers people who are already privileged.  And then those people are going to make the technology that goes into other systems for hiring, and then you're going to have a system built for discrimination.  That's already a problem in tech, where as a tech manager I used to say I'm a chief feelings officer, because I was doing very highly technical work, but the difference between responding successfully to a distributed denial of service attack, or making sure the hard drives got replaced in the data center, was asking people: have you had coffee this morning?  How's your blood sugar?  Do you feel valued in your work?  I used to talk to other tech managers and they would say: what?  Feelings?  No, I work in computers.  So you already have this bias among tech makers.  They don't want to deal with social stuff, and in many cases there's an element of programmer supremacy, where they believe that programming is the highest endeavor humans can achieve and all these humanities people were slacking off and couldn't code.  Lots of humanities people actually do code, and humans are a lot more complicated.  You want to talk about buggy firmware and software and legacy code -- computers are easy.  Humans are hard.  So AI is not the solution yet.  It can address a lot of issues, but unless we have a diverse, multi-disciplinary project to ask ourselves what we actually want to build, what kind of teams we want to see before we apply this, it's just going to be new technology reinforcing age-old systems of oppression.

>> SPEAKER:  COMPAS.  Yes.  It was a question of company secrets, trade secrets, and this is one of the things we have seen in many other cases.  There is an even more relevant one: when in a crime case specific laboratories are asked to do a DNA test, to check whether a specific person was at the scene of the crime and so on.  What they did was run specific software; they had bought this software.  There has been a case that shows how complex these systems are, where the software was buggy but the algorithm was fine.  So at the beginning, everyone was checking the algorithm and everything was fine.  The mathematical formula behind that calculation was explained and clear and was the correct formula used by everyone.  And it took many, many years of fighting by the scientific community to bring the company selling this type of software to make clear that the code is not the algorithm: there is a distinction between the mathematical formula and the translation of that mathematical formula into code; that is a separate step.  There are two profiles behind those two different things: coding and creating algorithms.
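As a purely hypothetical illustration of that distinction (not the forensic software discussed here), the "algorithm" below is a simple, correct formula, while one translation of it into code contains a classic bug that an audit of the formula alone would never reveal.  The function names and numbers are invented.

```python
# Hypothetical illustration: the mathematical formula ("the algorithm")
# is correct, but its translation into code is buggy.

def match_fraction_correct(matching_markers, total_markers):
    # The intended formula: the fraction of markers that match.
    return matching_markers / total_markers

def match_fraction_buggy(matching_markers, total_markers):
    # Same formula on paper, but the code uses integer division,
    # silently truncating the result -- a translation bug, not a
    # flaw in the formula itself.
    return matching_markers // total_markers

print(match_fraction_correct(9, 10))  # 0.9
print(match_fraction_buggy(9, 10))    # 0 -- checking only the formula would never reveal this
```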

The problem that we have there, when we say it's a trade secret, is precisely that, and it is very problematic and very crucial.  When we talk about evaluating these types of systems, we need some sort of oversight that is able to work on the mathematical formula, but also on the code, and to understand both -- the same way Coca-Cola is subject to oversight even though the formula is not made public.  And this is a trend that is starting to break, because we see more and more governments talking about these types of issues, not mainly because they want to foster more openness, but in many, many cases because they are themselves applying these types of methodologies and technologies within their own services.  And this is becoming a challenge for many of them.  Overall, I would be cautious about thinking that just by scrutinizing the algorithm and software you would be fine.  This type of technology is very complex.  You are going to need a more holistic approach to understand where the problems are.  It does not start with the code and end with the data.  Sometimes you're going to need to look at management processes, at how the data is input and how the output has been interpreted, to understand where the problems are.  As for evaluating software, we have been doing it.  We had a campaign where we tried to make a partial audit of the biggest credit scoring company in Germany.  And one of the things that came out very quickly is that the data model might be right, but the problem is that specific banks are inputting the data very wrongly -- not because of faulty data quality, but just because the process that specifies how to transmit information from one point to the other is badly defined.

Just an example with that credit scoring company.  In the 2000s, because of the Internet, a new profile of human being emerged, and that profile was called the smart user.  They would compare things.  Before, it was very complicated, because without the Internet they would have to go to different places to compare prices.  And when they were asking for credit, they would go to different banks and ask for different conditions.  But this type of profile wasn't conceived of.  What usually happened in the banks was that they would ask the scoring company for the score of the person, just to understand what type of conditions they could give to this person asking for credit, and then they would decide whether to offer a specific credit or not.  And technically, what this meant in the 2000s is that for the credit scoring company each inquiry just became one more request: a person asking for credit.  They didn't differentiate between a request for credit, credit considered, and credit given.

At the very end, the smart users would go to two or three different banks, and at the end of the day their score would be very far down, just because there was no specification of that part of the process.

So you see, that's an example of how the impact of a badly defined process is amplified by technology.  The problem is not technical, and once identified it can be solved very quickly.
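A minimal, hypothetical sketch of the process flaw being described, with invented numbers rather than any real scoring model: if the pipeline counts every rate-shopping inquiry as a separate negative event instead of grouping inquiries for the same loan, the comparison shopper is penalized even though the data itself is accurate.

```python
# Hypothetical illustration of the process flaw described above:
# every bank inquiry is logged as a separate negative event, so a
# "smart user" who shops around at three banks for one loan looks
# riskier than someone who asks only once. Numbers are invented.

BASE_SCORE = 700
PENALTY_PER_INQUIRY = 30

def score_without_dedup(inquiries):
    # Badly specified process: counts every inquiry, even for the same loan.
    return BASE_SCORE - PENALTY_PER_INQUIRY * len(inquiries)

def score_with_dedup(inquiries):
    # Better-specified process: inquiries for the same loan purpose
    # are counted once, regardless of how many banks were asked.
    distinct_loans = {purpose for purpose, _bank in inquiries}
    return BASE_SCORE - PENALTY_PER_INQUIRY * len(distinct_loans)

shopping_around = [("car loan", "Bank A"), ("car loan", "Bank B"), ("car loan", "Bank C")]
print(score_without_dedup(shopping_around))  # 610 -- punished for comparing offers
print(score_with_dedup(shopping_around))     # 670 -- one loan, one inquiry
```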

>> MODERATOR:  Thank you very much for the example and for going into the case.  We want to flesh out cases, in the countries and sectors where people live and work, where the explainability of AI or automated systems -- the explainability of decisions made by computers -- matters.  Where does it affect you, and what are the challenges and opportunities?  Has somebody put their hand up?  Yes.  You can introduce yourself.  Okay.

>> AUDIENCE:  Hello.  One thing I would like to hear from you is how you can use, for instance, artificial intelligence to improve human rights.  I work on research -- there are a bunch of companies developing software to fight against human trafficking using AI.  And that is great research, and I think we need to talk about this stuff too.  Because if we don't, AI becomes like a bad myth of robots.  But I think it can do a lot of good.  I would like to hear what you think, and from the audience as well.

>> SPEAKER:  I could respond.  In Africa, there was a debate about leapfrogging.  So mobile phones were leapfrogging certain stages of development, and people would get online and suddenly access information, and markets changed.  You could know what to bring to a market.  There was a lot of hype, and now we're seeing the negatives of mobile phones.  AI has been around for many years.  You would have had to be at a university and book computing time, like 60 years ago.  Now you can run AI on an edge device, relatively cheaply or for free, with Google's open source TensorFlow and others, and you can run Amazon Web Services in so many places.  We have access to computing, and I would like to see how that computing power turns out in the developing world.  Perhaps I'm on it.

>> SPEAKER:  To go back to the questions that were asked before, as Lorena said, there are things we can use to get around the IP issue -- the fact that algorithms cannot be opened to the eyes of the public.  You have several ways to deal with that.  We have the possibility to audit algorithms, even though it is not a sufficient solution; it's a first step.  And then we also have the possibility to enhance the demand for transparency when systems are used by public actors.  And that demand has been heard in the last couple of years, because you have governments that have issued reports on the algorithmic systems they use.  They have issued reports on which fields they are implemented in and what kind of impacts they have on citizens.  So they're really trying to be transparent about this use of algorithms.  So you have auditing and transparency demands that can be enhanced to make sure there are no negative side effects.  Like we said, it can be used for good, but we need to make sure that it doesn't structurally automate biases and discriminatory outcomes.

>> SPEAKER:  We have a question in the back.

>> AUDIENCE:  I'm also working on IEEE algorithm standards.

One of the issues that I'm looking at is the investigation of China's export of technology for diversification of training data, especially in Zimbabwe.  I wanted to find out what your view is on going to Africa to harvest data for diversifying training data for algorithms, or is the answer for algorithms to train on synthetic data, because that's another new thinking at the moment.  And also, what's your view on Google Dragonfly?  Some people would think it's a good thing for Google to go back to China in retaliation for what China is doing.  They're harvesting data all over Africa, from the Global South, but why is it good for them to get all this data?  Thank you.

>> MODERATOR:  There were a few other questions in the back, and one in the middle and on the side.  If we can do another round and get the questions at once.  In the back row, please.

>> AUDIENCE:  I have two questions.  There's a convincing argument that our alternative is human decision making, which has precisely those same kinds of biases, because we're perpetuating them.  I'm from the U.S., and obviously we see this in the context of bail and parole, where we have a real split in the criminal justice community, because some people think that we're going to get better outcomes in racial terms out of these algorithms than we would from the biased judge sitting down in Alabama.  I would love to know what your answer is to that question.  And the second one is, picking up on this idea of being holistic in evaluating AI: I would love to hear any good examples you may have of how this is done on the back end.  Once you have the system functioning, what are the opportunities for actually analyzing who it affects and what the outcomes are, and whether that gives us a way to go back into it, rather than auditing code and all those kinds of things.

>> MODERATOR:  Please introduce yourselves.

>> AUDIENCE:  Hi, I'm from the LDI team from England.  Obviously there are already quite a lot of problems regarding transparency with algorithms.  My question is: how is that further complicated when we are dealing with this borderless Internet but regionalized development of algorithms, and moreover, are there any potential avenues you see to help provide solutions to this?  I know somebody mentioned data storage, and I would really love to hear the panel's thoughts on that.

>> MODERATOR:  Person in the third row here in the front.

>> AUDIENCE:  I'm Nicholas from a university in Germany.  I have a technical background -- data privacy and its applications to AI privacy.  I was wondering about different techniques.  AI is not only one algorithm: you can have neural networks and decision trees, and a decision tree is way easier to explain.  What are we tackling when we say we have to explain the algorithms?  Are we tackling the complexity and explaining an algorithmic model, or explaining the code itself, or what it does?  Thank you.

>> SPEAKER:  Could I answer the last question?  Okay.  So I think it was John Searle in 1980 who talked about strong AI and weak AI.  We also have this idea of artificial general intelligence.  But yeah, a lot of AI is simple statistics and refining statistics.  I don't like the term algorithmic transparency.  It denotes some magic masked man or woman at a console.  I'm not good at math; it scares me.  It can't just be the algorithm.  I talk about systems transparency, because what you have is a computer system sitting on a usually complex stack, and you have people and systems around it.  So yeah, I think good documentation has been mentioned.  I think unpacking is important, and I don't think it all comes back to the algorithms.  If we just focus on the algorithms, someone can say, well, here's TensorFlow, those are the algorithms, there's your explanation -- whereas there's a whole stack that relies on usually closed source.  I am a bit of a hippie and I would like it to mean that everything has to be open source.  I don't think that's going to happen, and if it did, it would have to be quality open source.  Some type of middle ground.

>> MODERATOR:  I would like to jump in on that.  You're right, we need to look at this with a more holistic approach.  When we talk about explainable machine learning, we have two different approaches.  When it comes to explaining what a model is doing, right now we have a lot of science doing research on how to create explainable machine learning.  I don't like that, because what we are creating with it is a plausibilization model of the first model that we're trying to explain.  So that is not even an explanation of the first model, because if it were, you could exchange the models; you could just swap one model for the other.  So it is not really an explanation, but we call it explainable AI and we feel better at the end of the day.  And it's, of course, a cheaper way to go with machine learning.

But there is a very old-fashioned way of doing machine learning, which is interpretable machine learning.  What does that mean?  Engineers need to put specific features into the model.  It's more complicated.  It is, from that perspective, more expensive, because you have to do more thinking.  It's not about playing bingo with input and output, trying to understand what you need to change in the input to make a specific output happen.  And here comes the example you were asking for on the back end.  This is precisely what happened in New York.  New York decided they wanted to reform their electricity distribution system.  They asked a company to do that for them, and they started using explainable machine learning, and they could only improve the system by just 1%.  It was a very expensive thing and it only improved the system by 1%.  Pretty bad, right?  Why?  Because they didn't know exactly what the input was really doing and what type of input they needed to change to make a specific output better.  So, because they were feeling very bad about all this money paid by the taxpayers, they decided to go back to the roots and try to make an interpretable machine learning model.  This is what they did.  They put a lot of the effort and money they had into engineers building specific features, so that they were very conscious about what types of inputs they had and needed, and understood exactly what type of input they needed to change, and they optimized the system by up to 60% more.  And this is a good example of how you can do things.

Now, I wouldn't say that everyone should now use interpretable machine learning in their technologies.  It costs money, and not everyone has that money.  When it's not high stakes, I don't see a reason why you cannot just research and try and experiment.  When it comes to high stakes, and technology that is being applied in the public sector and has an immediate impact on society, you need a system that you understand better and can scrutinize.  There is this situation where many companies that are selling services to other companies perhaps have the will, but do not have the resources, to use interpretable machine learning.  And this is a problem, an economic problem.  We need to be aware of that, and I would not say let's not use explainable machine learning at all; but right now we need to decide, in my opinion, what the high-stakes areas in society are, and in those sectors we need a better understanding and accountability of the systems.

There are sectors where it is fine to have competing models with different ideas, where we might leave them to develop new technologies, to develop original ideas, and if some problems arise, they're usually easy to solve or to identify because they're not high stakes.  It's about the toaster, or things that are fun to experiment with.
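To make the distinction concrete, here is a minimal, hypothetical sketch on synthetic data (not the New York project itself): in the first approach a black-box model is "explained" after the fact by fitting a simpler surrogate to its predictions, which is the plausibilization model described above; in the second, the effort goes into hand-crafted features so that a transparent model is itself the decision model.  All names and data are invented.

```python
# Hypothetical sketch of the two approaches discussed above, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)

# Approach 1: "explainable AI" as post-hoc explanation.
# Train a black-box model, then fit a simple surrogate to *its predictions*.
# The surrogate is a plausibilization of the black box, not the black box itself.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with the black box on {fidelity:.0%} of inputs")

# Approach 2: interpretable machine learning.
# Spend the effort on meaningful features so a transparent model makes the decision.
features = np.column_stack([X[:, 0], X[:, 1] ** 2])   # hand-crafted features
interpretable = LogisticRegression().fit(features, y)
print("coefficients you can actually reason about:", interpretable.coef_[0])
```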

>> MODERATOR:  Thank you.  We're going back and forth trying to get --

>> SPEAKER:  The remote speaker.  It's a funny thing about systems: WebEx works and the IGF one doesn't.  Put up the link.  Yeah, I'll explain how this worked out.  Joy, can you hear us?  Okay.  From the audience, until we get Joy: I wanted examples from countries or sectors, et cetera.

>> SPEAKER:  I am Irish living in France.  I work for pixyo in the area of personal data.  It's an open question to the panel.

Somebody famous in financial services said: you show me the incentives and I will show you the outcome.  We can't say the world is in great shape at the moment.  So doing more of the same, faster and cheaper -- is it really what we need?  Left to its own devices, the private sector does that; it's short-term profit maximization that is driving that forward.  Perhaps we should be rehabilitating the role of the public sector, trying to go further than just short-term objectives.

>> SPEAKER:  I think it goes back to the question I had before.  It is more than a trend; it has been there for maybe 10 to 15 years.  I can count two different parallel dynamics that are going on: the one you described, and the other, which is trying to apply the different techniques under the AI umbrella towards social goals.  So there are two different parallel dynamics, I would say.

>> SPEAKER:  Sorry, we had some feedback from the mic.  I think there are unanswered questions.  If the panelists want to answer any questions, please do so now.

>> SPEAKER:  Humans will discriminate against people.  We have seen this in the tech sector.  Every graduate from Howard University in the States, from the computer science program -- I urge you to look at their website -- they're doing amazing research.  The faculty are amazing.

The fact that Silicon Valley has very few black people working there is a disgrace, and that's not a problem of AI.  That's because people are discriminating in hiring.  It's not just overt racism, it's unconscious bias.  And so some good applications of AI could be maybe going through reviews and looking for words like abrasive and aggressive applied to people who aren't white dudes, and who aren't white women as well.  There's a great saying in disability activism: nothing about us without us.  And that has to be the thing going forward.  I should not be speaking on this topic, you know.

In terms of the justice system, you know, there are prison abolitionists; there are activists who are doing amazing work already.  If you're a white prosecutor, maybe discussions with those activists will make you feel uncomfortable.  Maybe talking about white supremacy will, and some people will need to have those discussions, whether you're talking about machine learning or the way systemic racism works.  We can see trends and maybe call for accountability: look, you used this word abrasive or aggressive, and you only use that for people who don't look like you.  That is a trend.  Look, there's empirical data.  But the solution to that is not going to be online.  When it comes to gender discrimination in tech, Google employees walked out over their handling of harassment and horrible, horrible things, including gender-based violence.  No AI can do something like that.

Incentives, I would hope we find incentives, but part of this is going to have to be punitive.  Tech workers organizing.  If you want to solve some of these problems, you will have to talk to activists who make you uncomfortable.  People who don't look like me basically.

>> SPEAKER:  ‑‑ of algorithms are important.  How in institutions that we know, um ‑‑

>> (low voice).

>> SPEAKER:  Okay.  So, the point is that in many parts of the world there are institutions in society that are transparent.  Should we make the standard for algorithms the same, and can we justify higher transparency for algorithmic tools than we do for humans?  And she agrees with me that transparency offers little insight to most humans, and explaining AI should maybe take the form of human practical reasoning, and a lot of our understanding of AI ethics comes from thinking computers can one day emulate the brain.  So yeah, that was a researcher from the University of Otago.  I left out some questions and I will share them on Twitter afterwards.  We have five minutes left for the panel.  Do we have somebody from the audience?  Yes.

>> AUDIENCE:  Sorry.  There.  I am Maria, from a Latin American civil society organization.  I think we have heard a lot here in the conversation from the different panelists about the idea that we definitely need more transparency in some form, as has been described, either in the algorithm or in the process as a whole.  So my question to the panel, regarding the issues that are really the most relevant right now in Latin America, is about how we find a way to move this conversation into the mainstream, to make it more political, for society at large.  As was said in the last intervention, we have procedures to assess the transparency of human processes, and for that we have the democratic processes of society in general.  How can we move that type of conversation into this field of algorithmic or automated decision making?  How do we insert the concept of the multi-stakeholder approach, the concept of democratic participation, into building the model of what we want?  What are your ideas, from your different fields of experience, for providing a more methodological approach, to move towards at least some procedural solution to this issue?  We will be launching a report that we do about Latin American rights every year -- a short document, a short paper -- that is building this idea of how we start to have this conversation in Latin America, because this is urgent, given the massive use by governments implementing different types of artificial intelligence or automated decision-making technology in the provision of services, with the purpose of achieving more inclusion.  But we wonder how this also should be a political conversation.  That's my question.

>> SPEAKER:  I'm going to answer your questions.  In my sense, I think the first step needs to be a common literacy effort on machine learning systems.  We all need to be aware of what AI is, and then we need to deconstruct the myths that we have touched upon during the panel.  So whenever you read a newspaper article, for instance, with a title that goes "this algorithm showed bias against women" -- well, it's partly true, but algorithms don't have agency and don't have an agenda.  It's the people that develop them, deploy them, and apply them that have an agenda.  So I'm not sure the public knows this at all, and they are being nourished with certain perspectives on AI, and fears and concerns and worries around systems that aren't clear, that are opaque.

I think the first step needs to be about enhancing education and literacy about those systems, so that we all understand and can really discuss them within a multi-stakeholder and cross-disciplinary setting.

>> SPEAKER:  Joy, can we hear you?  Okay.  Okay.  Five minutes.  Yes.

>> AUDIENCE:  I'm sorry, I missed the train.  It is nice to participate in this conversation a little bit.  The idea of thinking about AI as something we might consider opaque -- Karen, I think in what you said earlier you discussed the bias of the human mind.  I think with AI, what is likely to happen, we see it with scale: what we're all using around the world, in a way, is really technology developed in the U.S.  We have Facebook and WhatsApp, which came up only in the U.S., but Mark Zuckerberg owned the answer about why it took place.  I think it is important to think about AI and to think about what it means concretely for the lives of people.

So for me, I am curious what you think.  I was thinking transparency would have to be a question, bearing in mind what we will do with that information and who is going to hold the companies accountable.  You might disclose information about AI, but people don't understand the technicality of it.  One might break it down, and some people are illiterate.

What are the different stages we can come up with in the context of transparency?  Is it transparency plus what?

>> SPEAKER:  Transparency is not an end in itself.  It is not that.  The Saudi government is transparent and very democratic about censorship: you can join in and say what should be censored next.  It is more than that.  I think that we're at the beginning of a conversation.  We started with transparency, and right now we're trying to identify what we mean by transparency, for whom and for what purposes; we are just in the middle of that.

On the other side, we're much further along in the conversation when it comes to understanding what we want, what we consider fair and what we consider unfair.  I think, just to come back to your point, the conversation has already started.  And I don't think that you need to understand technology to understand what is fair to you and what is not fair to you, and it should be all about that.  We go to the supermarket and we don't know; we don't need to know about biochemistry.  We don't need to know about engineering to board a plane.  So I think there is a common ground it is morally fair to demand that people understand, and there are things that go beyond that.  I don't think that citizens can audit algorithms.  They will never do it, in the same way they cannot audit a yogurt; they cannot audit a car.  And that's fine, because that's fair.  It is way too complicated, and our lives, your everyday lives, are way too complicated.  We have institutions for that.  We should try to transfer that to all the new technologies.  They're in every sector, and there's no single sector in society that is not going to be permeated by those technologies.  We already have oversight in every sector.  So the question is: are they?  Do they have the instruments to scrutinize the new technology being applied in their environment?  Are they able?  Do they need to reform the law?  Do they need more people?  Do they have specific new profiles in their institutions to evaluate from that perspective?

We should not forget, on the other side, that this is a technology that demands the most interdisciplinary oversight ever, because this technology is applied in specific sectors, and to understand and scrutinize that technology you not only need the implementation and the institution, but you also need the person who is able to make sense of the output being given by the algorithm.  It can be a psychologist, because this is a tool being used to make some sort of risk calculation.  It can be a marketer that is using a specific technology as a marketing professional in his company.  Or it can be a human resources psychologist that is using specific HR software.  That is the person who will be able to tell you whether the outcome makes sense or not -- not the person in controlling, and not the scientist who has been part of the programming but is not able to understand the context.  So the conversation is already there.  We don't need to understand the technical things, but what we need to discuss is what the values are in the light of these new technologies, which are bringing problems that we had before but that are being amplified by the technology.

One of the things we need to talk about more and more -- and here I learn more from the South -- is the public good.  This technology is very collective.  It's a technology that sort of tries to manage collectives.  Western societies are individualistic; the laws of Western societies are very individualistic.  When we try to scrutinize this technology, we do it from a very individualistic perspective.  We talk about human rights, and those are individual rights.  But the harm and impact of this technology is caused at a collective level.

>> SPEAKER:  Lorena, thank you; we're at the end of the session.  She pointed out that AI is in literally everything we use.  We have to build a workforce around AI, build capacity around AI, and build our people.  I hope that you, the people in this session, are a part of that initiative.  Thank you to our other panelists.  We had many people; some had travel problems.  So yeah, we need to build a workforce and build capacity in AI.  Go to the AI sessions and go to all the AI events, also on Thursday.  Everybody's input is valued in a world of ubiquitous AI.  Thank you.