IGF 2019 – Day 3 – Raum II – WS #179 Human-centered Design and Open Data: how to improve AI

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  Hi.  We're going to start in a few minutes.  We're just sorting out the technical settings for remote participation.

Okay.  So, hi, everyone.  Thanks for coming and joining us in this panel.  I'm really happy to be here sharing our knowledge and our vision on machine learning and artificial intelligence, especially as they relate to open data and to digital design.

So, I will give you a very short, very brief introduction, and then we will give the floor to our distinguished guests here, and I hope you will also join us afterwards with your questions.

Machine learning is leading a real data revolution.  Everybody is talking about artificial intelligence: governments are talking about it, big IT companies are talking about it, my mom is talking about it.  Perhaps in the future my pets will also talk about artificial intelligence.  It is the hype topic of the day.

Of course, data is what feeds machine learning algorithms, and the algorithms are becoming more powerful these days.  However, it's important to highlight that data is not equally available and distributed to everybody.  Data may be a barrier to entry that prevents the global south from participating in this new economy.

What we are seeing is a concentration of data ownership.  We argue that data is being extracted from the global south while access is being monopolized in the north, which entrenches the global south's role as a consumer, and not a producer, of knowledge.

In this workshop we discuss how open data principles and web technologies could help overcome some of the consequences of this data concentration and increase data quality.

We also discuss how important it is to bring a humanistic approach to artificial intelligence, so in this workshop we hope to talk about open data, web technologies, and design for artificial intelligence.

So, let's move to the floor.  I would like to invite Diogo Cortiz to take the floor.  Diogo Cortiz is a computer scientist and an expert in web design, a researcher at the web technologies study center in Brazil, and he also teaches at a university in São Paulo.

>> DIOGO CORTIZ DA SILVA:  Thank you.  Can you put up the presentation?  All right.  I'm Diogo Cortiz, I work at the web technology center and I am also a professor at a university in São Paulo.

What I'm going to try to do is reply to the two policy questions we described for this workshop, focusing more on design.  First of all, we see that artificial intelligence is touching every aspect of our lives, and we are starting to see a lot of discussion regarding transparency and regarding explainability.  For many years, artificial intelligence was a subject confined to computer science departments or circles of mathematicians, and now we need more than this.

We do not just need to build a model and put it into the world.  We need to ensure that the model is okay and that it is acceptable to society.  So, the first thing we need in an AI project is a diverse, interdisciplinary team.  We do not just need computer scientists, engineers, and mathematicians.  We also need designers; that's an important concern we bring to this workshop.  We also need people from the social sciences, anthropology, and many other areas to make sure that our models work well.

And to start, I want to claim that design is much more than an interface.  Design is important during the whole process of an artificial intelligence project, and we need to start breaking artificial intelligence down into parts.  There is no such thing as artificial intelligence in general.  What we have today are different models designed to do specific things.  We can have a speech recognition model that can recognize someone saying "apple."  We can have a different model for image classification that can recognize photos of apples.  We can also have a natural language processing model for autocomplete that can write the word "apple."  But those are completely different models, using different techniques.

So, there is no such thing as artificial intelligence; we have narrow models that solve specific problems.  And basically all of those models are trained on data, so we need to pay attention to the data as well.

And when we are talking about the user, we have two main divisions here.  One is the interface: you have the user, and the user interacts with the interface to provide input and to get output.  And behind this interface we actually use a machine learning model, which could be speech recognition and so on.

And the model is based on data, but we also need metrics.  So in this workshop I will briefly try to discuss how design can be applied, or at least needs to be applied, at those two moments: in the interface and also in the model.

So, just to start, this is an article published in Science, the scientific magazine.  It's a study showing that an AI system used in United States hospitals was discriminating against black people when they needed to receive special treatment.

That was occurring because the design of this AI system was based entirely on financial spending.  And if you look at the historical data, black people have had problems accessing the health system, so it was a design strategy that was not good.  A better design strategy, and I think the article's authors would agree, is that we don't need financial data here; we just need to know the health status of each patient.  It does not matter whether the patient is white or black.  So, that's one problem that's occurring.

So, the first area where we need to take action is data design.  We need to pay attention ‑‑

>> MODERATOR:  Two minutes.

>> DIOGO CORTIZ DA SILVA:  Okay.  We need to pay attention to the data, because data is what feeds the AI, the machine learning techniques.  I will briefly show an example that I give my students.  It's a famous dataset used in the machine learning community to test and verify models: the Titanic dataset.  You have here a list of data about the travelers, and who survived and who died in the accident.  It's an old dataset, but it's very interesting because it's very useful for training machine learning algorithms and doing benchmarks.

So, you have here many of what we call features in machine learning, and they are very valuable.  You have the name, the sex, the age, the fare, the class; you have a lot of information.

So, for example, just to illustrate the design part, I took a simple model, a decision tree, one of the simplest methods in machine learning, and I trained it using three variables: class, age, and fare.  I'm sorry the slide is in Portuguese.

The behavior of the system is one thing.  And when I decided to add a different variable, for example sex, the system behaved completely differently: you can see here that sex is the most important feature in this model.

So, I bring this example to show that the problem is not just in the data.  The problem is also in the decisions we make about how to use that data, because in this example the data was the same, the dataset was the same, and the model was the same decision tree, but because of my decision to use one variable instead of another, the system's behavior is different.

Okay.  That's the same thing that happened in the article published by Science.
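To make the experiment concrete, here is a minimal sketch of what Diogo describes, assuming the public Titanic CSV and scikit-learn; the file name and library choice are assumptions, not named in the talk:

```python
# Sketch of the feature-selection experiment described above.
# Assumes a local "titanic.csv" with the usual public columns
# (Pclass, Age, Fare, Sex, Survived); both the file name and the
# library choice (scikit-learn) are assumptions, not from the talk.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("titanic.csv").dropna(subset=["Age"])
y = df["Survived"]

# Model 1: trained on class, age, and fare only.
X1 = df[["Pclass", "Age", "Fare"]]
tree1 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X1, y)
print(dict(zip(X1.columns, tree1.feature_importances_)))

# Model 2: same data, same algorithm, plus the sex variable.
X2 = X1.assign(Sex=(df["Sex"] == "female").astype(int))
tree2 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X2, y)
print(dict(zip(X2.columns, tree2.feature_importances_)))
# With the usual Titanic data, sex dominates the importances in the
# second model: the dataset never changed, only the designer's choice.
```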

We also need to address the fact that we need additional data.

>> MODERATOR:  Could you conclude please.

>> DIOGO CORTIZ DA SILVA:  Yeah.  We need additional data.  This is an example of machine learning image recognition.  The system can recognize the first three images here as a ceremony, a wedding, a marriage, but not the last one.  That happens because the system does not have enough data about local cultures.  So we need, in some way, to make that data available.

So, there are some crowdsourcing initiatives.  Google has the Crowdsource app, where you can take photos and upload them.  It's a design strategy to get that data.  And just to conclude, we also need to give users, through the interface, more control over the data and the algorithm; this is an important design technique that I think Heloisa will tell you more about.

Here is an example, a simple example.  In Brazil, in Portuguese, if you translate "doctor," Google Translate gives you the male form, and if you translate "nurse," it gives you the female form.  That's based on the data, on the examples, but we can also fix this in the interface, and that's the fix Google has made: now, if you translate "nurse," it gives you both options and the user can choose what they want.
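As a rough sketch of that interface-level fix, here is a toy illustration; the dictionary is a hypothetical stand-in for a real translation model, which the talk does not detail:

```python
# Sketch of the interface-level fix described above: when a source
# word maps to gendered target forms, surface both options instead
# of guessing.  The dictionary is a toy stand-in, not a real model.
GENDERED_PT = {  # hypothetical English -> Portuguese gendered pairs
    "nurse": ["enfermeiro (masculine)", "enfermeira (feminine)"],
    "doctor": ["médico (masculine)", "médica (feminine)"],
}

def translate(word: str) -> list[str]:
    """Return all gender variants so the user can choose."""
    if word in GENDERED_PT:
        return GENDERED_PT[word]
    return [word]  # passthrough fallback in this toy example

print(translate("nurse"))  # both options shown, as in Google's fix
```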

>> MODERATOR:  Please complete.

>> DIOGO CORTIZ DA SILVA:  Okay.  Okay.  Thank you.  Sorry.

>> MODERATOR:  Thank you very much, Diogo.  I'm sorry for being the clock man, but it's part of my nature, so I have to do it.  So, let's move to Heloisa Candello, a researcher at the IBM Research lab in Brazil.  Please share with us your vision of design for artificial intelligence.  Go ahead.

>> HELOISA CANDELLO:  I'm sorry.  I think the presentation is not here.  The presentation is not changing.  I'm sorry.

Anyway, I am Heloisa Candello from IBM Research Brazil.  My first degree was in design and my PhD was in computer science, done here in Europe, in England, at the University of Brighton.  I'm going to talk a little about our projects.  As Diogo said, we have a multidisciplinary group: we have people from computer science, machine learning experts, multi-agent experts, people working with NLP as well, and I'm in the last area there, which is machine teaching.

So, we are now concerned with interfaces, and not only the interfaces that end users will use, but the interfaces for the people who train the machines: people who are not computer scientists, not programmers.  How can they train the machines?  That is the kind of study we are doing as well.

Here we have several projects on conversational systems that we have worked on over the last three or four years.  The first one there, with the faces, is for micro-credit clients who have their own micro businesses; we did a project with them and a few studies as well to understand their context in Brazil.  We have a financial advisor, the green one, a chat where people can talk to different chat bots, and each chat bot represents one kind of investment.  It's for financial education.

The other two are for public spaces, and I'm going to explain them a little more.  This one has a multi-bot platform behind it, the same platform as the financial advisor I mentioned before.  On this platform you have three chat bots, and they are characters from a famous book in Brazil.  The plot is about a love triangle: you have the husband, the wife, and the best friend, and the husband is very jealous.  But the author of the book never answers whether the wife had an affair or not, and actually everybody in Brazil has at least one question to ask those characters.  It ran for two months in an exhibition space, and we studied it to understand how people react to and interact with chat bots in a public space, and what kinds of features we should consider in the design of systems and machines for public spaces.

So, I'm going to show a video.  This is a guide from the museum, and he's explaining how to use the installation; visitors address the characters by their names and add their own name.

(no English translation)

>> HELOISA CANDELLO:  (Speaking off mic)  There is a projector there as well, with an analysis of each phrase according to the sentiment of the chat bot.  So here she's very mad, and of course she says she didn't do anything, so the husband has no reason to be worried.

But, anyway, we studied that to understand how people engage with chat bots in a public space and what the effects of the audience are.

So, one thing we noticed is that when people are observed by other people, they act differently when interacting with those machines.  People who were observed by acquaintances, for example family and friends who were there together, had a different kind of interaction than people who were observed by strangers, whether in a queue waiting to interact with the exhibition or standing around the table in front of them.

What we saw is that people observed by strangers felt more connected with the chat bots and asked more questions as well.  And we found something about gender.  One challenge of chat bots and machines is that people treat them like oracles: they ask about football, about the weather, about anything, and they don't keep to the scope of the system; they don't ask about what the system actually knows.

We saw that males and females behave differently: males asked about 50% more out-of-scope questions than women in that context, and when people were observed by strangers, they asked fewer out-of-scope questions.  By out of scope I mean questions that are not related to the book.

We also saw, thinking about the public space, that users observed by family and friends kept engaging and asked more questions, and users addressed by their name, who could see their name on the table, also engaged more.  But when you combine the two variables, people who are with family and friends but see only one name on the table don't engage as much, because the machine needs to acknowledge that more than one person is interacting in that space, not just one.

>> MODERATOR:  Two minutes.

>> HELOISA CANDELLO:  Okay.  So, the last one I'm going to show is a kind of challenge, because we have to talk about AI to children in 30 minutes in a museum space.  How can you teach children about AI?  This is a project for children from 9 to 14 years old that will be in a museum next year, and the idea is to talk about AI, to ask what minimal understanding children can have about it, and how we can open up the black-box nature of AI systems.

So we designed an exhibition, and the idea is that you have something like a ‑‑ but instead of having two humans and one machine, you have three machines.  Those heads represent the machines, and we have three stations where children, in groups, will teach those machines.

We have a prototype in the lab now, and this is how it works.  You can give all sorts of information to the machines, a spreadsheet, a database, but the big challenge for machines today is to understand humans.  So the children will teach the machines how humans talk: they will give the machines examples of human questions.  Here you have an answer like "Who is Charles Darwin?", and they are going to add questions to the system to train the model.  The idea is that you start with, for example, 38% confidence, you see that, you teach the bot, and later you have 95% confidence, for example.

We are doing tests with children.  Yes, so the idea is AI systems use knowledge acquired from human beings.  AI systems do not know everything and make mistakes, and AI systems are corrected and improved by human beings.
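As a rough illustration of that machine-teaching loop, here is a minimal sketch, assuming a simple text classifier built with scikit-learn; the talk does not name the underlying stack, and the intents and phrasings below are hypothetical:

```python
# Minimal machine-teaching loop: confidence rises as people add
# example phrasings.  The library choice and all data are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed examples: a few ways humans phrase two intents.
questions = ["who is charles darwin", "what is evolution"]
intents = ["darwin_bio", "evolution_def"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(questions, intents)
print(model.predict_proba(["tell me about darwin"]))  # low confidence

# "Teaching": children contribute more example phrasings per intent.
questions += ["tell me about charles darwin", "who was darwin",
              "explain darwin's life", "how does evolution work"]
intents += ["darwin_bio"] * 3 + ["evolution_def"]
model.fit(questions, intents)
print(model.predict_proba(["tell me about darwin"]))  # confidence rises
```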

>> MODERATOR:  One minute, please.

>> HELOISA CANDELLO:  We used several design methods in the course of those projects; I'm not going to explain all of them, don't worry.  We have other projects as well, and I can talk more about them later.

And that's it.  Thank you so much.

(Applause)

>> MODERATOR:  Thank you.  Thanks very much.  So now we're going to have remote participation from Jaimie Boyd.  Jaimie Boyd is the Director of Open Government at the Treasury Board of Canada, and we thank her for joining us because I know she is several time zones behind, so it is probably very early in the morning in Canada.  Thanks very much for joining us.  You have the floor, please, Jaimie.

>> JAIMIE BOYD:  Thank you very much.  Good morning, everybody.  My name is Jaimie Boyd.  The description of my position is actually slightly out of date: I'm now the chief digital officer of the government of British Columbia, the westernmost province of Canada, 5 million inhabitants and a beautiful spot, and I hope you all come to visit at some point.

I would like to share a few perspectives today on the topic of AI and open data, and I think one thing I bring that is a little bit unusual in the public sector is experience managing an open data portal.  For the last three years, my team was responsible for managing open.canada.ca, the Government of Canada's open data portal, and we took a number of AI projects to production.  The Government of Canada invested quite heavily in building ethical approaches to AI, and we felt that, in the interest of being a learning organization, it made eminent sense to be shipping products.

So, I would like to share with you a few experiences in building out both open data and AI practices, and some of the lessons we learned along the way about why it makes great sense to leverage open data for AI and vice versa.

I hope that you're able to see the deck that I've prepared; if not, I'll try to be particularly descriptive in my words.

So, the first thing I want to do, recognizing that this is a fairly diverse audience, is a little bit of level setting around the perspective that many of us in government bring, and that is very much one of disruption.  The world has changed, compute has never been faster or cheaper, and with that, governments are changing.  The imperative to innovate has never been so great.  Governments like my own, at the sub-national and national levels, are leaning into this idea of e-government: a government that uses modern technology, as well as the culture and practices of the modern age, to deliver great services that are deserving of citizens' trust.

It's somewhat important, I think, in the context of talking about adoption of AI, to recognize that we're grappling with a fairly significant misalignment with citizen expectations.  We are well aware of this, and we see it manifest in the data around credibility and whatnot.  People around the world have never been so polarized, and their level of trust in public sector institutions has rarely been so low, and this makes sense in the digital age, right?

I'm not saying it's a good thing by any means, don't get me wrong, but we have citizens who are able, with a click from the palm of their hands, to order from Skip the Dishes, while service delivery in the public sector still revolves around bricks and mortar, and that misalignment doesn't make a huge amount of sense.  If you're following along on the deck, I'm now on Slide 5.

What we can do then is look at how this misalignment shows up in the views of our citizens.  I'm showing you some polling data from Canada: 60% of Canadians feel that laws and government policies are not keeping pace with changes in technology.  So, in that context, there are a few things that, in my view, we somewhat urgently need to do to more effectively leverage the opportunities of both AI and open data, and I want to speak a little bit about that.

So, if you go on to Slide 7: AI in the public sector can very much help us deliver sound services to our citizens, and that's the true north of why we exist in the public sector, right?  Among the things that effective use of AI can do is improve service delivery; we just heard from Heloisa a wonderful story about chat bots.

It can also support our decision-making abilities, so we can automate a lot of the triage; I'm thinking of immediate applications around fraud detection, for example.  It can support us with advanced analytics, improving our ability to predict outcomes and gather intelligence, and then, of course, there are internal processes.  I'm going to speak to a couple of examples to make this as concrete as possible.

If you go to Slide 8, you'll see these are not theoretical ideas.  These are things actively in production in our governments, so I want to make it clear that when we talk about ethical approaches to AI, this is not a hypothetical future.  These are just a random smattering of examples that I'm personally familiar with and have been involved with: managing our bus routes, so fleet management and predictive analytics around bus departures; natural language processing to help identify technical risks in our regulatory processes; interactive court services; our ability to triage bankruptcy cases when they come into the national regulatory system; our ability to identify regulatory infractions for toys.  These are fascinating case studies, often using content that's volunteered by citizens online, and automation lets us triage that content much more effectively.

I'll give you a very specific example that I was personally involved with, from this May.  Within the Government of Canada, we were responsible for generating a lot of the citizen engagement behind open government, which is about transparency, accountability, and citizen engagement.

So if you go to Slide 9, you see a screenshot from one of the tools we ended up building in collaboration with OpenText, a Canadian software company, using a tool called Magellan.  What it did was real-time sentiment analysis of public content made available through social media by participants in a major event that we ran.

So, this kind of analysis is absolute gold for us in the public sector, right?  Previously, when we wanted to know what people were thinking and saying, we would go out and do surveys.  Now, with the advent of AI, we're able to look at public content and have real-time analytics.
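For readers who want a feel for what such a pipeline involves, here is a minimal sketch of sentiment scoring over public posts; Magellan is proprietary, so this stands in with NLTK's VADER analyzer, and the posts are hypothetical:

```python
# Sketch of sentiment scoring over public posts.  The analyzer
# (NLTK VADER) and the sample posts are stand-ins, not the actual
# OpenText Magellan pipeline described in the talk.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

posts = [  # a real deployment would consume a live social media feed
    "Loved the open government session today!",
    "Still waiting in line, this is frustrating.",
]
for post in posts:
    score = analyzer.polarity_scores(post)["compound"]  # -1 neg .. +1 pos
    label = ("positive" if score > 0.05
             else "negative" if score < -0.05 else "neutral")
    print(f"{label:8} {score:+.2f}  {post}")
```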

And we're able to take a relatively nuanced approach, recognizing the limitations of these things: not everybody is hanging out on Twitter, and Twitter is itself a bit of an echo chamber.  But if you look at Slide 10, you see that our ability to triage on the basis of semantics is certainly improving.

We can extrapolate from this very select example to broader reflections on how to apply AI to open data.

When we look at data sources based on open data, we get incredible availability and replicability, which is really important when we talk about the ethics of AI.  In the public sector, we have an imperative of explainability, and within the Canadian context we've chosen to take a fairly nuanced approach.  The burden on public officials to explain the functionality of an algorithm used, for example, to figure out which camping site you're assigned in our national park system is lower than the burden associated with using an algorithm to recommend a migration status, for example.  Taking that triaged, nuanced approach is critical, but across the board, open data seems to be a powerful enabler of things like explainability.

A second advantage of focusing on the open data space is for training purposes.  This is data that is already relatively high quality in terms of data standards; certainly within the Canadian context we imposed very high standards for interoperability, for example, and having a clean data source that is interoperable across national borders is presumably a huge advantage.  That was certainly our experience.

Finally, it's safe data.  Especially at the sub-national level, in my current role, we manage a huge amount of fairly intimate data, and I'm talking health records and education records, the sorts of things you really do want to manage with a great deal of care.  Open data only includes data that has already been rigorously reviewed in terms of security, so there are lots of advantages to the open data space.

If you go to Slide 11, I think it's well worth thinking about how we behave in the public sector when we look to apply AI to our processes, particularly in leveraging the opportunities of open data.  In the interest of time, I want to briefly share three things that I think can support a truly humanistic approach to the application of AI in the public sector.

The first is what I refer to as hygiene.  We need to be courageous in shipping code early and often.  We are paid by the taxpayer; it is an imperative that we provide value repeatedly and quickly.  But we also have to impose on ourselves the rigour, the discipline, the culture, and the governance to be successful in our use of these tools.  In my experience, there are three categories of hygiene conducive to good outcomes.  The first is an obsession with users, really making sure that we test our services with real people.  The second is being agile in everything we do, not just in shipping software, but embracing the discipline of agility, testing and iterating.

Finally, being open.  Open data is a critical enabler, but so is being open in everything we do: embracing open source code, embracing transparency, building across sectors, and having advisory committees and scrutinizers of algorithms who are as diverse as our populations.

The second big category of opportunity is around building algorithms with citizen views embedded in the design.  On Slide 12, you see the results of a survey done in Canada by Ipsos in 2018, and there is a fairly clear trend: cultural acceptance of the use of AI really does vary with the nature of the algorithm.  There is a high degree of acceptance when the algorithm supports decision-making, the sort at the top, and the level of support decreases fairly dramatically as we get to algorithms that can have a direct impact on people's welfare and livelihood and whatnot, so it's a sort of intimacy question.

So, in my view, we have to be very, very thoughtful about the views of our citizens and design our use of algorithms on the basis of citizen preferences.

And the third big category of actions for humanistic approaches to AI in the public sector, in my view, is around trust and ethics.  On Slide 13, you'll see a screenshot of our ID card.  This is significant because British Columbia, my government, was the first government in North America to provide a unified ID card based on health and driver's licensing, and I think it's an important kind of initiative above all because it's a clear investment in building up trust.  The backbone of collaboration in the digital age is trust, and we need to be building mechanisms and systematic approaches to ensure that the people we collaborate with in the digital age are who they say they are.  Using emerging technologies, such as distributed ledgers and whatnot, to provide that context of trust, and building out ethics on top of it, would seem to be absolutely critical.

So, last slide.  Please feel free to follow our work; we use the hashtag #DigitalBC.  There are tremendous opportunities around the use of open data for scalability, availability, and replicability, and of course for the safety and assurance we can provide our citizens when it comes to the ethical use of AI.  I'll pause there and thank you very much for accommodating my remote participation.  I'm at that strange time where it's 2:22 a.m., so I'm not quite sure whether I'm supposed to go with wine or coffee at this point, but thank you for bearing with me.

(Applause)

>> MODERATOR:  Okay.  Thank you very much, Jaimie Boyd, for sharing your vision on open data, open government, and the approach to artificial intelligence.  If you are in Vancouver, now it's time to go to bed, so have a nice rest.

So, let's move quickly to Krzysztof, a lawyer and activist from Poland and also a director of a foundation there.  Please, you have the floor to share your vision of artificial intelligence and humanistic design.

>> KRZYSZTOF IZDEBSKI:  Okay.  Okay.  It's on.  I'll be very brief because I see we are running out of time.  Basically, what I want to concentrate on, picking up something that Jaimie already mentioned, is how you design algorithms or AI solutions within government in the context of citizen-state relations.

We did a report in several countries in central and eastern Europe, and what we found is that, basically, there are different levels of lack of transparency, and that is the main problem.  It is connected to the general issue that citizens lack trust in politicians and public officials, unfortunately, for various reasons.  Yet governments sometimes hold the magical belief that we as citizens would more easily believe machines created by politicians.  It is quite the opposite: as people told us while we were doing the report, when you have to deal with a case that concerns your life, your situation, and the government is responsible for it, you are already wary of getting into contact with the administration because you're not sure about the outcome.

It is even worse if the machine is deciding, because this is how you understand it: the machine is deciding, and you feel that you have no one to appeal to, that you are actually dealing with something that has no human instinct.  I think direct contact with public officials is always very important in this respect.

So, I think this is something that, at least in the countries we covered in the report, is completely underestimated.  There is no human-centered design, no will to engage citizens or any other groups impacted by automated decision-making in the actual design process.  Not because they are experts in the field, of course they're not, but to give them a feeling of ownership and an understanding that the solution is designed not to harm them but to support them in their lives.

One of the examples we have is the system for allocating judges to specific court cases.  This is a very interesting example because, although it directly impacts citizens, it also builds mistrust among judges themselves.  The system was tested in only two courts before it was actually introduced, and the results and details of the tests were never really revealed to the judges or to the broader public.  Judges cannot compare their data with what other judges got, meaning: I'm not really sure whether my 10 cases are fewer or more than my colleague has.  Especially in the context of the discussion on the independence of the judiciary, and the general mistrust of the government, these are the things that worsen the problem of trust in public administration.

Also, in terms of transparency, there has been hardly any discussion of using open data here, and not only in the sense that open data standards make data easy to reuse, et cetera.  First of all, one of the standards we have in open data is metadata: you describe what kind of data you are actually using, so that even if you are not a specialist, you can understand more about how the system was created.  We don't have this, and I don't want to sound like I'm blaming the government; I think this is still too fresh for them to understand how important it is, because what we see, especially from this transparency perspective, is that public officials don't understand how the AI or the algorithm works either.
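To illustrate the point about metadata, here is a hedged sketch of a dataset description in the spirit of the DCAT open data vocabulary; the talk mentions metadata only in general terms, and the record below, written as a Python dict, is entirely hypothetical:

```python
# Hypothetical metadata record for a dataset behind an automated
# decision system.  Field names loosely follow the DCAT vocabulary;
# every value here is invented for illustration.
case_allocation_metadata = {
    "title": "Court case allocation records, 2017-2018",
    "description": "Data used to configure the case-allocation system.",
    "publisher": "Ministry of Justice (hypothetical)",
    "issued": "2018-06-01",
    "license": "CC-BY-4.0",
    "fields": {
        "court_id": "identifier of the court",
        "case_type": "category of the case",
        "eligible_judges": "number of judges eligible for assignment",
    },
}

# Publishing such a record lets a non-specialist see what data feeds
# the system without having to read the model or the source code.
for field, meaning in case_allocation_metadata["fields"].items():
    print(f"{field}: {meaning}")
```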

But for citizens, and this is crucial, we also saw in our interviews that a person will not contact the company that created an algorithm; the citizen will contact the public official, because that is the natural thing to do.  Public officials, however, are not aware of that.  And from the very beginning there is the question, which we don't have time for, of how to procure machine learning or automated decision-making tools in the first place.  So I think the whole discussion about human-centered design should involve not only groups of citizens but also the public officials who will actually be using the tools or be responsible for their use.

And when problems occur, and problems do occur, public officials should be prepared to at least give a first round of explanations of what went wrong and what the possible solutions are.  In one of the cases we had, an algorithm allocated children to kindergartens, and it did so completely wrongly.  Imagine you want to send your kid to kindergarten and something goes wrong, and you're not really sure whether your child is in the kindergarten or not, or in the right age group.  In a situation like that, you don't care about trade secrets, or that the company knows how to do it, or that you should trust the system because it's a machine and a random process.  You have to remember the human factor and build competencies within the public sector.

>> MODERATOR:  One more minute.  Are you finished?

>> KRZYSZTOF IZDEBSKI:  Yes.

>> MODERATOR:  Great.  Thank you.  Yeah.

(Applause)

Okay.  Thank you very much, Krzysztof, the only one who finished on time.  Now we have only 5 minutes to conclude, so I give the floor to Luis Aranda, an economist and policy analyst at the OECD.

>> LUIS ARANDA:  I'm very honored to be on this panel surrounded by such talented colleagues; it has been very interesting so far.  What I want to do today, briefly and quickly because we don't have much time, is present what we at the OECD have done in the realm of artificial intelligence, highlighting in particular the humanistic approach we've taken so far.

I'm going to start from the end and then work my way backwards.  Many of you have probably heard about the OECD AI Principles already, probably several times this week, so you may ask yourself: what are these principles?  What is this all about?  Very simply, as Jessica Cussins put it in her article in The Hill one week after the principles were launched, the OECD principles are the world's first intergovernmental policy guidelines on AI.  So what does this mean?

Well, there are a lot of other principles going around the world, other organizations coming up with principles, ethical guidelines, et cetera, but so far the OECD AI Principles have been the only ones adopted by governments.

Now you may say, okay, it's the governments of the OECD countries, the rich countries' club, as some people call it.  That's not completely true.  It is true that in May of this year the OECD countries adopted them, along with six other countries, five of them from Latin America, including Brazil and Argentina.  But then in June something very significant happened for these principles: the G20, meeting in Japan, came up with their own set of AI principles, the G20 AI Principles, and if you compare them with the OECD principles word by word, they're the same.

This is important because what you see on the map probably covers more than 90% of AI development in the world.  Working my way backwards a little, I'm not going to go through this slide, but I will mention the AI expert group we formed to write these principles.  I mention it because, as I understand, some countries are trying to come up with their own sets of AI principles, including Brazil; I was talking to Diogo about this yesterday.  I think the key success factor for these principles was the multi-stakeholder approach we took.  We had 52 experts.  It was not OECD staff drafting the principles; it was experts from all walks of life, including academia, the private sector, labor unions, and civil society, and this is what allowed us to reach consensus on the principles.

Now, I have tried to put them on one slide, so if you want to take a picture, this is the right time.  There are five value-based principles and five recommendations for policymakers.  As you can see, they start with inclusive growth and sustainable development and go all the way to accountability for AI actors.  In the recommendations, we make suggestions for how governments can push AI forward.  I'm not going to go through them, in view of the time.

I'm just going to show you a couple of clippings that relate to the topics of today's panel.  For instance, on human-centered AI, you have Principles 1.1 and 1.2 about advancing the inclusion of underrepresented populations and about human-centered values and fairness.  This is the movie trailer, not the spoiler: if you want to watch the whole movie, the full principles run five pages.  It's a little more than 140 characters, but it will be okay, I think.

If we talk about data, we also address it in the principles, as we encourage governments to invest in, and to encourage private investment in, open datasets, for instance, and data trusts.

So, what's next for us?  We want to move from words to action at the OECD, from principles to practice, so we're going to launch the OECD.AI observatory in February 2020, basically following up on the recommendation to monitor the implementation of the principles by countries.  And to convey the humanistic approach, we included it in the logo.  I'm not going to go through this in detail.  We're going to show live data points, for instance AI-related news as it happens around the world; you can see the geographical locations, in all languages, popping up on the map.  Another thing we're going to show is AI research by country and how it has advanced over time; as you can see, it's a five-dimensional chart.

And just to wrap up: I was walking the other night along the Berlin Wall and I saw a painting that really caught my attention.  It says, "Get Human."  Being Mexican, I know there is nothing human about a wall, and yet we keep building them.  So let AI not become the new wall that divides our societies between the rich and the poor, the developed and the developing, the haves and the have-nots.  The time to act is now, and it's good that we're having these discussions here.  Thank you.

(Applause)

>> MODERATOR:  Thank you, panelists.  Thank you very much for your contributions.  Before finishing, I want to give many thanks to Amanda for the support she gave us, and also to thank Nathalia, the rapporteur for this session; you are the memory of this meeting, so thank you very much.  We have run out of time, so thank you very much for your presence here, and we leave the floor.  Again, thank you for the sharing you could have with all the panelists here.  Bye-bye.

(Applause)