The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
MODERATOR: Hello, friends, and welcome to this workshop, which has been organized by the Just Net Coalition, a coalition of organizations from across the world working for a just and equitable internet. You can look us up on the website, along with our co‑organizers, who are represented on the panel here by Mishi. Let me start this session on making artificial intelligence work for equity and social justice with some initial remarks, after which I will pass it on to the panelists, who will make their presentations before we open the floor for comments. Sorry ‑‑ if your audio is not good, you can use this earpiece, which makes it louder for you.
So, okay. The intention behind organizing this workshop is to discuss artificial intelligence not in technical terms, but as a social construct. There's a lot of confusion about what exactly somebody means when they mention artificial intelligence ‑‑ among businesses, for instance, that such and such a business has started using artificial intelligence while others have not yet. Generally, when they say they have started using it, the kind of technology they probably mean is machine learning; other people use the term more broadly, because it's a good way to market themselves, saying that they use artificial intelligence.
So technologies change, but what we are interested in is artificial intelligence as a huge social force, and we try, here in this workshop, to reclaim artificial intelligence as a social construct and not as a technology. Whether it is diagnosis or prediction, whether it is machine learning or artificial intelligence, what all these things do is use data to find patterns, and on the basis of those patterns they give insights which can be used as intelligence to organize human affairs. That's what is actually happening.
And this has been happening for a long time. There was a time when you would not call it that; for example, when you were typing into your laptop and it auto‑completed, that was also a kind of insight based on data. People call it artificial intelligence in modern times, but at that time ‑‑ my colleague was telling me that recognition of optical ‑‑ was really big at one point. So basically we are not interested in how the technologies are evolving ‑‑ technologies may change, we may have something brand new on networks which would be even more intelligent ‑‑ so we are not talking about artificial intelligence as a technology. What we are talking about is that we have reached a time when insights and intelligence based on data have become so powerful that they do not just aid human beings in decision making but have taken on a force of their own ‑‑ not organizing just one operation, but organizing a whole lot of operations in such a manner that whole social organizations, from the economy to governance to social relationships, get reorganized on the intelligence derived from data. In some of my papers I have called this digital intelligence, to distinguish it from artificial intelligence; what we are talking about is data‑based intelligence. So that's the construct we are talking about, which is now so powerful that it is reorganizing all kinds of human relationships and human organizations, including the economy.
China and the US are said to be in a race ‑‑ when people talk about the AI race, the idea is that whoever controls artificial intelligence would control the whole world ‑‑ and people like Stephen Hawking have warned about what can happen if we do not do something about this technology. So we are talking about that particular social phenomenon here, and not only the technology. That's what this workshop is trying to do: reclaim artificial intelligence as a social phenomenon and not just a set of technologies.
The second part of the name of the workshop is equity and social justice. Again, here we treat it as a structural concern ‑‑ it's not that you have systems working and, once they are working, you then recognize what kind of injustice they are creating and devise a strategy to correct it. We want the systems, as they are designed, to have social justice and equity as a part of the design. And that, again, is a part of this workshop.
We hear a lot of talk that there should be a basic income because people are losing their jobs, which is very important, but I think that's a way of addressing the social inequities created by the preponderance of artificial intelligence only after the fact. We want the whole system, and how it organizes human affairs, to be designed in a manner so that the system, as it works, is itself fair and just, and allocates resources in a just manner. With that, I now pass you on to the panelists. The first, working on digital rights, is Juan Carlos.
>> JUAN CARLOS: Thank you for the opportunity to participate in the workshop. We are a digital rights organization, based in ‑‑ as was mentioned, and what we do is mostly try to defend human rights as they relate to the use of technology. And this has many consequences regarding where justice comes from, where it will go, what use of technology is respectful of human rights, and what we can do about it as people.
Sometimes we find that we have to use this or that technology to protect ourselves; sometimes the problems that we find are much deeper than that. This is one reason why social justice is an important factor in the things that we do, and why, while we started out thinking about digital rights as individual rights carried over to technology, the truth is that complex problems in society have much deeper roots and very different forms of solutions, if we want to reach those solutions.
This starts, of course, with the idea of social justice and equity as the determining factor, or the goal that we want to reach. If it's true that artificial intelligence and technology are part of society, if it's true that they are a social construct, then it's also up to our society to determine what our goals are and what the tools for those goals are. Coming from Latin America, with that perspective, what we see in the field of artificial intelligence is often the use of a marketing term, used to sell technologies whose most relevant features have already been set before their use and implementation, without our necessarily being influential in their design, in the data that they gather, or in the results that they present. So, in that sense, the disconnect between our level of development and the development of these tools becomes all the more grave.
In a sense, we might, as developing countries, again be just the clients of a certain market. But the truth is that because we have this kind of technology, which can process data and can be used to make decisions based on that data, what we find is governments and companies finding in technology solutions to problems that were not necessarily the problems that society itself identified. So, to go back to the idea of social justice and equity: if we do not question the premises behind the design of artificial intelligence systems, what we find is another version of techno‑solutionism, where we rely on technology to solve problems. But social problems can have only social solutions, not technological solutions. In this sense, what we need to empower is not only the idea of having better technologies, but the idea of influence: of seeing the development and deployment of technology as a matter of participation, as a matter of citizenship, and as a matter of involvement in those deep decisions.
If we want to achieve social justice, by whichever tool we choose, we need to correctly identify our social priorities. AI can be helpful in that, finding the patterns and the sources of certain problems that we might not have seen from existing sources, or through findings that by human effort alone might take ridiculously large amounts of time. But to get to that point, we still need, as with any other technology, to look to society as the source of the answers to the questions about what problems we are trying to address, what measures we need to take in order to achieve social justice, and how we measure when we have gotten there. Thank you.
MODERATOR: Thank you so much. You just took four minutes and we are going to (?). And now our next speaker is Norbert Bollow.
>> Sorry, guys, Norbert is the next speaker. He is the coordinator of the Just Net Coalition, and he will give us a little bit of input on what kinds of things these technologies are, what they can and cannot do, and what they can be made to do. Norbert?
>> NORBERT BOLLOW: Looking at this and the technical reality behind it, what I want to do is give us a very brief and simplified grounding which I propose to use for the purposes of this workshop. Now, the term artificial intelligence: it's the opposite of ‑‑ (audio fading in and out). We receive data through our eyes, through what we hear, and as we process this data, we learn to recognize patterns, and through those patterns we learn many things. In a way, human intelligence works like a muscle: the more you exercise it, the stronger it gets. But unlike a muscle, it can learn to do very different things. This muscle here can only pull my arm ‑‑ maybe it is weak, maybe it is strong, but it cannot do different things. Human intelligence can learn to do very different things: Chinese is different from German, and so on.
Talking about artificial intelligence, I would propose to think about it, again very much simplified, as something that can recognize patterns and, on the basis of those patterns, generate human‑readable output or make something happen. For example, on the basis of some patterns in the language that someone types in a social network post, it may decide to make the post visible to many or only to very few people. Or, on the basis of data about a natural person, it may decide whether that person is going to be a profitable customer and make them a good offer; or maybe that person looks like a risky customer, or someone not likely to have much money, and it gives them only a very bad offer. Or take the famous example of the self‑driving car: there is something inside which, depending on patterns like a red light or a green light and certain markings on the street, will drive or not drive, and so on. So these are systems ‑‑ we speak about artificial intelligence systems ‑‑ and they have many components.
I want to briefly highlight two types of components. The first type I call algorithmic components; that is something like a traditional computer program. They may be very complex, but such a component is based on a programmer understanding some kinds of patterns and programming the computer to recognize them and take certain actions. Using just algorithmic components, you can program a computer to play chess; it will likely not play terribly well, but it could play chess. On the other hand, in recent times computer hardware has gotten good enough that a different type of component has become practically feasible. I would contrast it with the algorithmic components by saying that an algorithmic component is inspired by how humans think consciously: how humans reason or doubt, how humans decide to explore something in more depth, to think more moves ahead, and so on ‑‑ that's all inspired by conscious thinking.
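As a minimal sketch of what the speaker calls an algorithmic component ‑‑ a pattern the programmer has understood in advance and written down as an explicit rule ‑‑ consider the self‑driving example from above (the function name and signal values here are invented purely for illustration):

```python
# An "algorithmic component" in the speaker's sense: the pattern
# (red light / green light, street markings) is understood by the
# programmer and coded by hand as a fixed rule.

def may_drive(light: str, markings_clear: bool) -> bool:
    """Hand-coded rule: proceed only on a green light with clear markings."""
    return light == "green" and markings_clear

print(may_drive("green", True))   # the car drives: True
print(may_drive("red", True))     # the car waits: False
```

No data and no training are involved; the rule is fixed in advance, which is exactly what distinguishes this kind of component from the learned components the speaker turns to next.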
Now we are at a time when it has become possible to build something that works similarly to the unconscious processes of a brain; these are called neural networks. This kind of technology has made it possible to build systems that not only recognize patterns that programmers understand, but recognize new patterns that no human has ever consciously articulated before.
This can be very powerful. The disadvantage is that you need lots of data to train those computers, enough so that they recognize the patterns, and recognize them in such a way that they know which patterns are important for generating the desired outputs. Of course, you have to tell the artificial intelligence system which types of outputs you want ‑‑ you want to maximize your profit, or whatever it is.
Typically, the goal of a social network company is to maximize advertisement profits, so that is what gets put into the system, and the system figures out the patterns that it will then recognize. So this needs lots of data; that's one restriction: if you have a problem for which you don't have lots of data to train on, this will not work. And the other restriction is that you need a precise definition of what you want to optimize.
You can optimize your profits, but you cannot use an AI system to optimize social good, because you can't put that into a number. Thank you.
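The speaker's two restrictions ‑‑ needing lots of training data and a precisely defined numeric objective ‑‑ can be illustrated with a toy sketch (everything below is invented for illustration; it is not any panelist's system). The learner can optimize squared error because that is a number; there is no analogous number for "social good":

```python
# Toy gradient descent: the system improves only the quantity we can
# write down as a number (here, mean squared error on training data).

def train(data, steps=1000, lr=0.01):
    """Fit y = w * x by minimizing mean squared error on `data`."""
    w = 0.0
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Training data generated by the rule y = 3x; with enough data and a
# numeric goal, the optimizer recovers w close to 3.
data = [(x, 3 * x) for x in range(1, 6)]
print(round(train(data), 2))  # → 3.0
```

With too little data, or with a goal that cannot be expressed as a number to minimize, this whole loop has nothing to work on ‑‑ which is the point being made.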
MODERATOR: So we would have to put social justice into a number and put it into the artificial intelligence ‑‑ that's the kind of regulatory objective we have to develop. Our next speaker is Mishi Choudhary, and she will give us an understanding of this in more of a social setting. Mishi?
>> MISHI CHOUDHARY: Thank you. In the coming year, we (?) are not only going to invest more in currencies; we all know that anything bearing the magic term artificial intelligence is where the money is going. But apart from the hype, there are examples already available all around us in which we can see how data plus machine learning is changing society. For at least the last five years, we have watched this technology, which is now responsible for making a lot of health care decisions around us; it's deciding whether prisoners should go free.
I'm sure some of you at least followed the ProPublica investigation of Northpointe. It also shapes what we see on the internet. That perhaps is the closest to all of us in this room: even if we haven't noticed it creeping into the rest of our lives, it predicts and tells you what you want to watch online. Recently we have also seen this kind of technology used to create different kinds of videos and audio which seem so close to reality that it is very difficult to tell what is actually there.
I don't know if you are like me, interested in wasting time on YouTube, watching videos where a horse turns into a zebra, or where celebrities' faces are used in pornography that looks like actual people. It distorts reality, and I understand there is some creativity in it, but it creates a dystopia and a utopia in the same vein. What we are also realizing ‑‑ a little late, but the political climate is definitely helping us realize it ‑‑ is that human society is going to permanently shift because of the power we are investing in the robot.
This is not just science fiction, and it's not the extreme of "everybody's job is going to be lost, so let's talk about universal basic income," which is the way Silicon Valley likes us to talk about it. Rather, the things that we do in everyday life are slowly moving into the network. No wonder we are uncomfortable with the ability to automate decision making, because it's challenging not only the role of unskilled labor but also that of a lot of experts. We have seen how this changes democracy, how it changes our patterns of behavior. If you watch the companies, everybody now wants the world to know that they are AI first and everything else later. And this is not just the tech companies; these are the companies that provide you food, the agricultural companies. For the longest time, better algorithms were expected to produce better results, but what has changed now is that it's about data sets: the data sets are what produce very different results.
This paradigm shift, which happened because of this thinking, is what is fueling the development in AI. Now, we are all creating enough behavioral data every day to be fed into the system, and that's how we are helping make these systems better in various ways.
The purpose of the workshop is not to talk about just the tech, but about how it impacts society in general. So, this is already happening, and it's happening all around us. You are not going to be able to sit it all out, because a lot of the time it's happening in places you don't even know about, and you would actually like some more machine learning to be there, because it just helps you live a better life. And what do you do at such a point? The first thing is, you demand transparency. It's not enough, but you do want transparency.
Earlier we all talked about free and open source software, for all this time: software which you cannot read, you can never trust. If I cannot read what is in this machine, I cannot trust it. Now combine that with data, and it all works together intricately. The first time, we were told it's too complicated; the second time, "we will perhaps give you the software, but the inferences and the data are what we want to keep." That's why you have all these free tools which the companies are ready to give you, but not the data. That's why I think the most important thing to demand is transparency, which will enable a lot of participation. You cannot leave it to the academics or the companies, because even in this setting, look at all the civil society organizations who come and participate here.
It's only very recently that we are seeing people who have traditionally been active in deciding what social justice is, but who have not been active in this space. If it took them this long, and they are still not here, what will happen to all of us who are not participating in what is now being determined about our lives? In 2017 alone, we have seen at least six major groups formed which are now looking into what the ethics of AI should be, or doing research on it. This is just a precursor, because self‑regulation by companies always comes first; qualified outcomes come later, and politicians join the party later, but self‑regulation comes first.
If 2017 is when we saw the formation of these groups, we are going to start seeing the results in 2018 and 2019. So, I'm not saying we are already there, but it's already time to start participating and demanding transparency. The other thing I want to say is that, coming from a different part of the world ‑‑ I practice in New York as well as in New Delhi ‑‑ in some parts of the world, technology is still very fascinating.
When something can look at a photograph, tell you "this is my face," and suggest whom you should tag, it's very fascinating to somebody whose first interaction with a computer is a small device. It's not the same for someone who grew up on computers and hasn't leapfrogged into this generation. So, a lot of people who were unconnected and are coming online for the first time still think of this as big magic, and they are very fascinated.
The narrative in a lot of these companies is that innovation trumps everything ‑‑ everything is subservient to it ‑‑ and government wants to move fast and break things, thereby forgetting a lot of the traditional values of democracy, social justice, and equity which it has taken generations for us to build. But because innovation is most important and economic development is the major goal, it's easy to ignore all of these. I would say a few other things, but I think my time is running out. I do want to stress that our role in building the data sets that teach the machines to develop this kind of artificial intelligence cannot be emphasized enough.
There's an opportunity to create a full spectrum of training sets that reflect a richer portrait of humanity, with inclusivity and diversity. Right now, it's only a few kinds of human beings who are making those data sets, and those who already have the privilege of using the tech are the ones who are going to be doing the teaching. So, if skin color, culture, gender, and the various other things which we think make up the human race are not even reflected in the data sets, you are really far away from what the machine will actually learn and then spit out.
MODERATOR: Thank you so much, Mishi. In the second part of the workshop, which we will move to soon after the speaker interventions, we are going to focus on what we can do about it, because so far we have been framing the issue, describing what the problem is; but as social justice activists, we are very interested in actually being able to do something about things. We will get into that discussion soon. The next speaker is Hans Klein, and he will describe the kinds of elements that are already getting built into international policy menus for these discussions, because we need those discussions ‑‑ there isn't much time. Thank you.
HANS KLEIN: Thanks, thank you for the invitation. This is a topic of personal interest for me, because before I joined the ITU ‑‑ and I have been here for 10, maybe 12, 13 years ‑‑ I worked as an AI engineer, researching AI. This was a time when there wasn't a marketing buzz around it, no money around it, no money in it. Most of the money was in ‑‑ if you are a scientist in this room, you know what I'm talking about.
Anyway, let me start on a positive note. From the ITU perspective, from the UN perspective, and for others, it is very clear that AI has the potential ‑‑ I underline, has the potential ‑‑ to improve life around the world in fundamental ways; it could play a major part in attaining (?). It has the potential, because there are significant challenges, and we need to work on those challenges before we get to where we need to be. But anywhere you have sufficient, decent‑quality data, or the means to collect the data, AI can have an impact. It can be health, nutrition, services for people with disabilities, transportation ‑‑ many, many different areas. These are just early days; we can see a lot more happen, a lot more good happen, but all this will happen only if we are able to tackle the challenges. And the challenges are complex, and you will be hearing about them throughout this session and many other sessions: the ethical issues, of course, and these are conversations within the UN; ensuring proper conduct from the economic actors; biased AI systems; identifying liability; and the whole other topic of ‑‑ there is a group of experts meeting, and there's an organization called the UN Office of ‑‑ Affairs which manages it, chaired by the Indian ambassador in Geneva, so that's a whole separate process.
Then there are the technical challenges ‑‑ I'm trying to give a category to each of these challenges, but they are all interrelated. Technical challenges: ensuring transparency, something which comes up all the time; data challenges; ensuring security if you are using it for critical applications. Then there is the big transformation of socio‑economic challenges: ensuring that the developing countries don't get marginalized, especially those that have a large population that is not connected, less than 50 percent connected. You want to make sure that inequalities are not amplified. And then there is the subject of jobs and the impact on social welfare systems, at least in the short term.
There is a big concern that the most sophisticated of divides is opening up, which will have profound implications for the global economy. So the UN has already termed this a frontier issue and is strongly promoting global cooperation on it. (?) And at that summit, the UN Secretary‑General said that the UN is a platform for discussion, and we need to make sure that AI is used for human dignity and the global good. This is what we should all strive for.
At the UN, we are looking at it from a system‑wide perspective, not from that of individual agencies. I would say there are three angles from which we are looking at this topic. One is how the UN should structure its response. In some of the conversations I hear that, of course, number one is creating a multi‑stakeholder ‑‑ one instance of that is establishing an external panel of experts to advise the UN on different facets, drawing on expertise outside the UN. We have experience and expertise, but the top people are outside the UN, and the UN recognizes that. Another is establishing an interagency mechanism, an internal one, to make sure that the agencies are coordinated.
The second group of questions is what research and ‑‑ action should be undertaken. Examples include reviewing the impact of AI on current frameworks, or conducting more evidence‑based research on social impact, including the whole topic around jobs: whether more jobs will be produced in the long term even if we lose jobs in the meantime. That's something the agencies are looking at. And the third is capacity building, which will be key.
You know, there's a big concern that the whole discussion is being shaped in only some countries and some regions. It's also very clear that we need to ensure a fair, equitable distribution of the benefits of AI, especially if we aim to support the implementation of the ‑‑ thank you, I'm almost done. The agencies are doing different things. The ITU organized the first ‑‑ summit this June, with 21 agencies partnering, with some of the top experts in the world, and presenters from all stakeholder groups were there. Next year, 2018, we plan to organize it again, from the 15th to the 17th of May, so you may want to mark your calendars. There are other things, like our focus group on machine learning, and ‑‑ if you look at the other agencies, they have a whole body of work. If you are interested, I will be happy to explain.
>> MODERATOR: Thank you. I think this is important; we should recognize that it is a positive force, because otherwise we become too critical when we talk about its problems. It is a positive force ‑‑ there are things we could not have done without it, and our civilization is now like that. It is capable of creating efficiencies in all areas, and that is why we are going to use it in all areas, but our questions are also about what to do with its negative effects and how to make it equitable and socially just.
The last speaker is from the Digital ‑‑ Hub, which, along with some other organizations, has been working in this area as a network, and they will bring the different streams of discussion which have been happening into this workshop.
>> Thank you. I always feel this is one of those conversations that's very much like the old Monty Python sketch from Life of Brian, for those of you who have a sense of humor like I do. I always think it's the same conversation with artificial intelligence: what can AI do? Yeah, of course, the roads, planning, transport, the sewage, all of this ‑‑ but apart from that, what has it done for us? And I feel a lot of conversations veer toward that end of things, where there are all these amazing capabilities and all this potential that hasn't been realized, but apart from all of those achievements, what is it really doing?
I think it's a really, really helpful space to be in; it forces us to look at problem solving, it forces us to look at what things can go wrong. But I think many of us tend to live in that dark space a lot longer. For me, what is really interesting is the conversations that AI is forcing from a social justice point of view. You have a robot like Sophia being given citizenship in Saudi Arabia, and suddenly we are discussing what rights robots have in certain countries, which I think is a very powerful frame for asking: are we suddenly treating them better than we are treating humans? Do they have rights that we don't have?
And I know John, who I see here, wrote this great piece about how, if we give robots rights, we would almost certainly do it unjustly. I think that's a useful way of saying that, in trying to think about what kinds of rights we give robots, we should also be thinking about the rights we are not giving certain demographics or certain people in this space. I think the conversations about bias and discrimination, again, create a really good set of feedback loops between offline and online. I was teaching a class in Hong Kong last week, and some of my students came to me and said: yeah, we get all the risks, we understand what is terrifying. But look at us: when we walk into a store or a bank or a nightclub, we are discriminated against because of the way we look, without them knowing anything about us. Maybe online we have a chance of being treated fairly, when they don't know anything about our attributes, or what we look like, or the fact that we are Chinese; maybe it's a fairer space for us to be in. And that was a really interesting perspective.
I think it's also forcing us to have a conversation about skills, and I think it's a very fraught conversation, because, as with most things, the emotional and physical labor of becoming digitally literate, of being included, of having access, is often visited on the people least able to do it. You are asking people to get literate when they are the ones being discriminated against.
The platforms are never asked to do things by design, at the infrastructure level, so that for the poorest, the weakest, and the most marginalized, the systems work in their favor. I think that's another important conversation that this has triggered. Mishi was talking about how innovation trumps everything; I see that a lot in Asia.
We ran three conferences in Tokyo, and that is very much the narrative, and you can see why, because a lot of countries see something like AI as a chance to leapfrog, to catch up on phases of development that they missed out on. They are saying: we didn't participate in many other stages of the industrial revolution; maybe this is what we can turn our IT resources to, now that the outsourcing ‑‑ is waning. They see it as an opportunity.
As I said in another session we did just before this one, they see it as an opportunity for ageing populations: they are not having as many children being born, so they see something like this as replacing a missing labor force, a missing job force, under strong immigration controls; they see it as a way of providing companionship and help for older people that they would otherwise have immigrants provide. These are all interesting ways that it's playing out, and they have a link to equity. I think it also brings up the question of what is ‑‑ that may be solvable: there are social and technical solutions, and sometimes we swap in technical solutions for social problems, and the other way around.
But I think it's really forcing us to think about what we are using this in service of. There was a great tweet thread that ‑‑ kicked off about how, when he was teaching a class at Princeton and they were trying to define fairness, people came up with 21 definitions of fairness. How do you encode that? How do computer scientists engage with these ideas, when we are comfortable having a multiplicity of definitions? How do you code these things into systems? I think all of these have implications for equity.
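The point about competing definitions of fairness can be made concrete with a toy sketch (the groups, labels, and numbers below are invented purely for illustration): the very same predictions can satisfy one common formalization, demographic parity, while badly violating another, equal true positive rates.

```python
# Each entry is (prediction, true_label) for one person; 1 = positive outcome.
group_a = [(1, 1), (1, 1), (0, 0), (0, 0)]
group_b = [(1, 0), (1, 0), (0, 1), (0, 1)]

def positive_rate(pairs):
    """Share of people who receive a positive prediction."""
    return sum(1 for pred, _ in pairs if pred == 1) / len(pairs)

def true_positive_rate(pairs):
    """Share of truly positive people who are correctly predicted positive."""
    positives = [pred for pred, label in pairs if label == 1]
    return sum(positives) / len(positives)

# Demographic parity holds: both groups get positive predictions half the time.
print(positive_rate(group_a), positive_rate(group_b))            # 0.5 0.5

# Equal opportunity is violated: group A's positives are always found,
# group B's never are.
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 0.0
```

Both criteria sound like "fairness," yet a system cannot be tuned to one without taking a position on the other ‑‑ which is exactly the difficulty the speaker raises about coding these definitions into systems.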
I would urge us as a community to focus on what AI is really adding to or subtracting from the social justice conversation, because one of my pet peeves is that we talk about AI, robotics, and big data analytics as if they are all the same thing. I think we all have our own definitions and vocabulary, but, coming from different disciplines, we run the risk of mixing them all up and talking as if they are the same thing. I think that's why intersectionality matters. You need the computer scientists and the lawyers and the humanities people talking about these things. The fact that we are all here, the fact that all kinds of people who never thought about AI are suddenly working on AI, speaks to how much of a game changer it is, how transformative it is.
I think there is also a brain drain of a kind, in that all of us are focusing our energy on AI when one year ago we weren't, and sponsors are requiring us to add it; we feel it's touching every aspect of our lives, from our search engines, to what images are shown, to how fake news works ‑‑ we can't get away from it. I will end with one of my favorite quotations: the future is already here, it's just not evenly distributed. The sooner we stop talking about AI as future potential that may or may not happen, and realize it is here, it is now, it is embodied in the devices and products we use, and it is something that we are part of as an ecosystem ‑‑ at a Rio conference we had, someone said he didn't like the phrasing "AI and inclusion" or "AI and something else," because we are actually part of it: we are within an AI ecosystem, we are actors, participants. We may not all have the same agency or choice, but it is an ecosystem that we are all inhabiting. I think if we recognize that it's here and now, and work on partnerships that would yield real outcomes, that would be a great step forward. Thank you.
>> Thank you, I think you added well to the statement that you need to first recognize the opportunity and the great force that it is. As I said in the opening intervention, it will phenomenally change how all our human systems are organized, from industry to governance to social relationships; it's very powerful. It would change things in this manner because it's useful, highly efficient; otherwise it wouldn't. But we should also start thinking about the role of the policy makers, because there is always a ‑‑ which is highly optimistic about it, because they are the centers of artificial intelligence, the owners of artificial intelligence. There has to also be corresponding thinking about the problems of overselling of that (?); that balance is also required.
And when you see whether the balance is on this side or the other, you need to make those kinds of interventions, because policy makers and social justice activists also look at power, and where there is power there is always abuse, and you should start talking about what kind of abuse there is and what is needed to confront it. When ‑‑ was concentrated there were problems and we needed policies; in the industrial age we worked out problems with intellectual property ‑‑ work with that.
Now there is an intelligence capital which is getting concentrated, which is really useful, at the heart of human reorganization, but policy makers need to think about what we have to add to what has already been done by the market ‑‑ it's not just equally distributed. Some of us like to say that artificial intelligence is so hyper‑efficient that we may have solved the problem of production. In economics there are basically two problems: one is production, the other is distribution. And therefore, since the problem of production would in theory have been solved, policy makers should focus a lot on the problem of distribution; that's what equity and social justice are about.
I open the discussion to the room to also start talking about the kinds of things we need to do. Quickly, I have just three things which come to my mind as conversation starters, about actual things we can do to take control of artificial intelligence for social goods. One is whether artificial intelligence should have some defined targets and tasks which it would have to be doing, and ‑‑ ways to do it, particularly in critical areas like health and transportation and so on; those tasks and targets have to be integrated into the ‑‑ of the transportation or health system, so that whenever that machinery works, it does not just maximize efficiency but also the social goods. Norbert and ‑‑ talked about it: we need to start figuring out how regulators could code fairness into AI and set certain targets which have to be part of the critical systems.
The second part, and there is a lot of this going on: the problem with artificial intelligence systems is that you really do not know how or why they decide things. For example, the new privacy regulation has a provision that you have a right to know from an algorithm how a decision was taken; that is difficult for artificial intelligence because the system cannot tell you how the decision was made. So we would like some part of the artificial intelligence system dedicated to recording whatever it does, to put in human language the logic it used to make the decision. It may be difficult, but this way we can audit the steps an artificial intelligence system takes; decisions which are socially unjust can be audited and corrected.
All this may add a certain inefficiency to the system, but there has to be a trade‑off over whether we need better equity and better social outcomes, which could use up some of the efficiency of artificial intelligence; that's a decision societies will take. Another issue is that artificial intelligence always tries to centralize. If ‑‑ is being organized, it is better to have one coordination center rather than three; if we are to put in five in three, it's better to put ‑‑ in one place because it provides better coordination. That's why you see these industry ‑‑ artificial intelligence has this centralizing tendency, but can we force it to de‑centralize to a certain extent, so that even if we have ‑‑ platforms? Of course, a single transport company which holds all the data would do things more efficiently; with three transport companies there would be some sacrificing of efficiency. Perhaps, even at the expense of some efficiency, we can have, for example, AI as a public good in some cases; there is a concept of open AI which ‑‑ used. These are the kinds of things we need to start thinking about, and I wanted to contribute these points before we start a conversation. The gentleman there first; we will take about five first and decide how we go forward.
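The auditability idea raised above, an AI system that records in plain language the logic behind each decision, can be sketched in code. This is a hypothetical, minimal wrapper (all names are invented for illustration, not an existing library): it pairs a decision function with an explanation function and keeps an auditable log of every automated decision.

```python
import json
import time

class AuditedDecider:
    """Wraps a decision function and keeps a human-readable audit trail."""

    def __init__(self, decide_fn, explain_fn):
        self.decide_fn = decide_fn    # maps input features -> decision
        self.explain_fn = explain_fn  # maps (features, decision) -> plain-language reason
        self.log = []                 # a real system would use durable, tamper-evident storage

    def decide(self, features):
        decision = self.decide_fn(features)
        self.log.append({
            "time": time.time(),
            "input": features,
            "decision": decision,
            "reason": self.explain_fn(features, decision),
        })
        return decision

    def export_log(self):
        # Auditors can replay and contest individual decisions from this record.
        return json.dumps(self.log, indent=2)

# Toy loan-screening rule, used only for illustration.
def screen(applicant):
    return "approve" if applicant["income"] >= 3 * applicant["repayment"] else "refer"

def explain(applicant, decision):
    return (f"income {applicant['income']} vs required "
            f"{3 * applicant['repayment']} -> {decision}")

screener = AuditedDecider(screen, explain)
print(screener.decide({"income": 90, "repayment": 20}))  # approve
print(screener.decide({"income": 40, "repayment": 20}))  # refer
```

The point of the sketch is structural: the audit record, not the model itself, is what a regulator or affected person would inspect after the fact.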
>> Yes, (?) put her finger ‑‑ (?) on what I consider a crucial question, but I will ask the question without an answer. Why, after two or three centuries of democracy, haven't we implemented social justice and equity yet? It is not just for lack of robots to help the elderly lady carry a bag, it's not for lack of data or ‑‑ it is actually because we have too many definitions of equity and too many definitions of social justice. All the wars and ‑‑ of the past two or three centuries are an outcome of too many ‑‑ conversations on what is equity and what is just and what is social. So, it may be that robots will be able to fix the thing ‑‑ between robots, and then we will enjoy the ultimate and best (?) though it might create a very boring society. You will experience it; I'm too old. But I think the issue is this: is artificial intelligence able to appreciate the different conflicting views of what is justice, equity and social?
>> Thank you. Next, please? Go ahead.
>> I would like to thank all the panelists, especially the ones who liked my blog.
>> I want to speak, though, to the gentleman on the left, since the first speaker mentioned ‑‑ I'm terrible with catching names. There is this problem that you said: AI couldn't ‑‑ my comment is the same, actually. It can't optimize for social justice because that's not just a number. But this is a problem that we have, or ‑‑ that wasn't you, that was the chair, I'm sorry. But that is a problem for us in general: we need to define human ‑‑ better and economics better, and, yes, as several people said, this is a problem for us in general. I think I will stop there.
>> I'm from the University of Geneva ‑‑ education, and what kind of systems they are, how to apply them and what it means to apply them. Even when the data is unbiased, we can't be sure that a system to which we don't give any rules will produce the correct rules; it just doesn't work. So if we want it to have certain rules, then we have to give them to the system. And this is, of course, a difficult task, because we have to first define those rules, and if we can't define the rules, we can't expect the AI system to define them for us in the correct manner. It just doesn't work.
MODERATOR: Thank you, sir.
>> Hello. I'm from Brazil; this is my first IGF. I have a question about the labor force and labor markets. I come from ‑‑ and much of the labor force in the country is unqualified, and, as was mentioned here so many times, (?) unqualified positions will definitely be the first ones to be taken by AI. As a consequence, those workers have to go elsewhere; they are the first ones to be affected. As everybody knows, those workers ‑‑ from those countries ‑‑ will not be the (?) between the global north and the global south.
MODERATOR: Thank you.
>> Just a quick provocative answer to the first gentleman: there is an argument about whether we are in a democracy. If you look at writers like Aristotle, leaders were chosen by random lot, not ‑‑ he called what we have the rule of the majority. Just a thought; maybe that is part of the answer.
>> No, you shouldn't ‑‑
>> I think the larger point I had about AI, and you pointed to that with the last point you made, is that it's fine when it's about production, about effectiveness, efficiency and productivity. When it's about decision making about human behavior, then we shouldn't use AI at all, I would argue, because the only thing that AI can do is examine human behavior and simply draw conclusions from that, and not from a simulated hundred planets in parallel universes where you design utopias ‑‑ it cannot tell us what is good or what is bad. It just can't.
>> Thank you, from Germany. I wanted to refer to a previous session, two days ago, on artificial intelligence and inclusion. As you said, there are only social solutions to social problems. I don't necessarily agree with that, because we have to (?) also the categorization that we have technical solutions to social problems and social solutions to technical problems. And I try to extend that: we also have technical solutions as well as social solutions to social problems. And I want to refer to the other term: I would not necessarily say that artificial intelligence is the opposite of natural intelligence, because that immediately implies a 0‑and‑1 relationship between artificial and natural ‑‑ it is much more physiologic than only having the two sides of a coin.
>> That comment was meant to refer to the origins of the word artificial intelligence. I totally agree that in practice, in how it's going to be used going forward, the powerful implications of artificial intelligence systems will involve humans. It's humans who make use of it, and they become more powerful or more effective at whatever the task is.
>> Just a minute, you wanted to say something?
>> I'm good.
>> Briefly, just a point about social problems and technical solutions. I see your point. But the thing is that we should not look at them separately. I think they gave an answer to that quite ‑‑ in the sense that we do not need to look at these as separate things. This is a social construct; we need to look at participation and agency within the system, and that necessarily involves the use of technology, but within our agency as people in the system to change the system for the betterment of all.
>> Okay. I want to react to your comment, also, that artificial intelligence cannot address social justice because, unlike profit, it's not quantifiable. But we can find metrics of social justice too, like indicators; we can work through metrics. For instance, ‑‑ is not technological, it's not a number, but we address profit through indicators, and a simple indicator for social justice would be the gap between rich and poor. The problem is that we want to quantify everything; that's another problem. But I think social justice can be calculated.
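One standard, quantifiable indicator of the rich-poor gap mentioned here is the Gini coefficient of an income distribution, where 0 means perfect equality and values near 1 mean one person holds everything. A minimal sketch (the example incomes are invented):

```python
def gini(incomes):
    """Gini coefficient of a list of incomes: 0 = equal, approaching 1 = maximal inequality."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Closed form over the sorted values:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with ranks i starting at 1
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0  (perfect equality)
print(gini([0, 0, 0, 100]))    # 0.75 (one person holds everything, n = 4)
```

The point is not that one number captures social justice, but that such indicators make it possible to measure movement in a chosen direction, exactly as profit indicators do.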
>> That's the main part. I think the scientists and technologists are trying ‑‑ it's not only what is measurable which is important, but what is important for its own sake, whether it's measurable or not, and we need to figure out a trade‑off. We cannot say ‑‑ only things which are measurable; the best things in life are not measurable, but we can still use (?) these are efforts to figure out some legibility about ‑‑ even if we lose efficiency because the ‑‑ you want to say something?
>> What I was trying to say is this: the rise of the crypto currencies, which I'm sure nobody has missed. If you go back to 2008 and 2009, the group of people ‑‑ (?) but when we started doing it, it emphasized this increasing trust in ‑‑ and technology over human‑created systems. The financial institutions, the governments, the policy makers, the economists who told us about the rational human being, which has also been destroyed as a theory, had failed everybody. And the bankers had ‑‑ and everybody knows what happened in the subprime mortgage cases; they failed us. There is an increased tendency in large swaths of the human population to say the tech is predictable and human judgment is greedy and unpredictable. More and more trust goes there; that's why it's easy for governments to sell the ‑‑ off. Everything else is subservient to innovation, and it leads to far more predictability in the behavior of their own populations. And ‑‑ what is democracy? Because if everything is going to be fine‑tuned and changed, everything else is up for grabs, and human society is now changing. So that's all I wanted to say: the larger phenomenon should not be ignored, because that's how many people are currently thinking.
>> Thank you. My comment is somehow related to this, and also follows up on the point about innovation and the financial crisis. I was wondering, and this is a question for all the panelists, if you had some thoughts on how the current competitive pressures influence the development of AI: the pressures on the companies, how that influences what types of products they come up with, and the competition between the different companies that all want to be first. And in our reactions, can we come up with solutions? What can be done, whether thinking about different business models or different approaches that go a little bit counter to that pressure and that competition, something that is maybe a little more open, along the lines of previous initiatives around creative commons and copyleft approaches? If you had some thoughts and ideas on all of that.
>> Okay. I will quickly give each of you on the panel a minute and a half, and at the end of it, if we still have time ‑‑
>> Thank you. It's been fascinating for me listening to this, because everything that I have been reading warns that AI reflects the biases that we see in our society. As a woman of color, if you are interacting with a system which has been trained by somebody who is not a woman of color, you can expect that it will pick up some biases. This idea that governments feel that technology is neutral: you are coming out and saying they are worried about human systems and have more faith in tech systems; this is what I heard you say.
>> I actually said the citizenry is now showing more faith in ‑‑ tech, not just the government.
>> Well, I don't know about that. Maybe I read the wrong stuff, because that's not what I'm feeling at all. I mean, if you look at the discussion, you are looking at major platform companies that ‑‑ as we call them, and citizenry is becoming more and more nervous about the control.
>> Oh, I'm with you on that.
>> If you talk about algorithms and their control over data and over your lives, you are talking about AI as well. You know, so, I think that, I was getting some contradictory signals from what was being said on the panel. But essentially we see the systems that we have created will reflect who we are. And we haven't gotten to the stage yet where we can insulate them from that.
>> Thank you. I just want to add one point, because that's the political economy, and we try to bring the political economy approach to it. It's not just biases; there are also interests in AI, and interests are different from biases. We like to talk only about biases, as if everybody had the right information; but here, interests are integrated by the controllers of technology.
>> Let me just make one more point; I'm sorry, I forgot. The point about innovation. What I have been reading is that innovation is dangled as the reward for no regulation in this space. I see that as being a company narrative, a big business narrative, to leave them in peace. I don't see that necessarily as a developing country narrative, which is what I was getting from you.
>> That's something on which I will actually disagree; on the other things I am with you. I think you are right that for the longest time it was dangled as the innovative thing. But what has happened now is that the government narrative has bought into it, through public/private partnerships. I will tell you about my own government, where I come from, India, where data‑based innovation is now the biggest thing which most governments want to bank upon. Look at a country like China: a very different economy, a very different political structure, but everybody now wants a share in what data‑based innovation, riding on the wave of data, is able to offer their citizens. The price may be democracy, and the price may be other values which we hold very dear after all the struggle. Innovation is not just being used by companies, it's very interesting, but yes, they would also like to move fast and break things ‑‑ India has a biometric database which we are ready to export to everybody else because we think we are very innovative. And ‑‑ data protection, privacy, et cetera, becomes somewhat of a difficult or inconvenient issue. So ‑‑
>> It's happening.
>> Before I go to the questions: those on the panel who have been silent, if you want to make any points, you can do so. The person there first, yeah, and then the last one.
>> I just want to make a couple of comments on points made by the chair, in a sense summing up what the panel was saying as it first finished speaking, which is about the concentration of power. I think that's an extremely important issue to have on the table. You mentioned the concentration of power, and historically, in a number of episodes, in terms of control of land and before that, the backlash has always been towards demands for accountability and towards some kind of redistribution; perhaps articulated through access to resources, but in a sense it is actually about redistribution of power itself, manifested through redistribution of resources in some way. That's important.
Some of these issues take the analysis of what happens in terms of concentration out of the equation and make it into something apparently self‑sufficient, such as efficiency, innovation and technological achievement. These concepts are fundamentally ideological concepts as well. We can want to do things better, but how we define doing it better, how we approach that, is a human value being put onto it, with power embedded into it. The conversations around AI, if they do not embed this kind of fundamental analysis of the political economy in which AI is operating, and of reproduction and change in terms of decision making through AI systems; if there isn't accountability embedded into it, then we need to figure out how to get accountability into it amidst all of the positive elements that there are in AI.
>> As I have to go to another session, I'm going to be very quick. Picking up on your point, one thing I wanted to say was: what expectations do we have of AI? Do we want it to accurately reflect reality? Or are we imposing an aspirational ‑‑ because reality is screwy and messed up and biased; do we expect this to give us something better? And in doing so, how do we do that? Are we going to run up against all the problems we have had with attempts to do affirmative action? I think that's a very fruitful exercise for this community. What does that look like? Who are the actors who get to shape the discourse?
We have seen it with Me Too and other campaigns where we surface discrimination: there's always this backlash. What is a good way for AI to actually help with that? And, as Preetam was talking about, there are techniques to actually normalize for that and make the data more representative. So I think there are things. A lot of our discourse in this space focuses very heavily on forms of AI that are data intensive.
But there are techniques like reinforcement learning that function in a vacuum of data, where the system runs against itself, like AlphaGo Zero, which didn't learn from exposure; it learned by playing against itself. Does something like that, which is not so reliant on data, help us actually escape some of these questions of bias? Of course it will inevitably throw up new questions, but it could be a solution to some of the data questions. Just to end, we talk about transparency as one of the big goals here, but should we actually be looking for proxies where a system is so screwed up, or algorithmic explainability isn't possible because of the system ‑‑ which, God help you if you know how it actually works. Are there other proxies, proxies for fairness, other things we can look at? Are there ways we can audit, other accountability measures that can work alongside transparency where transparency falls short, not imposing ‑‑ on the users or the victims but actually putting pressures on the players at the design level? I wish we could talk about this for hours, but I need to ‑‑
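The self-play dynamic mentioned above can be illustrated far short of AlphaGo-style systems with a classical scheme, fictitious play: an agent repeatedly best-responds to the empirical record of its own past play, with no external data at all. This toy sketch (not how AlphaGo Zero works, just the same "learn by playing yourself" idea) runs it on rock-paper-scissors, where the move frequencies drift toward the uniform equilibrium of one third each.

```python
# Fictitious self-play in rock-paper-scissors: the agent best-responds to the
# empirical frequencies of its own past moves. No external data is used; the
# move frequencies approach the uniform equilibrium (1/3 each).
MOVES = ["rock", "paper", "scissors"]

def best_response(counts):
    rock, paper, scissors = counts
    # Expected payoff of each move against the empirical distribution of past play:
    payoffs = [scissors - paper,   # rock beats scissors, loses to paper
               rock - scissors,    # paper beats rock, loses to scissors
               paper - rock]       # scissors beats paper, loses to rock
    return payoffs.index(max(payoffs))

def self_play(rounds):
    counts = [1, 1, 1]  # uniform prior over past play
    for _ in range(rounds):
        counts[best_response(counts)] += 1
    total = sum(counts)
    return [c / total for c in counts]

freqs = self_play(30000)
print(freqs)  # each frequency close to 1/3
```

The run is fully deterministic, and the bias-relevant observation is that whatever the dynamics converge to comes from the rules of the game, not from a training dataset; self-play trades data bias for whatever is baked into the environment and reward.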
>> She needs to go to another workshop. She will leave us now, and the person at the end and then you.
>> Johnny, University of Cambridge. I like the point about the poverty of vocabulary. I would like to add, for whatever it's worth, that historians of technology are trying to chart some of the history of how we arrived at this point. You can Google the maintainers, or go to maintainers.org; it tries to offer a counterbalance to move fast and break things by saying move slow and fix things, offering the maintaining of technology, and the influence that that has over society, as a counterbalance to the idea that innovation drives progress, or that progress is all that important.
Also a quick practical thing: from my view, the on‑ramp for data ‑‑ for civil society is still pretty ‑‑ these would actually be good problems to have. I don't see much civil society capacity to capture data and ‑‑ I want to echo a point made by a woman over here about the need to aggregate around data policy. We are in the United Nations; this is a place where people meet to discuss best practices. Is there an equivalent for the ‑‑ data that civil society can gain by working together? This was a wonderful panel. Thank you.
MODERATOR: Thank you. Yes, sir?
>> Thank you. I just wanted to go into the distinction between AI and ‑‑ They are not the same; they are opposites. AI: don't trust it, there are no rules. It is just learning by example and making up its own rules. It ‑‑ your examples, and everything between those examples is ‑‑ which might be good or might not be good. (?), on the other hand, is simple, clear rules; everything is transparent and you can trust it ‑‑ (?) We need to look at what kind of technology we are dealing with and see where we can employ it and where we can't. Thank you.
>> I completely agree. Basically it's not technology; if somebody says there are technical solutions, I think ‑‑ is social, it cannot be technical, but we can get into definitions later on. I think that technologies contribute to making new ‑‑ social systems. You are saying there are complex technologies and less complex technologies, and by combining them we can make financial systems, government systems, and other kinds of data systems which carry forward our social objectives; depending on those, we can combine different kinds of technologies. I think the social is always ahead of technologies, and I can't see anything being technically ahead of the social, but ‑‑ different ‑‑ Are there any other inputs? Otherwise we will go back to the panel. Anybody want to make the last two interventions? Then we can go back to the panelists.
>> Thank you very much. I just have a question: who is going to push AI, or artificial consciousness, the developers or the companies, towards more transparency (?). So if ‑‑ has not ‑‑ the definition of ‑‑ consciousness, (?) the Sophia robot, or making decisions, participating.
>> I would first give a chance to those speakers who did not ‑‑ during the interventions about a minute each, and then Robert and then Carlos. Please. And then Mishi and myself, if there is any time left.
>> Thanks. So, hearing the conversation, some issues have come up so many times that, at least according to me, there is a need to de‑mystify. For example, I heard ‑‑ largely, too, that AI is a black box, that we don't know what it's doing, and that it's very difficult to kind of ‑‑ So, technology has ‑‑ that's true for ‑‑ and deep learning algorithms. But there are networks ‑‑ which by definition are (?) when you train the algorithm, you can actually see it; you can print it out as a ‑‑ (?) There are different classes of algorithms, but basically it's not that difficult. And those kinds of algorithms are used for natural language processing and speech recognition, and even that can be traced. So let's not put everything in one basket and say that we don't know what's happening.
Let's understand what the technique and the technology are and try to find a solution for each. Otherwise, as worriers, we are looking for a doomsday scenario. I wanted to close by quoting ‑‑ something I heard from a conference in China, where someone said something along the lines of: he is not worried about machines thinking like humans; he is worried about humans thinking like machines. I will just leave it out there.
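The contrast drawn here, opaque deep networks versus models you can print out as rules, can be made concrete with a decision tree, one of the classes of model that stay inspectable after training. A decision tree is just nested if/else rules over features, so the whole model can be printed and audited. The tiny loan-screening tree below is hand-written purely for illustration (its features and thresholds are invented, not trained on real data):

```python
# A decision tree is just nested if/else rules; once trained (or written down),
# the entire model can be printed as human-readable rules, unlike the weight
# matrices of a deep network. This toy loan-screening tree is invented.
TREE = {
    "feature": "income",
    "threshold": 50,
    "low": {"label": "refer to human review"},
    "high": {
        "feature": "debt",
        "threshold": 20,
        "low": {"label": "approve"},
        "high": {"label": "refer to human review"},
    },
}

def predict(node, example):
    """Walk the tree for one example and return its leaf label."""
    if "label" in node:
        return node["label"]
    branch = "low" if example[node["feature"]] <= node["threshold"] else "high"
    return predict(node[branch], example)

def print_rules(node, indent=0):
    """Print the whole model as indented, auditable if/else rules."""
    pad = "  " * indent
    if "label" in node:
        print(pad + "-> " + node["label"])
        return
    print(f"{pad}if {node['feature']} <= {node['threshold']}:")
    print_rules(node["low"], indent + 1)
    print(f"{pad}else:")
    print_rules(node["high"], indent + 1)

print_rules(TREE)  # the entire model, printed as rules
print(predict(TREE, {"income": 80, "debt": 10}))  # approve
```

Libraries with trained trees offer the same kind of rule export; the design point is that every decision can be traced to an explicit branch, which is exactly what the black-box framing overlooks for this class of algorithms.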
>> Yes, briefly: yes, there is a strong urge to out‑innovate and outpace the rest, and the inefficiencies of democracy, the attempts to slow down processes, are taken into account so they become various ‑‑ and it suits capital, and power in general, to promote this kind of growth without caring about the distribution of the growth.
However, we are not going to stop that just by mentioning it, and it's important then to be involved in those processes ‑‑ Mishi mentioned that. Participation in design and ‑‑ accountability and ‑‑ is necessary; it is all within the larger framework of society trying to make better conditions for itself. It's good to participate in that setting because it sets the rules of the game for the future. We should be participants in those processes without leaving aside all the other questions that appeared during the panel: about democracy, about being participants, about highlighting some values and bringing them into these conversations, and also about contesting, challenging and correcting the results of the use of technology when it might not go towards what we consider social justice. Thank you.
>> I would like to highlight one point, which is the difference between social justice and social justice indicators, each of which is an important aspect. And I believe it also addresses, at least to some extent, the question asked at the very end. I'm going to quote from human rights law, Article 25(a) of the ICCPR: every citizen shall have the right and the opportunity to take part in the conduct of public affairs. Defining what an AI algorithm should do is becoming, at least for the big applications, a matter of public affairs. We have the right, and we must demand our human right, to take part in public affairs. We need to take a seat at the table; it's not just for the big techies and big companies, it's for us.
>> Thank you.
>> Thank you, Norbert. We still have time, maybe half a minute.
>> I just want to say that I agreed with Preetam that the disaggregation, the triage and the understanding of terms are important. But I want to concentrate on one simple thing. I think in the future, it's not just lower‑skilled labor which will see at least a little bit of change in what the new jobs will bring, but also work at the very expert level, which requires a kind of precision. And there are a lot of other kinds of benefits which are coming up.
You all saw, perhaps, the new Kepler planet we discovered with the help of a Google neural network. These are the kinds of benefits: for the first time after the ‑‑ revolution, we can predict which kinds of ‑‑ (?) and production happen, and make work safer. And also education: just making education effective, which has been an endeavor of human society and directly leads to all kinds of benefits later on; to make ‑‑ more effective. X is already doing it and other companies are deploying it, but we are also trying to do it for ourselves and our schools, which is a benefit coming out of it.
I cannot emphasize enough that more participation, or at least even an attempt to understand all the issues, makes it a little bit better than somebody telling you, in a top‑down approach, that this is how the tech works. I think the last report of ‑‑ said 52 percent of the websites on the internet are in English, but only 25 percent of the population of the world speaks English. That also tells you the disparity between what is being taught to us and what we want to learn. Let it not come as an afterthought ‑‑ that this is the kind of fallacy you would like ‑‑ but let us get interested in improving our own education.
>> Thank you, Mishi; thank you, everybody. We may not have been able to answer some of the questions, but the questions were important. Thank you very much for participating. I just want to add to what Mishi said: we do not want equity and social justice to be an addition after the new digital systems around artificial intelligence have been set up. We want these to be part of the design, and we are formative users of the artificial intelligence society, and therefore we want to carry on this dialogue about whether we want to code social ‑‑ (?) and whether we have that kind of trade‑off.
Those who are interested in ‑‑ can go to the Just Net Coalition website or the other website; there is a general ‑‑ (?) there, and you can write to us. We will carry on as a group, coming back to the next ‑‑ in some kind of grouping which will carry on the effort to see that equity and social justice are part of the new systems, and not added to them later on. Thank you, everybody, and thank you so much to the panelists.