IGF 2019 – Day 2 – Raum II – WS #282 Data Governance by AI: Putting Human Rights at Risk? - RAW

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Good afternoon.  Guten Tag.  Willkommen. 

[ Speaking non‑English language ]

Good afternoon, ladies and gentlemen.  Welcome to workshop 282, entitled Data Governance by Artificial Intelligence: Putting Human Rights at Risk?  We have an extraordinarily high-level panel whom I'm very, very thrilled to be able to introduce to you shortly.  And we also have people who, we believe, are here by remote participation.  So please bear with us while we sort out who is here online and who is not.  My name is Marianne Franklin, and I'm representing the Internet Rights and Principles Coalition.  We're very happy to have co-organized this panel with Amnesty International Germany and to be able to address to the floor, to the IGF, and for the record, the pressing questions around governance, artificial intelligence, and human rights.  So the guiding question for today goes like this: what are the regulatory, technical and ethical considerations for what we could call human rights-respecting artificial intelligence by design?

        So before I introduce the panelists, I need to explain this format a little.  It's based on a very popular television show in the United Kingdom, still a member of the EU last time I heard.  It's called "Question Time," and it's modelled on Prime Minister's Questions, where elected representatives in the House of Commons, called Westminster for short in the UK, ask questions.  So it is the same today.  But this is a TV program, so we have our questions ready and scripted.  We have people who are going to ask those questions; they have sent their questions in.  These questions will be short.  And hopefully there will be enough time for you in the audience to put questions to our panel as well.

        I have also asked our panel to make some opening statements.  Basically, it's question time.  The panelists do not know the questions that they will get, but I'm sure they will all cope.  They are not under any obligation to answer every question put to them, just so you know.  And that's the format.  I cannot possibly try to be Fiona Bruce, who currently presents Question Time, for those of you who know her, but I will do my best.

        So I'd like now to introduce our panelists.  To my right, I have Renata Avila, who is executive director of the Smart Citizen Foundation.  Thrilled to have you here, Renata, who has to leave at 5:30.  So just so you know, Renata will be leaving half an hour before we finish.  Thank you very much, Renata.  And to her left is Markus Beeko.  Welcome, Markus.  And we have a new member of our panel, Jai Vipra from India.  I'm Marianne, as I just said, and I believe Alex Warden from Google is on her way.  Hopefully she will arrive.  Is there any Google representative in the audience?  Okay.  The tech sector was invited, so let's hope that a Google representative can get here.  And to my immediate left is Paul Nemitz, principal adviser for DG Justice at the European Commission and a member of the German Data Ethics Commission as well.  Thank you so much, Paul, for being with us.

        So let's get started.  In the first part, we'll ask our panelists to do two things, just to get us settled and thinking in the right space.  They have been given the very simple job of defining, in so many words, what they mean by artificial intelligence.  Then they have an even simpler task: to list at least one, and no more than three, of the most pressing issues they consider to be at stake at the intersection of artificial intelligence research and development and its online deployment, particularly in light of human rights law and norms.

        So we'll let them have the floor first.  And if you're running out of definitions, I have two official ones here to add to the record in case we need any more.  But let's start first of all to my left, Mr. Nemitz.  Thank you very much. 

>> PAUL NEMITZ: Thank you. 

>> MODERATOR: Sorry.  First of all, the definitions and then pressing issues, just so you know. 

>> PAUL NEMITZ: Yes.  Thank you very much.  And it's a great format.  So on the definition, there are, of course, hundreds of definitions.  In the EU we have no legally binding definition, but we have a definition from the High-Level Expert Group on AI, which was an independent advisory group of the Commission, which basically said AI systems are systems which adapt to their environment in an effort to fulfill certain predefined goals and act on the feedback they get from the environment.  It's a somewhat complicated definition, but I think it serves the purpose; it's wide enough. 
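
That definition, a system with predefined goals that acts and adapts on feedback from its environment, can be sketched in a few lines of code. The following toy loop is purely illustrative; the numeric environment, goal, and learning rate are invented for the example and are not part of the High-Level Expert Group's text:

```python
import random

def adapt_to_environment(goal: float, steps: int = 200, lr: float = 0.1) -> float:
    """Toy system in the spirit of the definition: it holds a predefined
    goal, acts, receives noisy feedback from its environment, and adapts."""
    random.seed(0)  # deterministic for illustration
    estimate = 0.0
    for _ in range(steps):
        # The environment returns feedback: how far the action fell short.
        feedback = (goal - estimate) + random.gauss(0, 0.1)
        # The system adapts its behavior in the direction the feedback indicates.
        estimate += lr * feedback
    return estimate

# After enough feedback cycles the system's behavior approaches its goal.
print(round(adapt_to_environment(goal=5.0), 1))
```

The point of the sketch is only that "goal plus feedback plus adaptation" is wide enough to cover anything from a thermostat to a learning system, which is exactly the breadth the definition aims for.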

        The three ‑‑

>> MODERATOR: Oh, sorry. 

>> PAUL NEMITZ: The three issues? 

>> MODERATOR: No, we'll go to definitions first, okay, Paul, so that the definitions are all clustered together.  Thank you so much. 

>> Is this working? 

>> JAI VIPRA: I am from a civil society organization called IT for Change.  We prefer "digital intelligence" over "artificial intelligence," because the meaning of what is artificial and what is intelligent changes over time as human expectations of technology change.  And so we think that digital intelligence underscores what is important about the technology we are seeing these days, which is the digital infrastructure and the use of data. 

>> MODERATOR: Thanks so much.  And onwards, Markus. 

>> MARKUS BEEKO: Thank you.  Can you hear me?  It's great to be in question time, as I have many questions and I'm looking forward to discussing them.  Yes, artificial intelligence.  Rather than trying to add another definition, I'd like to share with you what we at Amnesty International currently focus on.  It's artificial intelligence in a rather narrow sense, focusing particularly on algorithm-based decision-making and its applications in rather concrete and defined areas.  We look at this especially by the degree of human interaction in the decision-making, differentiating between algorithm-based decisions, which are human decisions supported by algorithms; algorithm-driven decisions, which are largely shaped by the outputs of algorithmic systems; and algorithm-determined decisions, which trigger consequences without human intervention. 

        And then secondly the potential human rights impact that artificial intelligence has. 
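
That three-way distinction by degree of human involvement can be sketched as a simple classification. The enum labels and the two yes/no inputs below are an editorial illustration, not Amnesty International's actual criteria:

```python
from enum import Enum

class AlgorithmicDecision(Enum):
    """Illustrative labels for the three-way distinction by degree of
    human involvement (names are paraphrases, not official terms)."""
    BASED = "human decision, algorithmically supported"
    DRIVEN = "largely shaped by algorithmic output, human in the loop"
    DETERMINED = "consequences triggered with no human intervention"

def classify(human_decides: bool, human_reviews: bool) -> AlgorithmicDecision:
    # Crude sketch: the less human involvement, the stricter the
    # oversight such a system would presumably warrant.
    if human_decides:
        return AlgorithmicDecision.BASED
    if human_reviews:
        return AlgorithmicDecision.DRIVEN
    return AlgorithmicDecision.DETERMINED

print(classify(human_decides=False, human_reviews=False).name)  # → DETERMINED
```

A graded taxonomy like this is what later lets regulation attach different oversight levels to different degrees of automation, as the discussion of the German Data Ethics Commission's ranking approach picks up below.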

>> MODERATOR: Thanks so much, Markus.  And moving on, Renata. 

>> RENATA AVILA: Yeah, I didn't find a good definition that makes me fully happy, and I share the vision of IT for Change; I think that's a good way to frame it.  But I would like to refer -- I don't know if I should jump immediately to the three pressing issues that, as a human rights lawyer, make me very, very, very worried.  The first is augmented inequalities: the ability of AI deployment and decisions at this scale to harm, horizontally, a large number of vulnerable people -- and not only to crystallize the inequalities that we have in the world today, but to make them more severe, more invisible, and harder to hold anyone accountable for. 

        The second is the democratic deficit in decision-making and accountability.  We spent the last decade trying to open governments.  We started with access to information laws, and then, with the open data movement, we tried to open and see the machine from the inside.  It has been harder with companies because of trade secret laws and so on.  But now we are at the moment where not even the decision-makers will be able to explain a decision properly and be held accountable, because they will blame the machine.  And if they cannot understand the machine, and the machine cannot be opened, then we will have a big democratic deficit as citizens, because we will not be able to point out what's wrong in the system, only its defects. 

        And the third is automated manipulation.  One of the principles of Article 19 is to access information without interference.  And many of the experiments done in the artificial intelligence field are on news distribution and curation.  So you have a machine sorting your news and your access to information and selecting what is best for you.  I think that will profoundly modify Article 19, and it will deeply affect our freedom, basically.  If you can manipulate at that scale, it is a big problem for democracy and for human rights. 

>> MODERATOR: Thank you very much, Renata.  So, going in the other direction: Markus, the pressing issues -- you can have only one if you like, but no more than three to state today.  Thank you. 

>> MARKUS BEEKO: Yeah, thank you.  Rather than looking at pressing issues, please allow me to look at three actors I believe are most relevant, and let me start with regulation by the state and the international community.  Amnesty International is extremely concerned about the widespread adoption of machine learning and automated decision-making systems without adequate consideration for their impact, which is a critical threat to human rights.  And we believe there is a need to ensure that artificial intelligence is truly human-centered.  When I say human-centered, I am not looking at, for instance, those using and applying artificial intelligence, and also not looking at individuals as users, but really focusing on individuals as rights holders and on the effects on their fundamental rights. 

        So Amnesty International believes that we need to apply existing human rights protections to the development and use of these new machine learning technologies.  And this should happen on the basis of new international law to prohibit the development and use of certain artificial intelligence technologies, such as autonomous weapons systems, and also to define the permissible uses of artificial intelligence.  And just going back to what the German Data Ethics Commission, of which Paul is also a member, looked at: we think that one important criterion for this is to have a proper assessment of the infringements and interventions artificial intelligence potentially makes on basic fundamental rights, and to develop a system, as they have in their report, to rank the impact an artificial intelligence system may have, and accordingly to define different levels of regulation and oversight. 

>> MODERATOR: Thanks very much, Markus.  Jai. 

>> JAI VIPRA: I think when we're talking about artificial intelligence and human rights, one of the main issues that comes up is the question of development, specifically how countries, or sets of people within countries, pursue development, which is central to the human right to live a life with dignity.  How do you program dignity in a situation where we've seen that artificial intelligence has had, until now, a monopolizing tendency?  How do you provide choice to people within countries?  How do you tackle the issue that most development of artificial intelligence technology until now has happened in private companies, where we cannot control the uses to which it is put? 

        Then there are the climate implications of using artificial intelligence algorithms and training datasets, for which there is new evidence showing extremely adverse effects on energy use.  The second most important issue is that of data, specifically who is able to control the uses to which data is put through artificial intelligence.  We might want to think about cases where AI should not be used, specifically where human rights would be affected very badly, and you can only do that through wielding control over the data that AI uses.  And we might also want to put AI to good use, to solve some human problems and to ensure human rights.  To be able to do that in a democratic way, you would want democratic control over the data that the people themselves generate. 

        And so these, I think, are the two main issues. 

>> MODERATOR: Thanks so much.  Paul. 

>> PAUL NEMITZ: Yes.  So the question was, what are the main research challenges?  I would say the first research challenge is: how can we build democracy, the rule of law, fundamental rights, and sustainability into systems of AI?  This thought is, of course, inspired by the legal obligation in Europe to build privacy into systems -- data protection by default and by design -- and I think we have to extend this principle, which so far exists only in the GDPR for data protection and privacy, to the broader principles of free societies.  And, you know, it's great to state those principles; the question is how to operationalize them in technology.  I think that's the first important research task. 

        The second: of the challenges identified in the around 70 ethics codes, including the AI High-Level Expert Group's ethics guidelines for the European Union on trustworthy AI, which require a law -- a law which has democratic legitimacy and can be enforced also against those who don't want to play ball?  And which challenges, on the other hand, are of such a nature that with good conscience we can leave them to ethics codes instead of regulation?  We are now at this critical juncture.  The incoming president of the European Commission, Mrs. von der Leyen, has said very clearly that we will do a law on AI -- on trustworthy AI, AI which works in line with our values.  But the question is what to put into it from the challenges identified in previous ethics discussions, and I think that will be both a political issue and still an issue of drawing the line: what is so essential that it has to be in the law? 

        And the third question is: how are we going to assess the impacts of AI systems in the future?  How are we going to develop control technologies for these very complex systems?  I think we cannot accept systems which are not able to explain themselves -- and not only because of the rule of law, where the government must always give reasons.  There can be no programs in government service which don't explain themselves, that's for sure; but also, for safety and cybersecurity, we should never use programs which we don't understand.  So it's about understanding the programs.  On the one hand, analyzing the code -- of course not just by humans looking at the code, but by having control programs for AI; we need to develop control technology which looks at the programs themselves.  On the other hand -- and this is a new school of research here in Berlin at the institute, with a professor just coming from MIT in Boston (Inaudible) who is developing a new methodology -- we also need to look at these programs from the outside, the way a psychoanalyst looks at human behavior from the outside, asks questions, and observes.  We need the same type of new methodology and research to reverse-engineer and understand: programs which have this or that code, how do they actually develop?  How do they learn?  How do they mutate?  This whole issue of controlling these highly complex systems, making them transparent and controllable, is, I think, a key research challenge today, in order to ensure that the values according to which we want to live are respected by these machines. 
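
A minimal sketch of that "from the outside" idea: treat the system as a black box, query it, and observe where its behavior changes, without ever reading its code. The hidden threshold model and the probing scheme here are invented for illustration and are not the Berlin methodology referred to above:

```python
def black_box_probe(model, inputs, perturb):
    """Observe a model purely from the outside: query it on each input
    and on a perturbed variant, and record where its behavior flips."""
    changes = []
    for x in inputs:
        if model(x) != model(perturb(x)):
            changes.append(x)
    return changes

# Hypothetical opaque model: approves scores above a hidden threshold.
def hidden_model(score):
    return "approve" if score >= 50 else "deny"

# Probing with a small perturbation reveals the decision boundary.
boundary = black_box_probe(hidden_model, range(0, 100), lambda s: s + 1)
print(boundary)  # → [49]: the one input where a +1 nudge flips the decision
```

Real systems are vastly more complex than a single threshold, but the design choice is the same: behavioral probing can localize where a system's decisions change even when the code itself is unreadable or inaccessible.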

>> MODERATOR: Thank you very much.  So many more questions.  At this point I just need to do a little technical check.  I'd like to thank our remote participation moderator, Sebastian Schweda, whose employer, a law firm, has kindly let him come all the way to Berlin today.  So, Sebastian, I just want to check: are the people on remote participation ready to ask questions?  Is anybody listed there yet? 

>> SEBASTIAN SCHWEDA: There's one participant who just logged in.  I am not sure if she has a question.  I will make sure to collect any questions. 

>> MODERATOR: So they'll be listening.  We'll do two questions now, I think, just so we can get the questions on the table.  And I will request the panelists to answer as briefly as is feasible, mainly because Renata has to leave at 5:30, so I want her to have at least a couple of questions under her belt before she goes. 

        So we have the first question.  The other thing is, I know there are some remote participants from London -- young students -- who have sent questions in.  So if they have not been able to get the bandwidth, I'll ask the questions they sent me on their behalf, if that is acceptable.  But let's have the first question from Steven.  I believe Steven is in the room.  There are standing mics, so just give us a moment.  I'll take that question from Steven and then one from a remote participant, or I'll speak for them.  So, Steven. 

>> Hi, thank you.  So my question is should the use of AI for certain purposes be banned or at least limited due to the risk these processes pose for human rights? 

>> MODERATOR: Okay.  And is Chan online, Sebastian? 

>> SEBASTIAN SCHWEDA: No.  He's not. 

>> MODERATOR: Okay.  I'll open the floor.  Minda, that's okay -- plenty of time.  I have a question from Chan, and I think it links very nicely here.  She wants to know this: since data is essential to machine learning, how do we measure and mitigate the political, gender, and racial bias in the data?  So, panelists, who wants to go first?  Answer both of them?  Neither of them?  Or one of them?  Paul? 

>> PAUL NEMITZ: Yeah. 

>> MODERATOR: Just briefly, please.  Thank you. 

>> PAUL NEMITZ: So, on bans, we have a long history of banning this or that technology.  We have, you know, chemicals which are very effective against certain insects but have been banned because of their side effects.  We banned cars without seat belts, and so on.  So I think there will certainly be technology which will also be banned in the future -- atomic power is one example here in Germany, which is in the course of being banned -- and for AI, one cannot exclude this at all.  There is also the question -- this was discussed in the AI High-Level Expert Group -- of whether it should actually be allowed to do research aimed at giving AI a soul, at developing programs which have human emotions, human feelings.  There were some in the group who said no, this should actually be a red line: we should not develop machines which have human emotions, which feel pain, for example.  Interesting reasoning: because this would be an innovation obstacle for technology.  Because the moment the technology feels like a human feels, you cannot actually treat it as an object anymore. 

>> MODERATOR: Okay.  Thank you.  I think Renata, you have a response. 

>> RENATA AVILA: Yeah.  I agree that we need to regulate, and also -- I think that the global trade system and the global rules on commerce are not prepared to deal with this regulation, and this will be a big battle.  And I'm very afraid of that battle, because I feel that our field of privacy and human rights experts and activists and campaigners is completely disconnected from the global trade arena.  There's an upcoming ministerial next year in June, and all these issues will be discussed.  This is the moment for a preemptive fight, because if global rules on commerce are crystallized in a global agreement on these topics, it is really hard to undo, and it will be really hard to regulate.  And in that line, it is not only about banning; it is like handling hazardous materials or risky business: we have a duty to respect humanity, basically, and a key word here is transparency.  We have to reject closed terms -- black boxes, but also closed rooms.  We need to see the effects, and we need to evaluate.  It means changing the culture, because usually when you have something in the market, it goes kind of by itself.  With this kind of technology, we need to supervise at different stages and measure different effects.  And I share the concern about whether, globally (Inaudible), we will be able to deploy those systems of evaluation and measurement. 

>> MODERATOR: Okay.  So, of course, it isn't just about banning but who gets to ban what as well.  Markus and Jai, do you have any comments to this particular ‑‑ we'll move to the second one in a minute. 

>> JAI VIPRA: Yeah.  The question on data is the one I would like to answer. 

>> MODERATOR: Okay.  Before you do, just checking, Markus, did you want to respond to the ban or do you want to respond to the data bias? 

>> MARKUS BEEKO: Quickly, on the ban: as I said before, we believe there needs to be a legally binding prohibition, for instance, on the development, production and use of autonomous weapons systems.  But as Paul mentioned, one thing is to look at those areas which clearly infringe fundamental rights, like the right to life, and the lack of accountability and remedy in these cases.  As we are moving forward at a high pace, one thing is to have the built-in defaults Paul mentioned, to make sure that we develop ways of having human rights-respecting development.  But the other is to make sure that companies doing the developing have all their production and development processes covered by a binding human rights due diligence process. 

>> MODERATOR: Okay. 

>> MARKUS BEEKO: Not only the built-in processes, but also a constant assessment while development and research are happening. 

>> MODERATOR: Thank you.  That's very clear.  So let's move to the second question, which I will repeat: since data is essential to machine learning, how do we mitigate the political, gender, racial and other sorts of biases in the data?  So, Jai, you wanted to speak to that. 

>> JAI VIPRA: Yes.  We can look at the fintech sector, where there are pre-AI laws that prevent discrimination in terms of who you give loans to, for example.  The great challenge now is to translate those laws to the use of machine learning and the use of big datasets.  It's not always possible to regulate the inputs that go into an algorithm, or the way an algorithm is written, for example, because it's not always possible to prove the correlations that are used.  But it is possible to regulate the outputs.  It is possible to set standards for outputs.  We are agreeing a lot today, but I would like to agree with Renata again: it's crucial to preserve policy space for states to be able to do this, for people to be able to democratically engage with the state and ask for certain standards on issues that matter to them -- and for that it is, again, crucial that e-commerce agreements at the international level are not signed at this moment. 
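
Regulating outputs rather than inputs can be made concrete with a toy check on approval rates across groups. The 0.2 gap threshold and the sample data below are invented for illustration; the "four-fifths"-style disparate-impact rules from pre-AI law are one real-world analogue of such an output standard:

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs; returns per-group rates."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def meets_output_standard(decisions, max_gap=0.2):
    """Hypothetical output standard: approval rates across groups may not
    differ by more than max_gap, regardless of how the model works inside."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates) <= max_gap

sample = [("a", True), ("a", True), ("a", False),
          ("b", True), ("b", False), ("b", False)]
print(meets_output_standard(sample))  # → False: the gap is 2/3 - 1/3 > 0.2
```

The design point is that such a check needs only the decisions a system actually produces, not access to its inputs, its code, or the correlations it relies on, which is what makes output standards enforceable against opaque models.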

>> MODERATOR: Right.  Any other comments on the issue of data as biased -- racially biased, gender biased?  No?  Paul?  Yes, sure. 

>> PAUL NEMITZ: I mean, you know, people develop AI for the purpose of optimizing torture or for the purpose of mass surveillance -- there are companies which do this already, and they have great customers among the dictatorships of the world.  We have the problem, for example, that our data protection law in Europe doesn't cover technology as such.  So one can actually legally produce technology for an illegal purpose: to grab data, to scrape it, to do mass surveillance.  And I think this question of having limits on the purposes of development -- is it (Inaudible) to develop AI for torture or mass surveillance, which will be illegal in most cases? -- with the increasing power and ubiquity of this technology, these will become serious and important questions.  So the question of bans is an important one, and I agree also on autonomous lethal weapons; we already see the discussion starting, with at least some thinking going in the right direction.  On the data, I would say: if we just take data unchecked, as it exists today -- the empirics of today -- we live in a society where there's a lot of discrimination, and then we just perfect the discrimination of tomorrow, because AI will perfectly apply the discrimination which exists in society today.  And by the way, we would also have no impetus in these machines.  These machines don't have the human impetus which you may find in a judge who says, you know, the practice we had until now is not okay; we have to change jurisprudence.  This impetus of wanting to do better, of asking how the world should be rather than how it is, is not present in these machines. 

        So I think we need very important thinking about how societal innovation -- technical innovation also -- how the quest of the human being, which grows out of discontent with what is, actually finds its way into this future where machines take decisions on the basis of past empirics. 

>> MODERATOR: Okay, thank you.  Before we even ban, we just don't start -- that kind of idea.  That's a very important point.  Thank you so much, and for bringing in weapons.  Renata, you had a comment about biased data? 

>> RENATA AVILA: Yeah.  Well, you know, one of the areas where there's more enthusiasm to implement AI is in social protection, and in large interventions for the poor and more vulnerable.  Usually that's the case: the experimentation and the decisions go to places where the harm can be really, really lethal -- leaving people out of social protection systems and so on -- and especially for women, and certain types of women.  We need to also look at the role of states and local governments in this, and the responsibility states and local governments have to play in not only respecting but enforcing human rights.  We recently launched the A+ Alliance, an initiative on precisely how to reverse the design -- how to prevent these biases, at least in the social protection area, at least for women, and correct them in time.  Women will be affected, are already being affected, by these systems, and it is an obstacle to gender equality. 

>> MODERATOR: Thank you so much.  Moving on to the third question, I just want to note we have two people waiting on remote.  One is Elizabeth, just to let her know that her turn will come.  She's already queued.  And we have one also on remote who will get a chance towards the end.  So just not right now.  Just so that we know they're there. 

>> SEBASTIAN SCHWEDA: There's no question yet. 

>> MODERATOR: No, no, just so they know. 

>> SEBASTIAN SCHWEDA: They're just online. 

>> MODERATOR: We haven't forgotten one.  We have a third question from Claudia.  Also can I ask Ryan if he's in the room to get ready for the fourth question.  First of all, Claudia, would you like to pose your question to our panel? 

>> AUDIENCE: Yes, hello.  As we know, without data there is no AI, so I see a big tension between data protection and AI.  My question is: is there any way to make AI-based data collection and analysis less intrusive? 

>> MODERATOR: Thank you.  That was short and sweet.  Is there any way we could make AI-based data collection and analysis actually less intrusive?  So, equally quick and short responses, perhaps, to that question.  Maybe it's just a yes or no.  Paul, can you do it in a yes or no?  A maybe? 

>> PAUL NEMITZ: I mean, when things get interesting, it's always a little bit boring to have to end in one sentence.  So, first of all, AI is something which can be applied with great benefit to nonpersonal data.  Take the weather forecast, which will become much better with AI.  Let's not get so hung up on AI having to be used for personal data, because the biggest interest there is public relations and advertising, and really, if we want to use this technology for the good of humanity, the first concern is not to make it possible for advertising to use AI.  So I would say: let's be creative and invest in searching for opportunities for AI in research, in physics, in chemistry, in science, which have nothing to do with personal data.  There is no problem there, so let's go for it.  And, you know, there's a lot of opportunity: if you think of AI to model the development of the world's climate, that would be nice to have. 

>> MODERATOR: Okay.  That's a very important point.  It doesn't always have to be about personal data.  Therefore the question of intrusiveness may not arise.  Renata. 

>> RENATA AVILA: And in cases where they are using personal data, I think that pilots -- small pilots before big deployments -- are necessary, with the pilots open to different groups, to academia, to human rights groups and so on, as a process of validation.  And incentivizing that.  But that needs resources, and there are not so many actors willing to invest in pilots to evaluate whether it's possible; otherwise the issue happens in locked rooms again. 

>> MODERATOR: Okay.  Thank you.  Consultation by design.  Markus or Jai? 

>> MARKUS BEEKO: Yeah.  Just very briefly: one thing is nonpersonal data; the other is the patterns we look for.  In general, my feeling is that there is a debate we still have to go much deeper into, around data and probabilities, and the way that probabilities and the aim of determining future behaviors have become part of the AI paradigm.  That, I think, is the most relevant thing to look at around personal data: finding a new paradigm to secure the self-determination and autonomy of the individual. 

>> MODERATOR: Okay, thank you.  And Jai. 

>> JAI VIPRA: Just very briefly, I think we should enforce the principle that just because you can collect a lot of data doesn't mean you should. 

>> MODERATOR: Thank you.  Ryan, I think you're waiting at a mic.  Thank you very much. 

>> AUDIENCE: Hello.  Okay.  Thanks a lot for giving me the opportunity.  I have a question regarding decisions from AI systems: who should be held accountable for them?  Be it the recognition of a traffic signal by an automated driving car, or image recognition from a drone used for drone strikes, or something like this.  My impression is that the individual using it can't really be held accountable, because they don't understand how it works.  But the machine itself also can't, because it's a tool, somehow.  So who is it?  Is it the programmers?  Or is it the organization buying it, or the person deploying the algorithms?  And is there already something in data protection law that could be applied here?  Thanks a lot. 

>> MODERATOR: So, who rather than what should be held accountable, and is existing law adequate for holding them accountable, whoever that might be?  Thank you.  So let's have some responses from our panelists on accountability.  This is a very important part of the work and the thinking, because this is the operationalization of all our dreams and desires: accountability.  Who takes the rap, as they say, and who should take the rap?  Or should we outsource it to a robot who, after all, has no feelings, so they don't care?  This isn't just science fiction; this stuff is actually happening in many ways.  I'm not being facetious.  Let's try to address the accountability question at this point in the proceedings.  There are other questions to come, so I'm just going to queue Lynn for when Lynn is ready eventually.  Renata, accountability, please. 

>> RENATA AVILA: I think there needs to be an incentive for everyone to be responsible in the process: the company developing the technology, the person deploying it.  And I think we need to take a very aggressive approach here.  If I were a judge and a case of this kind arrived at my table, I would adopt a really broad approach, because if we repeat the mistakes of not holding anyone accountable, as we did with the oil companies or other big, powerful industries, I think that we risk a lot.  So full responsibility, to a broad extent, up to the president of the company, you know. 

>> MODERATOR: So you're saying it should be the designers, the owners and the organization, and that would imply the governments as well. 

>> RENATA AVILA: And investors, even. 

>> MODERATOR: Point taken.  So if I could just press on a bit, I'm getting into Fiona Bruce's role here.  We can't all be accountable all the time for everything.  So perhaps, Paul, do you have a comment, maybe on governments, or to address specifically the question as to who should be accountable at which point? 

>> PAUL NIMITZ: Well, yeah.  Your question was, is there already something in law on this?  There is, actually.  In the right to be forgotten case, brought by a Spanish citizen who wanted not to be listed anymore by Google, Google rehearsed the argument that it is not responsible for the search results which come up, because it's the algorithm which does it in an automated way.  And seriously, they wanted to shield the company behind this algorithm, and the judges said no way.  So that is actually the leading case on responsibility.  The fact that there is an automated system, an AI system, an algorithm, doesn't shield the natural or legal person which puts this into business and makes money with it from responsibility.  And I think this is something we have to maintain.  I think there can be no question of giving responsibility just to a machine or just to robots.  This really only serves to de‑responsibilize the people behind it who make money and put it into the world. 

>> MODERATOR: Thank you very much.  A very important point, outsourcing to machines, not a good idea.  Jai? 

>> JAI VIPRA: I think there is a hazard on one side and another on the other.  The hazard on one side being that if you don't hold the people responsible, there's an incentive to be reckless.  And the other hazard is that innocent people get punished for something they did not actually do.  So I think there ought to be different levels of responsibility for different types of users of AI and the kind of adverse impact it has had.  And also, importantly, as Markus said, due diligence requirements initially, in terms of how the code is developed and implemented.  So a proper chain of command with someone held responsible.  There are existing regulations in certain sectors, specifically in finance, that already do this for AI in the U.S.  So maybe we can look at that. 

>> MODERATOR: Thank you, Markus, do you have anything to add at this point? 

>> MARKUS BEEKO: I think it just emphasizes how important it is to look at decisions which are solely algorithm‑determined and which trigger consequences.  Here, number one, the responsibility, of course, is totally clear, but it just shows that this can only be applicable for a very, very narrow area of application.  In general, we need to have people still in the loop who also then share the responsibility with those who put systems in place.  And therefore, as you just reminded us, it is in the interest of companies to have clear, binding national and international regulation which regulates due diligence within the process, because it not only builds trust and takes care of responsibility, but it also makes sure that there is risk management on the company side, which is, of course, important. 

>> MODERATOR: I have an eye on the time.  We have a question from Lana here.  Could I just note the empty chair?  We did invite the tech sector to participate, and they do have the right to respond.  So just in case anybody is wondering, they were invited.  Because sometimes we are challenged about our multistakeholderism. 

>> PAUL NIMITZ: Which company was invited? 

>> MODERATOR: Google was invited, okay?  Just clarifying in case anybody is wondering; no more to be said.  We need to move now to the next question so that Renata has a chance, if she wishes, to respond to it before she needs to leave.  So Lana, the floor is yours. 

>> AUDIENCE: It's good that you reminded us of that.  My question is about oversight bodies: what role do you think they could play, and what resources do they need to be able to fulfill that role of overseeing this technology?  And with regard to the private sector, do you think that self‑regulation is sufficient, or does there need to be greater oversight over the private sector? 

>> MODERATOR: Thank you.  So Renata, would you like to respond? 

>> RENATA AVILA: Yeah, I'm not a big fan of self‑regulation, especially because we are dealing with something so powerful here.  We cannot leave it to those who do not even want to open their offices.  I mean, when you visit the office of Google, just for the visit you have to sign an NDA.  So that's the level of trust that they have when they communicate with other stakeholders.  So I don't believe in self‑regulation.  I think it is only a PR exercise.  I have seen many of those mechanisms; an insult to the human rights community, I would say.  But I also worry because, on the other side, we don't have the resources.  We don't have the sophisticated knowledge to be a counterpart in these mechanisms.  This is a big void, and I think that the state has to resolve that with a series of measures.  And in that, I'm very worried about the capture of universities, because many universities dealing with AI are flush with money either from these big companies or from projects dedicated to the military.  So we are in a very bad position when we talk about oversight, because on the one hand, we cannot trust them, the potential perpetrators; and on the other hand, we are not equipped as civic space with the means to hold them accountable. 

>> MODERATOR: Thank you very much, Renata.

I think that's a good segue now to Paul.  I think you have some responses to this, particularly from the data ethics commission's work.  So please go ahead.  And Renata will just quietly slip out.  Just before that, Paul: thank you so much, Renata, for making it.  And all the best.  And gracias, and we'll see you.  Thank you.  

[ Applause ]

 

>> PAUL NIMITZ: Yes.  So as I said initially, it's not only the view of the ethics commission in Germany; the president of the European Commission, in her policy guidelines, has also very clearly said that we need a law to ensure that our values are respected by this technology.  And I think this statement is based, first of all, on the recognition that we can have a level playing field in the internal market, but also globally, only if we have a law which obliges everyone and doesn't leave it to, you know, people joining or not joining some self‑regulatory code. 

        I would also say there is some learning in Silicon Valley.  If you read the article by Mark Zuckerberg in the Washington Post on the 30th of March, the new book by Brad Smith, the president of Microsoft, or the statement of Kent Walker, the chief lawyer of Google, they all actually say: we need laws, and we are ready to submit to democracy.  That's very new music.  That's not John Perry Barlow anymore, the declaration of the independence of cyberspace.  So I think those who say we don't need law are increasingly lonely.  Even the president of Google said we need sector‑specific laws.  So he doesn't want a general law on AI but only sector‑specific laws.  So I think we are now actually at the point, after the years of ethics committees and ethics debate, of answering the question: what has to be in the law?  The question of whether to have a law or not is already finished in Europe.  The question is what has to be in there.  Now, as to bodies of oversight, that's, of course, going to be a classic debate.  You know, in Europe there is a classic reaction: oh, please, no new institutions.  No new bodies.  This is called the light touch policy: let's not create new institutions.  But on the other hand, politics and governments in Europe recognize, and companies are claiming, that this technology will be the big thing.  It will be ubiquitous.  It will solve all great problems.  It will be powerful, and we are starting to throw a lot of money at this technology as the basis for our technological and economic future. 

        And then to say that this huge thing, which will be present everywhere, doesn't need an institutional structure of oversight is a little bit, I would say, risky, to say the least.  So I think the discussion will rather focus, as you can see in the German ethics debate and also in the debates on the European level, on how to build the oversight.  For example, on the question: are there existing institutions, the powers and competences of which, in terms of technological capability, can be enlarged to exercise this oversight, or what type of new institutions does one need?

The European Parliament, in the last mandate, already put a report on the table and said, very clearly, we need a European certification body for this.  So I think it's now important to focus on the how rather than on the whether. 

>> MODERATOR: Okay.  Thank you very much.  We need to think about appropriate, focused and human rights‑based law.  So we are talking here about human rights law that already exists, at the national level and the international level.  So Markus, do you have any comments here about oversight?  Do current human rights institutions provide sufficient oversight to address the issues that Paul has just raised, or do we need something else? 

>> MARKUS BEEKO: I mean, Paul, I think, very rightly pointed out that it will need a combination of the empowerment and capacity building of existing institutions and oversight bodies and, at the same time, the development of new ones.  What I see is that governments and international bodies are reacting.  They are moving.  Perhaps not as fast as technology and the tech sector have been moving, but they are stepping up.  And so I would think that, yes, it's possible.  I think it's going to be a challenge.  And there might be some need also to look at certain areas where we pause and halt to make sure that institutions and oversight mechanisms can be installed.  So, for instance, where we see that the public sector and governments bring in private companies to take over public services and also deliver public goods through the application of artificial intelligence, I think this shouldn't happen before we have the right institutions and oversight (Inaudible).  Just on self‑regulation: this year we have the 70th anniversary of the signing of the Declaration of Human Rights, and we've since then seen the establishment of the international human rights framework.  We've seen U.N. core conventions and many constitutional guarantees around human rights. 

        I mean, I think history shows us that there wouldn't be human rights if they were based on self‑regulation.  So ‑‑

>> MODERATOR: Thank you. 

>> MARKUS BEEKO: I think that's ‑‑ I think that's the lesson learned from history. 

>> MODERATOR: Thank you very much.  I think you both put your finger on it.  I think, Jai, you've got something you want to say.  But I want to give some time to a couple more questions from the floor.  And the party next door has already decided to start.  So thanks; they're practicing, it's called a sound check.  Sounds to me like they're rehearsing the whole playlist.  But anyway, Jai, over to you. 

>> JAI VIPRA: Very quickly, I think there absolutely needs to be oversight, and it has to be public.  So it cannot be like Facebook's effort to have an oversight board which is essentially entirely hand‑picked by Facebook.  Because if you, as a company, function as a public utility, you have to have public oversight.  If you don't like that, you should consent to your monopoly being broken up. 

>> MODERATOR: Okay.  So I have a remote participant, Elizabeth, who's been waiting patiently.  Elizabeth has a question.  Is she on audio?  Or do we ‑‑ how does that work?  Is she ready?  She might have gone off and made a cup of tea.  Gin and tonic. 
[ Laughter ]
Hot chocolate.  Maybe just queue her, let her know that we'd like to ask a question. 

>> SEBASTIAN SCHWEDA: She's on the line now. 

>> MODERATOR: She's on the line.  Hi, Elizabeth, glad you could make it.  Through remote participation, I'm a full supporter of these wonderful uses of technology to increase participation.  Elizabeth, we're all listening to you.  Your question, please. 

>> AUDIENCE: Thanks for letting us be here.  So I wanted to ask: there is a tension between the business side of things and putting ethics first.  And I don't know if we can make an international agreement if some countries really play up artificial intelligence as the future tool of sovereignty.  So I ask myself, is it feasible that all powers who develop AI at the moment agree on one international ethics agreement?  Do you think that's reasonable? 

>> MODERATOR: Is it reasonable?  Feasible.  Okay.  I think it's a good question.  I do need to ask the panelists to be brief in their answers.  It's a big question.  I know it requires a big answer.  Let's just see ‑‑ Jai, do you have an answer to this question?  No.  She's thinking about it.  Fair enough.  Markus? 

>> MARKUS BEEKO: Reasonable, of course.  Feasible, I think someone has to start.  I think we move ahead.  Whether China is prepared to join, or others, even the U.S., will depend on many things, but it's worth trying.  And what I think it needs from all of us is that we start to get this discussion not only out of this room but out of all these spaces, because if it's about the future of our societies and economies, and we really believe that we live in democratic countries, we haven't even started opening a space for people to participate, or even to understand enough to participate.  So while I think we need this international space, we also have to bring this to the local, very practical, day‑to‑day level of at least some basic understanding and participatory discussions. 

>> MODERATOR: Thank you.  Paul.  Feasible?  Reasonable? 

>> PAUL NIMITZ: Yeah.  I think it's always good to work on global coherence.  And in fact, on AI, there is already an OECD text, which is not binding but agreed with the U.S., an amazing turnaround with the U.S. in the OECD, and endorsed also by the G‑20, and which contains some of the buzzwords we have also heard discussed here.  To get to something more binding, of course, you know, that's a long way.  And there's a good rule when it comes to protection of fundamental rights: you don't wait until you have a global agreement before you start doing it.  We, in Europe, deliver the rights from our charter of fundamental rights to people.  So we do GDPR, and then we see who else is following.  But in the Council of Europe, which is the bigger international organization in Europe with more members, which has Turkey in it, Russia and so on, there is now a working group starting a feasibility study on an international multilateral agreement on rules for AI.  And, you know, this is how Convention 108 on data protection started.  So I wouldn't discard it.  I would say it's worth working on, and we certainly support this work.  But at the same time, to say ooh, let's wait until there's a global agreement before we regulate domestically, that would be a huge mistake, and I don't think that's what we're going to do. 

>> MODERATOR: Thank you, Paul.  Thank you very much.  Now, we have another remote participant who's been waiting.  Oh, sorry, Jai.  Yes. 

>> JAI VIPRA: The reason I hesitated is because there's a geopolitics to ethics as well.  And that should not be obscured.  With that caveat, sure, it's reasonable to have an international agreement on ethics.  And again, like you said, there is nothing that prevents nation states from doing what an international agreement would do. 

>> MODERATOR: Okay.  So we do have ‑‑ I think it's Shocklo.  Are they able to speak their question, ask their question by remote?  Just checking. 

>> SEBASTIAN SCHWEDA: She's not online.  She's not online anymore. 

>> MODERATOR: Okay.  Not online anymore.  I'm not going to attempt to answer it myself, because I want to move to the audience.  I've got a couple of other questions.  There is another question, sent in from Chan, who asked an earlier question.  I think it's appropriate just to have it on the record for us to consider.  She notes that the human rights implications of artificial intelligence have been heavily discussed, and asks: presuming that artificial intelligences could become conscious actors in the near future, what about the rights of intelligent machines?  So this is really pushing the envelope.  I just want it out there.  Because at the moment we've been making the distinction that our artificial intelligences are only machines and could never have feelings.  So just consider that.  Leave that hanging. 

        I would like to turn to the floor.  I have Annette here, who was here.  Where has she gone?  There you are.  Already at the mic.  On cue.  I have Parminder.  I don't have your name.  That's enough.  Number 3.  Brief questions.  But Annette, ask your question, because I know you sent it in. 

>> AUDIENCE: All right.  Thank you.  I have a question on the dignity and rights of workers who are in a special situation of dependency.  So my question is how are the rights of workers safeguarded in the development and implementation of AI processes in respect to co‑decision‑making, informational self‑determination, data protection of employees?  For example, AI, analyzing behavior of data of employees, the question of autonomy of decision‑making at work, and liability in case something goes wrong.  And how can companies and governments in their role as employers be prevented from evading responsibility by simply transferring critical AI processing to unaccountable third parties like LinkedIn and others? 

>> MODERATOR: Thank you very much, Annette.  I'm just going to take two more questions because it's important we get the questions out there.  The rights of workers and whether their employers should wriggle out of those rights by outsourcing to third‑party artificial intelligences.  Or digital intelligences.  But you had a question.  You need to be able to get to a mic to ask it.  Parminder, can you get yourself to a mic?  We just need to get those questions on the floor.  Yeah. 

>> AUDIENCE: Yeah.  Alex mentioned that surveillance is a very bad use of AI, and you said dictators are going to be using it.  But the only proof we've had so far is of its use by the most powerful democracy in the world, and by the big five, who have been sort of invading the entire world.  So the question that I have is: these companies are American, most of them, and some of them are Chinese.  And we are talking about the OECD and the EU laying down processes and rules which are applicable within those regions.  But I come from India.  And there are many countries in the developing world, in Asia, in Africa, in South America.  I don't know whether we can count on America and Europe to ensure that human rights (Inaudible).  So the question to the panelists is: what do you think are the frameworks by which we can make sure that human rights are not a privilege of the well‑to‑do? 

>> MODERATOR: Thank you.  Very important question.  So the third and last question for this last bunch is Parminder, please. 

>> AUDIENCE: Hello.  Actually, I carry on from what Guru said.  Our friend Paul from the European Commission was rightly talking about making international agreements, and Markus rightly said that there would have been no human rights if we were just following self‑regulation.  But there would have been no human rights either if we didn't go with the multilateral treaties.  And for a long time there has been a demand for the development of a digital governance mechanism in the U.N., at least a place where policy discussions could take place.  And countries in Europe and the U.S. have not allowed any such development to take place, while saying that digital is distributed across different sectors.  But the OECD, which Paul rightly mentioned, has a digital policy committee which develops AI policy and public policies on the Internet, yet they do not allow a similar U.N.‑based committee to be developed.  That is concerning, because AI represents power, and unless there is a democratization of the governance of AI, we cannot expect protections from the U.S. or China.  The question is: now that this situation has come about, do we need a global policy norms‑making mechanism which is tied to the U.N.? 

>> MODERATOR: Okay.  So: the role of the U.N., and a global norms and policy‑making mechanism based in the U.N., where all member states have a vote and can be involved.  So we have a question about the rights of workers.  And we have two questions about calling the rich, privileged, agenda‑setting players to account from the global south, if I could sum it up that way.  So first of all, a response about the rights of workers.  It might be difficult because we haven't necessarily got employers on the panel.  But does the panel have any comments on that first important question from Annette?  Jai, go for it. 

>> JAI VIPRA: Yeah, absolutely.  I think it's a very important question, and we at IT For Change do think that workers who create data should have a say in how that data is used and for what purpose.  And this does not apply only to gig workers but also to traditional factory workers who are increasingly now creating data at the workplace.  I know that consciousness on this is rising among workers and the unions.  And I think it's important that it keeps happening. 

>> MODERATOR: Paul, do you have anything specifically about the rights of workers and problems with employers shifting the blame or shifting the accountability if things go wrong? 

>> PAUL NIMITZ: Well, I would say in Europe and many countries there are, of course, existing rules.  And I think it's very important not to give the impression that these rules don't apply to AI just because the word AI doesn't come up in the law, or even the word data doesn't come up in the law.  There are rules on, you know, having to agree even with the shop stewards on the observation of workers and, you know, their performance assessment and so on.  So I think the first step here has to be: let's apply those rules directly also to new technologies and identify precisely where there are gaps, and if necessary, if the technology really changes the balance, then amend the laws.  Because, you know, I think there's a rather broad consensus that in the relationship between social partners, technology should not change the balance between workers and their employers.  So let's work with the laws, and let's not create the impression that workers have no rights when AI is introduced, for example, to analyze their behavior.  Because they do have these rights.  And I think rather we should tell the people to exercise their rights.  And then if we find there are no rights, let's put them into law. 

>> MODERATOR: Thank you very much. 

>> PAUL NIMITZ: This is a problem of labor law.  In other countries, I can imagine those rights don't exist.  You know, that is a fight to be fought. 

>> MODERATOR: Thank you very much.  Now we're moving up a level: the issue of the ongoing role of the U.N., and why the U.N. could not be a suitable space to start crunching out these problems between the powerful nations and the less resourced nations.  We have Guru's and Parminder's questions.  So does the panel want to respond to that challenge about, if I understand it correctly, a bit of a double standard?  Human rights and AI for you, but maybe we'll figure it out when it suits us.  This is not a new question, but it's an important one that requires repeated asking.  So Markus. 

>> MARKUS BEEKO: Yeah.  I think Paul already referred to the fact that we've seen quite a number of international conventions and regulations which had their starting points in regional conventions and regional initiatives.  And we also see the effect GDPR has had, not only in Europe but beyond.  So I do believe in the initiatives we've seen in Europe, but also by the U.N. High Commissioner for Human Rights in these areas, and also looking at some of the initiatives we've seen in other areas of digitalization, like, for instance, the initiative from Germany and Brazil on the strengthening of privacy, and others.  So I do believe that we will get to U.N.‑level conventions in these areas and that there will be a dynamic.  And I think the challenge is that, while this might drag on, there might be a lack of control, and there might be a power play in certain parts of the globe where we don't have safeguards at this point to guarantee rights holders' rights.  But I think we just have to then go back and push back, even if developments have taken place which are not human rights respecting.  We already see this: the U.N. special rapporteur on extreme poverty just a few weeks ago presented a report on artificial intelligence in welfare systems across the globe.  And we already see developments which are highly problematic.  But it shouldn't discourage us; we can push back, and we can also roll back some of these systems and developments which have happened so far. 

>> MODERATOR: Thank you.  Now, Paul, do you have a comment or response?  And I know Jai does.  Please. 

>> PAUL NIMITZ: Yeah.  The EU, unfortunately, is not a full member of the U.N., but the conventions which are made in the Council of Europe are open for anybody's signature.  So, for example ‑‑

>> (Away from mic).

>> MODERATOR: Parminder. 

>> PAUL NIMITZ: So, for example, the U.S. has signed up to an agreement which, let's say, rather helps a little bit with repressive purposes, which is the Budapest Convention on Cybercrime.  But we have invited the U.S. a number of times to sign up to Convention 108 on data protection.  There's a big debate going on in Washington on data protection, and maybe the debate would be facilitated by just signing up to this convention.  And many other countries ‑‑ Canada also, and we have some South American countries who have signed up.  So that's absolutely possible.  So there is a way to do international conventions.  And I would say many developing countries are invited to sign up, and it's a domestic discussion whether, you know, the governments are willing to go this way or not.  Nobody is against the U.N. doing this or that.  But the U.N., of course, is extremely, extremely slow.  And these technology issues in particular, you know, they need fast action and fast moving forward.  And so, as I said, we in Europe have to protect, first of all, our citizens, and that's why we're doing our domestic laws.  We work with others to do multilateral agreements which are open to others to sign up to.  And please join the train rather than saying we want a different train.  I wish you good luck. 

>> MODERATOR: Thank you.  Jai, and then Markus has one point to add, because we have another question, and I have another one queued here.  I'm keeping an eye on the time.  The beer is getting colder while we wait.  It's okay.  We have 20 minutes.  Jai. 

>> JAI VIPRA: I think the question has never been whether there should be international governance of the Internet, because the Internet is a global thing, but what the appropriate forum for that governance is.  And the U.N., for example, is a more appropriate forum than a trade body like the World Trade Organization, because I would not think that a trade organization should determine what the levels of security of a transaction should be, or whether source code should be required to be revealed or not.  So I think it is a question of forum, and it is a question of how democratic the forum is. 

>> MODERATOR: Thank you very much.  And Markus, you had a point to make? 

>> MARKUS BEEKO: Just on global solutions: as there's still the strong narrative that some of these things cannot be done, I think it's so important that we prove everybody wrong who says that in a digital future there can be no safeguarding of human rights and fundamental rights.  So wherever we can prove them wrong and make the point, it sends an important signal for others to follow on a very practical level. 

>> MODERATOR: Thank you.  Could you state your name for the record? 

>> AUDIENCE: Alberto, yes.  Thank you for taking my question.  Considering that data is the fuel for AI, whatever information is inside the data will power a machine, and then something will happen; a certain output comes.  So I'll give a simple example of a decision that is very human.  For instance, I will drive from one location to another.  I can think: I want to get there fast, so I drive fast.  I can also think: I want to keep in mind environmental factors, such as reducing CO2, so maybe I won't drive as fast.  So if we want to input this into a machine, then the machine can optimize the situation and say: I will drive this car really fast, or I will drive this car in a really environmentally efficient way.  My question would be: how can we include nonexplicit parameters into decision‑making by AI, parameters that also take in the environment, such as the SDGs?  These are not part of the function of the AI, but they are part of everything that happens because this AI is doing something. 

>> MODERATOR: Thank you.  I'm going to add the last question I have.  Nobody else from the floor?  Okay.  Ah.  Okay.  Implicit considerations: how do they get factored in, with AI that is up to the job of making inferences about what might be needed?  Then we have another question.  What measures are in place to deal with ‑‑ this is from Fionna, who sent this in ‑‑ what measures are in place to deal with international conflict about the governance of AI?  This is a young person looking ahead to conflicts.  And if there are measures in place, what principles of international cooperation could there be in this area?  So a little bit of brainstorming, maybe: issues around drawing inferences and implicit assumptions.  And then a third question from the floor, very briefly.  Do you want to go to the mic?  Yeah.  Just keep it brief.  It's 5 to 6:00 and we need some time for summing up and responding to these questions. 

>> AUDIENCE: Hello.  My name is Doughty Carr.  I'm here for the Youth IGF.  I wanted to say that many Internet intermediaries, such as social networks, are using algorithms, decision‑making processes or AI tools in order to counter or to overcome hate speech.  And I wanted to know what you think about it.  First of all, do you think an algorithm could effectively do this while promoting free speech, and how do you think it should be done?  And also, I would like to know about any resources I can look up later, because I really want to know about alternatives against hate speech in these kinds of networks. 

>> MODERATOR: An important point, because under the pressure of recent events ‑‑ shootings livestreamed recently in Christchurch and elsewhere, revenge porn ‑‑ the question is whether AI can be used to counter hate speech.  It's a very important point.  We have three really big questions.  I'm just going to ask our panelists to respond to the ones that strike you most strongly, because then we need to go into the summing up.  So Paul, would you like to begin? 

>> PAUL NEMITZ: Well, in Europe, we have a legal definition of hate speech, and surely the machines cannot do this perfectly, but they can do it at the core, I would say ‑‑ evident, clear incitement to violence, clearly at the core of inciting hate.  So if you're looking for resources, look at our code of conduct, the EU code of conduct on hate speech, and the implementation reports there.  And it's still necessary to have a human check after the machine.  But the machines can help.  They're not perfect.  There's no law which is perfectly enforced by humans, and there's no law which is perfectly enforced by machines.  But they can help to stem the tide. 

        On the clean car: in a democracy, of course, a legislator can decide that in the future, when we have automatic cars, the automatic car has to be programmed to be sustainable and clean, with minimum exhaust by default.  You could imagine that any car you buy will have this type of programming in it, and that you have to make conscious choices ‑‑ and maybe you're limited in them ‑‑ to go fast instead of going clean.  This is what the discussions are already about today with speed limits.  We have legislators who say we don't want any speed limit ‑‑ Germany ‑‑ and we have a lot of legislators in Europe who say we put in speed limits for safety but also for environmental purposes.  So I think we have to think of the law prescribing certain elements of how the technology is implemented.  That has always been the case, and it should also be the case when it comes to making sustainability work. 

>> MODERATOR: Thank you.  Markus and then Jai.  Markus, did you have a response to any of those questions? 

>> MARKUS BEEKO: I think Paul has outlined it to a large degree.  I wasn't sure about the question on the implicit and the explicit ‑‑ as we also discuss transparency and accountability, and given the point about the environment, that would also be a part we would want to make explicit and transparent.  I'm not quite sure about that.  On conflict, I think that in the area of lethal autonomous weapons, we have a good chance over the next years of getting a binding international ban on those.  I think it's harder if we're not looking at fully autonomous weapons, and not necessarily lethal weapons, so there is some discussion.  But at least in this area, we have a good chance of seeing a binding convention. 

>> MODERATOR: Thank you.  Jai. 

>> JAI VIPRA: Yeah.  On hate speech, I think the political economy question is to what level we want private censorship to exist.  And actually, at this point we might want a little bit of it, although with transparency.  So you would want to know how the decision was made, and you would want to know what parameters were used.  And you would want to have control over those parameters as a society. 

        We also need to ask whether, at this point, it is actually profitable for the platform to let hate speech proliferate.  That's the important question. 

>> MODERATOR: All right.  Thank you.  So, drawing it to a close.  I know there are a lot more questions; we have many on the record already.  Just before we do draw things to a close, I'm going to say the thank‑yous now, because if I wait till the end, you're all going to be out of the room.  I'd like to thank our colleagues at Amnesty, Elena and Sebastian, and Sweder and Minda Moreira at the Internet Rights and Principles Coalition, and to invite you to come and join us at booth 49.  We are down at the cool end, above the water.  You can help yourselves to copies of the Charter of Human Rights and Principles for the Internet in any language, and offer to translate it into other languages.  That's been the work today, just to get this on the agenda.  So to finalize, and to ground ourselves before we head off, I'm going to ask my panelists the difficult task of coming up with one ‑‑ what we like to call a takeaway ‑‑ in the form of an action point.  Because our rapporteur here, Minda ‑‑ we are required by IGF rules to have action points.  We don't want to get into actionism.  We don't want to be too quick to rush to solutions before we know what the questions are.  But I do think we've had a very, very productive discussion. 

        So for our thinking going forward, you may raise another question, of course.  It might be an action point.  So I'd like to ask Markus to open with his final thought.  Paul.  And then I'll give Jai the final word.  So thank you. 

>> MARKUS BEEKO: So if this is about everybody's future, then the people in this room, and many people at the IGF, are those who better understand it, or at least know the questions.  I think one action point should be that we all look at our possibilities for taking this out of these spaces ‑‑ to help everybody understand, or at least think about, the very personal aim you might want to look at: how you take this discussion forward with others into the public debate, and make it one where human beings together shape the way we take this forward. 

>> MODERATOR: Thank you very much, Markus.  And Paul. 

>> PAUL NEMITZ: Yes.  I think we cannot deplore the state of democracy and the rise of populism and at the same time say let's maintain a light touch, let's have codes of conduct.  I think we need to show that democracy really can make a difference ‑‑ that it has rules which are binding and can be enforced.  I think it's very important that people interested in the issues we have discussed today, and also those who have the technical know‑how, reengage with democracy, reengage with those instances which work for binding rules, which work for lawmaking in democracy.  So we have to go beyond debate, beyond programming, beyond being cool at conferences; we need sustained engagement in the rulemaking process, in political parties, in big organizations, sticking with parliament until the rule is through.  So that's my action point for everybody. 

>> MODERATOR: Thank you so much.  Much appreciated.  Jai. 

>> JAI VIPRA: The action point I think would be a legal declaration that data and digital intelligence are people's resources.  Because we know now that democratic control over AI is possible, and democratic control has consistently been the only way to ensure the existence and enforcement of human rights. 

>> MODERATOR: So with that, I think I'd like to thank you all for being such a brilliant and very attentive audience and for the great questions.  And to thank my panel, Paul Nemitz, Jai Vipra and Markus Beeko for their time, energy and commitment because I know they're going to move forward and make some of this stuff happen.  And they're inviting us all to take part, I understand.  So the invitation is open.  Thank you very much. 

[ Applause ]

 
