IGF 2019 – Day 2 – Convention Hall I-D – OF #13 Human Rights & AI Wrongs: Who Is Responsible? - RAW

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Ladies and gentlemen, good afternoon.  Very warm welcome to this open forum No. 13, Human Rights and AI Wrongs:  Who is responsible?

     It's my great pleasure to be your moderator this afternoon, and I would like to introduce the members of the panel that will lead the discussions.

     To my right, Joe McNamee, who is a member of the Council of Europe's Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence.  He has also held many other responsibilities, which you will find on his CV.

     If you agree, then to my right, Mr. David R. from the Fundamental Rights Agency of the European Union, head of research and data.

     To my left, Ms. Clara N., Senior Director of IEEE, and Ms. Cornelia C., Senior Director at Microsoft responsible for EU government affairs privacy and digital policies.

     Artificial intelligence is having an increasing impact on all our lives, and as experts, which you all are, I don't have to go into the details.  The Council of Europe organized a conference earlier this year in Helsinki entitled "Governing the Game Changer", and that title, I think, sums up the impact and the challenges we are facing.

     A whole series of issues arise out of the increased application of AI whether it be by companies, whether it is by governments, and these challenges relate to the design, the development, and the application of AI systems.

     At the Council of Europe, we have so far looked at the matter from a vertical perspective, if you like: several sectors have produced guidelines on particular uses of AI, for instance AI and racism and antisemitism, AI and data protection, AI and culture, et cetera.

     Earlier this year, in May, following the Helsinki conference, our governing bodies, 47 governments, decided to start working on a legal framework for the design, development and application of AI, and the committee set up to do that, which is called CAHAI, met for the first time in Strasbourg last week.  It is a unique effort.  To my knowledge it is the first international effort to go beyond the existing ethical frameworks (there are about 200 of them) and to try to establish a legal framework with, of course, rights and obligations to ensure that AI is a force for good.

     I will leave it at that, and I would like to invite Joe McNamee to kick off the discussions.

     Joe, please. 

     >> Joe:  Thank you very much.  I had the honor to be a member state representative on the expert committee that Jan just mentioned, preparing a report and a draft recommendation on addressing the impacts of algorithms on human rights.  As usual for the Council of Europe, it was a meaningfully multistakeholder process involving academic experts, civil society, industry and member state representatives.

     It also included a successful and very diverse public consultation, which produced some very insightful and very constructive responses.

     The outputs of that committee are of a very high quality, and we believe that they will be a basis for future policy development in this area.  They are, obviously, available on the Council of Europe's website.

     I also had the opportunity to participate in a truly excellent Council of Europe event under the Finnish presidency, called Governing the Game Changer.  I warmly recommend watching the videos, which are still online, together with the resolution that was adopted at the event, and a summary.

     It was unique in my experience, because I had never before heard one person say "this conference was the best conference I've ever been at", and this was one event where multiple people said it was the best ever, so it's definitely worth taking a look at the videos.

     The one thing that has become obvious to me in the two years of discussions that I've been involved in at the Council of Europe about AI is that AI isn't magic.  We shouldn't talk about it like it's magic.  It cannot do everything.  It cannot fix everything.  It is generally owned and implemented by the powerful.  Its victims are rarely powerful.  There are things it can do.  There are things it can't do.  This is why the recommendation underlines in its preamble the need to ensure that racial, gender, and other societal and labour force imbalances that have not yet been eliminated from our societies are not deliberately or accidentally perpetuated through algorithmic systems.

     When it comes to policy making, I would like to borrow and stretch an analogy used by one of the other speakers, Clara, at the game changer event.

     Imagine if we talked about air travel the way we talk about AI.  Who should be responsible for inconceivably complex passenger airliners?  An airliner can have millions of parts delivered by hundreds of suppliers.  Is it really possible to allocate responsibility in case of failure?  Should we really restrict airline innovation by regulating safety?  Of course we should.  It would be preposterous to suggest otherwise; however, somehow, under a cloak of techno-magic, this is how we talk about artificial intelligence.  Of course, airlines are able to avoid paying tax on fuel, and they impose externalities on the planet in the form of CO2 emissions, not unlike big tech companies, which are accused of avoiding tax and impose externalities in the form of the pollution of democracy.

     We need to understand that these externalities of AI can cause real harm to real people, and those people are generally the least well able to defend themselves.  We have to identify and regulate the externalities, and we have to enforce the regulation.  It must not be profitable to cause harm.  It must not be profitable to cause significant risk.  And, we must radically accept the notion that some applications of technology are not acceptable in a democratic society.

     We need to understand the peculiar economics of data in order to fight actively, energetically and successfully against monopolies and to promote competition.  We must avoid getting sidetracked by lobbying buzzwords like "innovation", as if predictable legal frameworks were a burden.

     We need to avoid getting sidetracked by lobbying buzzwords like "ethics", as if its subjective frameworks were a meaningful replacement for a predictable legal framework.

     And, to quote somebody I don't know called Natasha on Twitter: the future was supposed to be Star Trek.  Instead, we're getting Mad Max.

     Let's get back to Star Trek.

     Thank you.

     >> MODERATOR:   Thank you very much, indeed Joe.

     Now I pass the floor to David from our partner and sister organization, the European Union Agency for Fundamental Rights.

     Please, David. 

     >> David:  Thank you very much.  Yes, I work for the EU Fundamental Rights Agency, which is one of the slightly more than 40 EU agencies, and our main mandate and task is to provide the EU institutions and member states with data and analysis on fundamental rights issues.  For "fundamental rights" you can read "human rights", but specifically the fundamental rights of the Charter of Fundamental Rights, applicable in the European Union.

     So, we are also running a project on artificial intelligence and fundamental rights, looking into concrete use cases, and just today I would like to take the opportunity to announce that we published a paper on facial recognition technology and its fundamental rights implications, which is available on our website.  This paper also addresses several of the questions that are discussed in this session.

     The question posed for this open session was: who is responsible for human rights?  It was mentioned quite often, but we have to repeat it again: it is the state that is responsible for observing human rights, and this is especially so when the state is using AI for public administration, for example.  The state is responsible for giving effect to human rights and for coming up with safeguards and possible regulation.

     The most important tool we have to counteract human rights violations when using AI was also mentioned a few times: in my view, this is the human rights impact assessment.  Impact assessments are available in existing data protection law, like the General Data Protection Regulation and Convention 108+, and they are also recommended by several stakeholders.

     I think it's the best tool we have at this time to counteract human rights violations when using AI, because the main problem we have these days is the lack of knowledge about the uses and the workings of artificial intelligence.

     As the previous speaker mentioned, there is still a lot of science fiction talk around the use of artificial intelligence.  People think about the Terminator, about the movie Minority Report and other Hollywood movies, whereas what is actually happening and being developed has nothing to do with science fiction.

     There is no autonomous AI or computer that programs itself, at least I haven't seen one.  So, it's important to really look into concrete uses of artificial intelligence, what it does and where the harms come from; however, although there is no science fiction, there are still quite some challenges in evaluating the possible violations by artificial intelligence.  One problem is that usual machine learning algorithms are based on training data, which means historical data; the computer uses these data to find rules that can be applied in the real world.  However, we are quite limited in going beyond this training data and often cannot evaluate what then happens on the ground when it is deployed in practice.

     One example is that the performance percentages we often hear reported are just based on some training data, and we do not know what they mean in real life.  When it comes to facial recognition, for example, this refers to the numbers of false positives and false negatives, which always need to be evaluated on real numbers in a specific context.
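
     The base-rate effect behind this warning can be sketched with a small, purely hypothetical calculation (the hit rate, false alarm rate, crowd size, and watchlist size below are all invented for illustration, not taken from any real deployment):

```python
# Hypothetical face-matching deployment: even a system whose error rates
# sound very good produces mostly false alarms when genuine matches are rare.
def alert_counts(tpr, fpr, crowd_size, on_watchlist):
    true_alerts = tpr * on_watchlist                  # expected genuine matches flagged
    false_alerts = fpr * (crowd_size - on_watchlist)  # expected innocent people flagged
    precision = true_alerts / (true_alerts + false_alerts)
    return true_alerts, false_alerts, precision

# 99% true positive rate, 0.1% false positive rate, 100,000 people screened,
# of whom only 10 are actually on the watchlist.
true_alerts, false_alerts, precision = alert_counts(0.99, 0.001, 100_000, 10)
print(round(true_alerts, 2), round(false_alerts, 2), round(precision, 2))
# about 9.9 genuine matches against about 100 false alarms:
# fewer than 10% of the alerts point at the right person
```

     The same error rates yield a very different picture in a context where the watchlist population is larger, which is exactly why the numbers must be evaluated per deployment.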

     Another problem is, of course, copyright: often information is not available to do a proper human rights impact assessment.  When a public administration procures AI, a good recommendation is to request the information necessary to be able to do a proper human rights impact assessment.  This is what we recommend in the paper on facial recognition technology, and it was also recommended by others.

     Another problem is linked to the lack of knowledge and understanding of what algorithms do.  People often speak about black boxes, implying that we can't understand what is going on under the hood of an algorithm.  I think we shouldn't buy this as such, because there is quite a lot of research, and there are opportunities to learn how algorithms work: first of all, by describing the training data that were used to build the algorithm; secondly, by looking into the main predictors, and so forth.  Once we've done this, we have already increased our understanding quite a lot.  However, one other point that was discussed, and was part of the draft recommendations on the impact of algorithms from the Council of Europe expert group, is that for some cases there are limited opportunities to do real-world experimentation, because we cannot, of course, experiment on humans for certain use cases.  This means that we do not know the exact impact of certain AI in the real world, and therefore we have to invest more in the human rights impact assessment.

     So, I'm closing by reiterating the importance of the human rights impact assessment in context, because it's quite challenging to make an overarching statement on certain technologies.  We see this with facial recognition technology: there are so many different ways it can be applied and used that it is very difficult to make a general statement about its impacts.  What is important when human rights are assessed is to consider the full range of human rights and to discuss all the different balances against each other.

     Thank you.

     >> MODERATOR:   Thank you very much, David.  We have often heard, at least some pretend, that setting standards, technical or otherwise, would hamper innovation.  I think IEEE is a very good example demonstrating that that is not the case, and I am very pleased to give the floor to Clara.

     >> Clara Neppel:  Thank you.  Joe also mentioned the very nice conference in Helsinki.  I agree, it was one of the best conferences I have been to.

     So, I think it was at that conference that I introduced the concept of informed trust, and in this presentation I would like to look at what this means for AI.

     So, first of all, I would like to start with something positive: AI can definitely do a lot of good.  I just read that AI can predict epileptic seizures with very high accuracy.  Now, you can imagine to what extent this really contributes to the well-being of the people affected.  However, as you can see now on the screen, there are certain cases where AI does not work so well.  Maybe some of you already know this example, but here the last dog, the poor dog, was not recognized as being a dog, because the model had in effect learned to be a snow detector rather than a wolf-and-dog detector.
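
     The wolf-and-dog example describes a spurious correlation: the model scores perfectly as long as snow happens to co-occur with wolves, and collapses the moment that coincidence breaks.  A toy sketch of this failure mode, with invented binary features rather than real images, might look like:

```python
import random

random.seed(0)

def make_samples(n, snow_tracks_label):
    """Each sample is (animal_cue, snow, label); label 1 = wolf, 0 = dog.
    animal_cue is a genuine but noisy signal; snow is pure background."""
    samples = []
    for _ in range(n):
        label = random.randint(0, 1)
        animal_cue = label if random.random() < 0.75 else 1 - label
        snow = label if snow_tracks_label else 1 - label
        samples.append((animal_cue, snow, label))
    return samples

def accuracy(classifier, samples):
    return sum(classifier(a, s) == y for a, s, y in samples) / len(samples)

# The shortcut the model effectively learned: ignore the animal, look at snow.
snow_detector = lambda animal_cue, snow: snow

training_like = make_samples(1000, snow_tracks_label=True)   # wolves always in snow
deployment = make_samples(1000, snow_tracks_label=False)     # correlation broken

print(accuracy(snow_detector, training_like))  # 1.0: looks flawless
print(accuracy(snow_detector, deployment))     # 0.0: fails completely
```

     The point of the sketch is that an accuracy figure computed on data that shares the training set's coincidences says nothing about behavior once those coincidences change.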

     So, when we talk about AI, we also have to think about what the measures are to increase this trust, and what we call informed trust is important because otherwise you basically have two options.  One option is that you have no information about what is in this black box; then there is distrust, and you will not use the technology at all.  As we saw with the epilepsy example, that would not be a good solution.

     The other alternative is that you have blind trust, and that again is not a good solution, because you might use the technology in a way that was not foreseen.  We are a standards association, but we are also an association of engineers, technologists and scientists, and we publish quite a lot, and now some of our stakeholders say they are uncomfortable publishing some data sets because they might be used in a way that was not intended.  So, what are the ways to disclose what is inside this black box?

     Well, we already heard about the principles, and there are 80 to a hundred of them, I think; it varies.  I think it is important to start with those principles, but then it is also important to define what we mean by them.  Transparency can mean something completely different for me and for you: for one of us it can mean having an understanding of why a robot recommended something, for instance for elderly people to take their medication; for others it might really be the technical details of how it was designed.

     So, the definition of principles is important.

     Then, it's important to understand how to achieve what we want, transparency being one of the important examples, for instance.

     Ultimately, it's then also important to prove that somebody actually satisfies those requirements.

     So, one way to do this, and it's not the only way (sorry, it's very small), is through standards.  A standard is basically nothing other than a consensus-setting mechanism, and at IEEE we moved from principles to practice quite soon.  We started with the principles three or four years ago, and almost in parallel we started developing some concrete tools, the recipe, if you want, for achieving what the principles told us.

     So, we are now working on standards, which are technical standards on interoperability, but also what you could call ethical or impact standards, which range from how to put ethics into the code (ethical system design, if you want, so a systems thinking approach) to new measures of how to measure the impact of AI solutions.  Very soon we also started a certification, actually launched in Helsinki by a public-private partnership that was the initiator of that, and for the moment there are three certification threads: transparency, accountability, and algorithmic bias.

     As you see, we are also involved in education and going more into verticals: for businesses, for artists, health, even parenting, and so on.

     Since we are talking here about AI wrongs, I think we also have to talk about accountability.  I give here an example, which you unfortunately cannot read (Laughter), but this is more or less the structure of one possible way accountability requirements can be set up.  We have some inhibitors, as you can see; one would be rubber-stamping, for instance.  And there is evidence that can be audited, such as transparency, as a matter of fact, and clear procedures that are in favor of this accountability.  So, this is just an example of how we set up the certification program.

     Basically, I just wanted to give you an example of how standards and certification can be part of the solution to build this informed trust, which is necessary for AI solutions.

     Thank you.

     >> MODERATOR:   Thank you very much, indeed, Clara, for this presentation.

     I would now like to invite Cornelia from Microsoft to give her presentation. 

     >> Cornelia:  Thank you very much for the invitation.  Microsoft is a strong supporter of the Council of Europe, and, I swear we have not made this up, I was also very, very delighted to participate in the Helsinki conference, but I want to be a little more precise.  If you ever go to the website, my favorite intervention was from Paul Mahoney of the European Court of Human Rights, who gave an excellent analysis of how artificial intelligence can be addressed under the human rights convention and how this human rights framework is technology-neutral in a way, and I think this was one of the best interventions I have heard on how AI will be and is based on the rule of law.  So, I want to give some insight into how we are doing this and how the draft recommendations play an important role.  As David already said, adherence to fundamental rights is of course a state matter, and states have to put the laws in place, and I think we are now at a stage where we have to review, largely, where there are gaps and where we need to fill in those gaps.

     Microsoft started around three years ago to develop its principles, and I think we have seen over the last two or three years, across the globe, the development of ethical principles in the context of AI.  Largely, I think, this was a reflection period where people really thought about what we want this technology to do and what we want it not to do.  But of course, once we had established these principles, and they are fairly homogeneous across the world, certainly those that have been developed by the OECD or by the European Commission's High-Level Expert Group on AI, we started, in a cross-group effort across the company at the highest level, through an AI and engineering group, to actually go into the details of how you can implement these principles in practice.  That is one of the reasons that Brad Smith, our President and Chief Legal Officer, recently established the Office of Responsible AI, which is currently tasked to do four things.

     First of all, it is developing a fairly encompassing responsible AI life cycle, which really starts with how you transform these principles into engineering guidance, so that engineers who are starting to envision tools can incorporate the principles into this process, and it is a process.  It really consists of envisioning, defining, prototyping, building, launching, and evolving those AI tools.

     Then, and you will see that there is a certain analogy to the draft recommendations, we also have an escalation model, so that when we talk to customers and when our sales organizations contract around tools and AI, and they see sensitive issues, there is an escalation model that goes back to the Office of Responsible AI to make decisions when we believe that in certain circumstances our principles cannot be adhered to, and what that means.

     Then, we also believe that we need to empower our customers to think about the responsible life cycle, which is an end-to-end thought.  It does start with developing AI tools, but of course, given the power of these tools that was already talked about, and how they evolve once they are used, customers also have to think about how to be responsible in the deployment of AI tools.  So, here we have started to help customers and give them guidance in how to think through the issues around AI.

     Lastly, and this brings me back to the Council of Europe, we are focused on helping to develop policy and engaging with stakeholders, such as this forum, on the types of laws we need.  There is one particular area where we have clearly said we need laws, which is the use of facial recognition.  Not all uses of facial recognition, of course, are problematic.  You can detect people in images without identifying them.  You can verify people in one-to-one verification systems, which are largely not problematic in the context of human rights.  And then, of course, you have identification, which can, if misused, lead to surveillance situations.

     In all of those, you need to think through what the responsibilities of the developers of the AI tools are and what the responsibilities of the deployers are, and in particular in the government context we believe there are certain uses that require a legal basis.  I will stop here, and then we can go into details in the discussion.

     Thank you.

     >> MODERATOR:   Thank you very much, Cornelia.

     I would like to invite the panelists to comment on each other's presentations, perhaps starting again with Joe. 

     >> Joe:  I agreed quite energetically with most of what I heard, so I don't have many comments.

     I think the facial recognition topic is definitely one that needs much more reflection.  It's really good that the fundamental rights agency has pushed the debate forward, and it's good that we have an industry representative on the panel that also wants a legal framework within which we can operate.  So, all good.  (Laughter)

     >> MODERATOR:  David. 

     >> David:  Thank you.  Yes, I think there is a lot of agreement, but let me also highlight the importance, because we have different perspectives here on the panel, of working together on the topic.  It is really important to have interdisciplinary work when we want to understand the human rights implications of artificial intelligence, because there are so many different elements that play together.

     Then, I also want to add, because this was discussed, that we already have quite a good legal framework available to apply to AI.  As I mentioned before, the GDPR has very many good provisions that can also be applied to artificial intelligence, for example the provisions on automated decision-making.  What we also need to learn in the future, as we go along, is how this legal framework will be applied in different use cases, and I think this is a very exciting and interesting time where we will see how the current tools we have are applied.

     The GDPR also mentions discrimination in its text, and discrimination is one of the most important human rights violations that can occur when using artificial intelligence.  We have very strong anti-discrimination legislation in the European Union, which also applies to the use of artificial intelligence.  However, as mentioned before, it is not so easy to detect when discrimination occurs, because there are several different ways discrimination can happen, for example through biased data reflecting existing discriminatory practices, which are then perpetuated or reinforced through algorithms.

     Another is unrepresentative data: we learned that facial recognition, if it is mainly trained on white male faces, will not work well, especially for black women, as was shown by some research.
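
     The practical lesson from that research is to report error rates per demographic group rather than one aggregate figure, which can hide large disparities.  A minimal sketch of this disaggregated evaluation (the group names and counts below are made up for illustration, not the actual study's figures):

```python
from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of (group, was_error) pairs; returns per-group error rate."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_error in results:
        totals[group] += 1
        errors[group] += was_error
    return {group: errors[group] / totals[group] for group in totals}

# Invented evaluation log: 100 test faces per group.
results = ([("lighter-skinned men", False)] * 99 + [("lighter-skinned men", True)]
           + [("darker-skinned women", False)] * 65 + [("darker-skinned women", True)] * 35)

overall = sum(err for _, err in results) / len(results)
per_group = error_rates_by_group(results)
print(overall)    # 0.18 overall: a single number that looks tolerable
print(per_group)  # 1% vs 35%: the aggregate hides a large disparity
```

     A single headline accuracy here would mask the fact that one group experiences a far higher error rate than the other.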

     Then, even when the data are fine, there could still be differences that are difficult to interpret, and in this way discussing the application of existing human rights to the use of artificial intelligence helps us push forward the human rights discussions.

     >> MODERATOR:   Thank you very much, David.

     Clara. 

     >> Clara:  So, I noticed that almost all of us recognize that it is important to work together.  I can say that, as technologists, we are building these bridges.  I think it is important to close the gap between legislation and technical development.

     One way to do this, which I think was discussed, is sandboxing.  Another is self-regulation.  And I very much liked the analogy you used between AI and aviation.  I would say it's the same thing: at that time there were a lot of accidents, so what did it take to arrive at the safety that we have right now?  Basically, it was a combination of social norms, education, legislation and self-regulation, and I think we have to tackle all of these.  For that, it is important to discuss with all stakeholders.  The standards we are developing are also about recognizing the issues and the values of the users, and also of the people affected by the system; it doesn't have to be a user, it can be somebody who is affected by the system, or society in general.

     So, thank you for bringing, again, many stakeholders together at this table.

     >> MODERATOR:   Thank you very much.

     Industry. 

     >> Cornelia:  Yes.  So, we are indeed at a very interesting phase currently, and this is largely my own thinking on how this will evolve now.  Starting with the responsibilities: I think there will almost be an obligation for governments now to basically look at existing legal frameworks, liability in particular, but largely any legislative framework in which AI tools will be deployed, and to stress-test those regulatory frameworks against the human rights that exist, under the human rights convention or the fundamental rights under European law.

     To give you a couple of examples: if governments make decisions on school allocation, for example, and those are enhanced by AI tools, then of course non-discrimination will be a central part, and when deploying certain technologies they will need to understand the limitations of those technologies, how the data sets that were used play a role in the decision that comes out of the tool, the algorithms that were used, and how that might continuously evolve as they use the system.  I think these are fairly new considerations for governments when they use technology tools, and we think this is necessary.  We have looked at this in particular for facial recognition, just to say that there are now first decisions from DPAs, but also from courts across Europe, analyzing whether the use of the technology was legitimate, and here it is particularly important to understand the broader set of human rights that come into play.

     One last consideration: I think there is a bit of a stress test happening now with the GDPR, in particular around the way the GDPR is being applied, and I would caution against looking only at the GDPR as the solution.  The GDPR will play an important role, of course, but there are also, for example, market safety considerations.  It will really be contextual.  So, there is a lot of work in front of us on the development side, but in particular on the deployment side of technology.

     >> MODERATOR:   Thank you very much.

     A question I would like to put to the panel is innovation versus regulation.  Many of the governments we work with seem to be having an intense internal debate, not to say disagreement, on whether regulation would stifle innovation.  I would be very grateful for your comments on that, thinking about the aircraft industry, which was mentioned, or, for instance, the pharmaceutical industry, both of which are very heavily regulated, but both of which seem to be highly innovative as well.

     Perhaps to go in reverse order.  Cornelia, short comment from you, if I may. 

     >> Cornelia:  Yeah, I was never a particular fan of "regulation stifles innovation" slogans, so I might not be the best representative to defend that myth.

     Maybe in some areas regulation will steer innovation in a different direction, more than anywhere else where human rights are concerned, and it is important to allow that.  This is also an economic point: having baseline regulation will help companies that want to do the right thing to continuously do the right thing.  So, baseline requirements are also needed to help companies that want to respect human rights to be innovative. 

     >> Clara:  Thank you.  I would say, and I think this is also a citation, that law floats on a sea of ethics, doesn't it?  Ethics is about values, and we all know innovation is also about the value proposition.

     So, I would argue that if we take the values and the ethics of people into account, the value proposition is actually going to improve.  There are some companies that already recognize this, already build ethics and values into their design thinking, and derive profits from it.  So, I would say that if we have the right regulation, which reflects the values and ethics of society, it will also help innovation.

     >> MODERATOR:   Thank you very much.

     David, please. 

     >> David:  Thank you.  I think that's an important topic.  Looking from a European perspective, we have values which are agreed upon in the EU treaties, and these values are not negotiable.

     At the same time, I also think that solutions that would challenge these basic values are not sustainable.  We hear a lot of discussion about trust, that we need to have trust in the technology, and trust can only be maintained if we observe and consider basic fundamental rights.

     We had the case of Cambridge Analytica, for example.  If, for example, privacy is not taken seriously, people will lose trust, and this will not make any technical innovation sustainable.

     Having said that, of course I see there is still a lot of uncertainty as to the way some laws need to be applied, and the way data can be used, but this is part of finding out about these new technologies and better understanding how, for example, data can be used for research purposes in a safeguarded way.

     >> MODERATOR:   Thank you.

     Joe, please. 

     >> Joe:  Regulation works for creating a clear predictable, accountable framework within which everyone can operate.  Self‑regulation works when there is a vested selfish interest on the part of industry stakeholders to achieve a specific verifiable public policy outcome.

     Self‑regulation never works when it is imposed as a result of a threat of regulation.

     I think one thing that we, as a society, have monumentally failed to do is recognize that self-regulation works sometimes, in some contexts, and doesn't work at other times in other contexts.  We have never sat down to establish what the characteristics of successful self-regulation are, to focus on self-regulation in environments where we know it is more likely to work, and to avoid using it in environments where it is less likely to work.

     We tend to blunder in every time and make the same mistakes over and over again, or accidentally get it right.

     >> MODERATOR:   Thank you very much.

     I would like to open the floor to all of you.

     Please.

     >> AUDIENCE:  Yes.  Thank you.  Berto, AI.  I would like to go back to your comment on AI.  I will tell you that three years ago I would have agreed with you, okay, but what have we learned in those three years?  We learned that with 300 likes I know you better than your spouse or your best friend knows you.  And then we learned from the book of (Speaking non‑English language) that when you apply the techniques of behavior scientists, like BJ Fogg of Stanford University, who teaches software developers, you can manipulate behavior in a scientific way.  And a little story goes like this: the students were supposed to do an end‑of‑semester exercise to build the most unethical application, and they developed Instagram and sold it to Facebook.  Okay.

     So, what BJ Fogg taught the students is that in order to manipulate somebody, it's easy.  You have to be in a three‑dimensional space: your motivation, your ease of doing something, and, the third thing, a trigger.  So, if you are observed 24/7, machine learning will know exactly which trigger I will be sensitive to.  Okay.  So, that is to say that they can manipulate us, literally, trigger by trigger.

     The conclusion that we have reached so far is that, in fact, these business models, which are based on manipulation, should be forbidden, period.  Because just as you regulate alcohol, you regulate drugs.  So what do you think of that?  Do you think we have made any progress from three years ago?  You mentioned Cambridge Analytica, but you should look at, and I hope you read, the book Zucked by McNamee.  Here we are, in terms of society, with business models which use all of this science to manipulate us.

     >> MODERATOR:   Thank you.

     Who would like to react from the panel? 

     >> So, I agree with you; I think that's the most dangerous aspect of AI.  I'm just coming back from Helsinki, where there was a conference on the data economy, and there again there was a lot of emphasis on data, personal data, which I think is important, but less emphasis on what happens with the trained models, which basically build up the profiles that you just mentioned.  I think this is something that we definitely have to discuss and look at more closely as a society, to see what the red lines are, because I agree with you that for the moment we don't have any control over what dimensions are set up in a profile for us.  We cannot control whether somebody would like to set up a psychological profile, and I think it is part of our human rights that we should have control of this, and also be able to see whether it is true or not.  So, there is no correction mechanism around it.

     So, I heard somebody say that either it is correct, and then you are in the world of Orwell, or it is not correct, and you are in the world of Kafka, and I don't think we want either of these.  This is my personal opinion.  I think this is definitely something we should look at more closely.

     >> MODERATOR:   Thank you very much, Clara.

     Before giving the floor further, there is a microphone also going around, so if someone sitting somewhere else, and I will turn around, too, wants to ask a question, please raise your hand and a microphone will come.

     Are you the first one, please. 

     >> AUDIENCE:  Hello, my name is Ega, and I am here because of the previous gentleman who mentioned this about the algorithms.  Basically I'm here because I'm a father and I'm very concerned.  You are talking about AI, you are talking about face recognition, and what has been happening with Facebook over the last five years is exactly what this gentleman meant.

     The business model of Facebook, and they disclosed this, is that they wanted to find a gap in the human psyche where they can keep people as long as possible on the website, give them some dopamine in between to stay there, and keep them busy on the website.

     If you follow this development, you can see, in line with the spread of the Internet, that the behavior of millions of kids is already changing.  They are developing diet problems, developing homework problems.  This is already happening, and we don't need to think about what we can do in five years.  We need to do it now.

     If we focus only on AI or face recognition, we will have already spoiled a generation, and what are we going to do for them? 

     >> MODERATOR: Who would like to react?  Industry. 

     >> First of all, I can't speak for Facebook.  I have children myself at a critical game‑addiction age.  There is, I think, work to be done in a number of areas, and I would like to think that we need to compartmentalize the issues a little bit, into where we can actually address them and how we can address them.

     One area, for example, is advertising.  Another is privacy.  I'm not sure whether the GDPR has actually been able to tackle the issues that you were mentioning, in particular, and there are good studies around this, in relation to inferred data, which is what we learn from the data that is collected, and which is the core of knowing more about me than my partner knows, et cetera.  So, if we start to analyze very correctly and precisely where the issues are, I think we will also have a better ability to find tools, and I do believe that there is a sort of third dimension of privacy that we need to respect.  But it's not only about privacy.  I believe that we need to have more discussions around responsibilities in human‑computer interaction, which is another area.  And then there are some areas where work will have to be done at a European level or a national level, because they are harder to tackle internationally, such as, for example, election legislation, which has been an important part of the misinformation debate.  You have national election rules that are very different from one country to another, and we need to look at how they can eventually help to reduce that problem.

     So, I don't have a direct solution, but I think if we start to compartmentalize a little bit, we can actually tackle each area in a much more concrete, impactful, and effective way.

     >> MODERATOR:   Perhaps in addition, I should add that one of the proposals being discussed at the moment by the Council of Europe on the application of AI is the question of whether human rights impact assessments should be made obligatory, and when we speak about human rights impact assessments, they should certainly include an assessment of the impact on vulnerable categories, children being first and foremost amongst those.

     Perhaps the lady here, and then you. 

     >> AUDIENCE:  Louisa Clinga from European Commission.

     Actually, I would like to ask exactly about impact assessments.  These are often put forward as a kind of way forward to ensure fundamental rights compliance of AI, and I would just like to know, because obviously the impact on fundamental rights has to be assessed in relation to the final use of a product, in the actual legal compliance environment where it's going to be used.

     So, how do you see this being practically implemented, given that AI is a technology where the industry or sector does not always show full vertical integration?  Obviously, doing an impact assessment when you're very downstream, close to the market, would be quite easy, whereas upstream, at an early development stage, you might not know at all the final use or the final features the product might develop.

     So, how do you see this in practice?

     Thank you. 

     >> Thank you.  I'll take that one.  There is no clear guidance yet available on what human rights impact assessments would have to look like, and I think it's also challenging to say in detail, because there are so many different applications of AI.  Everyone agrees AI is quite a broad term, and we also hear about the different ways of application in different areas.  So, whether there is some harm or no harm often depends on the context, but at least it's the only tool we have available now at a general level to look transparently into what is being done in a specific context.  For example, to look into the accuracy of predictions, as was mentioned before, and how we deal with inferred attributes, as mentioned by Cornelia.  We can only evaluate in a specific context, and then we need to understand: is this just a prediction on the training data, or was there some experimentation that could tell us more about it?  But I'm afraid there is no clear guidance.

     What we provide, for example, on face recognition, is a discussion of many different rights, starting from non‑discrimination through data protection and freedom of expression; good administration is an important EU principle.  Considering all these rights starts to give a bigger picture of where the problems might lie.  And, beyond the GDPR, which is a very good tool to apply, there are also policy processes ongoing, as mentioned, the CAHAI at the Council of Europe, and the Commission has also said there will be an initiative coming up, so there is clear awareness that more needs to be done.

     We will see what comes up.

     >> MODERATOR:   Thank you.  We have six minutes before we have to clear this room.

     >> AUDIENCE:  Thank you very much.  I thank the panel for the very inspiring ideas which they have given us.

     I am from the University of Graz, I am an international lawyer, and my concern is how we can move from self‑regulation through soft regulation to regulation as such, meaning binding regulation.  The Council of Europe is already at the forefront here.  What I mean is, it is certainly important to go for responsible AI.  It reminds me of the slogan of Google when they started, Don't Be Evil.  Certainly, one should avoid this.  Trust is important, and so on.  But, in the end, all these companies are in competition with each other.  There is shareholder value.  There is pressure from the markets.  There is the cost factor, and so on.  So, in the end, I think it is not only in the interest of the user, it is also in the interest of business to have clear rules which are binding on all.  And these rules should, if possible, be global, because we have a global situation.  And they should apply on the one side to the development of certain forms of artificial intelligence and then also to the use.  Facial recognition is the example here: it has very good uses, but it is also used for surveillance, as we know.

     So, should it be the same for a company to sell it to a democracy or to an authoritarian government?  There should be a difference.

     Thank you.

     >> MODERATOR:   I would just like to briefly come back to the first two questions.

     In my presentation I said we must accept the notion that some applications of technology are not acceptable in a democratic society, so I agree with those speakers.

     I would also briefly like to note that I think your two questions raise a very important point in terms of giving data in return for free services.

     That's not what is happening.  You are not giving data in order to allow Facebook to know you better than you know yourself.  You are not clicking like in order to give a kid a dopamine hit to keep them on the website.  That's not the transaction.  And, as long as we keep pretending that is the transaction, we will not make any progress.

     On the most recent question that was asked, I think we need to really think about self‑interest.  Generally speaking, it is in the interest of the market as a whole to have regulation that is clear and predictable.

     If there is a vested interest in the market as a whole to achieve a public policy outcome, by self‑regulation, then we can predictably expect that self‑regulation will achieve its goal, and we don't necessarily need regulation.

     As I was saying a moment ago, what we need to do is know which is which.  We need to be clear about what outcomes we want, and we need to be clear on the likelihood that industry will want to achieve the same outcomes, or will seek to exploit the lack of legislation.

     I think if we talk about self‑regulation as if it's the same in every context, then we will keep on making the same mistakes over and over again.

     So, interest is the answer to the self‑regulation soft law and regulation, in my view.

     >> MODERATOR:   Thank you.  Then, perhaps, one last question.  I think there was a question here behind. 

     >> AUDIENCE:  Hello everyone.  Glad to be here, and thanks for the very nice dialogue; being part of this multistakeholder environment is very good for the youth.

     My name is Elner, and I'm coming from Azerbaijan as the IGF Youth Ambassador 2019.

     I am here, obviously, representing the youth voice.  Before the IGF started, we organized the IGF Youth Summit, where we discussed different issues, including AI and its impact on our lives, and we drafted, together with youth from all around the world, 11 messages, and one of the messages is about how to regulate AI in our modern world.

     You know, all the discussions, not only in this session but in other sessions, say that we should regulate AI to protect ourselves, but another question is: can we really trust humans to regulate?

     So, our message is about how human intervention can be regulated.  I will read it briefly, as it's a brief message: human intervention must guide AI‑driven decision making to ensure explainability, inclusivity, privacy, accountability, and the right to appeal.  It shall occur whenever the decision rendered has disruptive personal consequences, especially for vulnerable groups, such as the youth.

     So, this is the voice of the youth: we think that whenever there is human intervention in AI, it should also be regulated by certain principles, and these principles shouldn't be restricted to human rights itself, but should include international principles.  What we mean by disruptive personal consequences is the application of AI in education, health, and the other sectors that have a direct impact on our lives.

     So, thank you very much.  That was just a comment from the voice of the youth, who are really glad to be part of this multistakeholder approach, and we hope in the future we will have a direct impact in the IGF forum.

     Thank you very much.

     (Applause)

     >> MODERATOR:  Thank you very much for this.

     Before closing, I would like to give the panelists the chance to give one more sentence before we end.

     One sentence, please. 

     >> Maybe I will just comment on the last intervention.  Thanks a lot for that.  I think at the end of the day it shows that AI is there to support humanity and not vice versa, and we are in a situation where we can reflect a lot on how our systems are made through AI, but it should not be the other way around.

     Regulation is definitely needed in certain areas, and we should start really the process in looking at where and when we need these safeguards. 

     >> Yes.  Again, coming back to what was just said, one‑third of Internet users are actually under 18, and I can assure you that there are not many products that are designed to take this into account.  So, I think all future developments should first take this issue into account.  And then, in our discussions, we have to consider the whole life cycle of AI.  We have to consider the input data and all the issues around privacy, but that's not enough.  As we heard, the important value is around the aggregated data, what we are associated with, and the goals for which AI is developed.  And, I think for all of these we need transparency and ways to correct and audit.

     >> MODERATOR:   Thank you very much. 

     >> Thank you.  Very briefly.  I also liked the last intervention very much, and I think it's important, especially for youth, to increase the understanding of artificial intelligence.

     As was mentioned, we also have in the GDPR the prohibition of solely automated decision making and the right to human review, and this is a very important one.  Here it's also important to increase our knowledge about human‑machine interaction.  Does a human review mean that the person is just signing off whatever the machine says, rendering the human review irrelevant?  Or does research show that when the human overrides a decision from the machine, it could even turn out worse?  So I think machine‑human interaction is an important avenue for future research, as well.

     And, as a closing sentence: a lot is going on.  A lot of processes are happening these days, and I just want to encourage everyone, from her or his perspective, to contribute to this discussion to get it right.

     >> MODERATOR:   Thank you.

     Joe. 

     >> Briefly, I would like to go back to what Louisa from the European Commission said about upstream diligence and awareness of possible uses.

     I think one thing that hasn't been mentioned, and tends not to be mentioned, is that we're faced with very restrictive intellectual property and trade secrets rules that are going to get in the way and are going to stop a balance from being struck.  We need to think about how to ensure that legislation that was not intended for this purpose, and was not designed for this purpose, does not stop us achieving human rights objectives in relation to the application of AI.

     >> MODERATOR:   Thank you, Joe.

     In closing, let me stress that at the Council of Europe, in any legal standard setting, we will closely consult with Civil Society, with youth, with industry, and of course with governments.

     I thank you all for your presence, for your participation.  I would like to invite you for a round of applause for our panelists.

     Thank you very much.

     (Applause)

     (Concluded) 
