IGF 2018 - Day 1 - Salle XI - WS98 Who is in Charge? Accountability for Algorithms on Platforms

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> GONZALO LOPEZ‑BARAJAS:  Hello, everybody.  Welcome to the Workshop Number 98:  Who is in charge?  Accountability for algorithms on platforms.

My name is Gonzalo Barajas.  I work for Telefónica, so I am a business representative.  Let me first introduce the panelists who will be conducting this session.

Here on my left I have Fanny Hidvegi.  She's Access Now's European Policy Manager, based in Brussels, and she has a long-time focus on privacy: she has worked on EU-U.S. data transfers, participated in the fight against the national data retention law in Hungary, and promoted privacy in technologies.  She also has a strong focus on artificial intelligence, which is the reason she's sharing this session with us.

Now to my right I have Phillip Malloc, Vice President, Head of Group Policy and Group Public Affairs at Telia Company.  He's also Chairman of the Board of ETNO, the European Telecom Network Operators' Association, which has more than 40 members, is based in Brussels, and represents basically all telecom operators in Europe.

And we also have two panelists who will be joining us a little bit later, since they are coming from a different panel that has not ended yet, but they will join us shortly.  One of them is Lorena Jaume-Palasi.  She's the Founder of the NGO AlgorithmWatch and has now also started a project named the Ethical Tech Society.  She participated in the opening panel on emerging technologies early this morning, where she made a great contribution around the ethics of algorithms.

And finally we also have Karen Reilly, the Managing Director of Tungsten Labs, which builds communication technology with privacy by design.  Previously she managed cloud infrastructure in the private sector and worked on information security and censorship for NGOs.

Now Kristina Olausson, who has been coordinating this workshop, will explain how we are going to be working in this Workshop 98.  Please, Kristina.

>> KRISTINA OLAUSSON:  Hi, everyone.  I'm Kristina from ETNO, one of the organizers of this session.  Thank you for coming here today.  The set-up is a bit different from a normal panel session.  We would like you to interact more with the speakers and also to get up and move around, so what we will do is divide the audience, you, into two groups.  One will be on this side, so we'll have to ask you to maybe get up from your seats and move a bit closer to the speaker, so you can all hear each other.

And the other group will be on this side so I would say we can split somewhere here in the middle, and please feel free to get up and go to one of the corners so you can hear what the speakers say and you can interact with them.

I hope that's fine for everyone.  We will do that now for half an hour.  Then we're going to come back, and the speakers will bring the messages from your discussions back to the floor in a discussion with Gonzalo, our Moderator, for 20 minutes.

And I'm here if you have any questions but for now, please go to the two sides of the room, and we can start the breakout sessions.  Thank you.

Fanny is now on the left side, and ( ? ) on the other, so please join them and come closer, because otherwise it will be hard to have a discussion.

>> GONZALO LOPEZ-BARAJAS:  For those joining now, we are having two groups to do a discussion, so please come here to this side or to the other side to join the different conversations.

You can stay here on this side.

[ Off microphone ]

Sorry?  No, no, they are parallel discussions.  They will not be identical, and we would prefer them not to be identical, but they are addressing the same topics, the same questions, so it will depend on the different groups.  But, yes, it's the same discussion.

[ Breakout discussion ]

>> GONZALO LOPEZ-BARAJAS:  We have time for one more question, intervention in each group, and after that, we'll reconvene and do the joint session.

[ Breakout discussion continues ]

>> GONZALO LOPEZ-BARAJAS:  Once you finish the intervention, please let us all come together here, where the speakers will present a brief summary of what their discussion has been about, and then we'll have a chance to -- okay, one more minute.

[ Breakout discussion continues ]

>> GONZALO LOPEZ-BARAJAS:  So...

[ Off Microphone ]

Thanks a lot for your contributions to the breakouts.  So now we'll give the speakers a minute to organize how they will present the results of the different breakouts.  In the meantime, if you could come a little bit closer, or you might want to move, so that we can have a lively debate afterwards.

So basically now the speakers will give a brief summary, 5 minutes, presenting what the discussion in the groups was about, and afterwards we will have a lively debate among all of us, to see how we can move this forward and what the main messages are.  Of course, those who have not been able to participate so far will have the chance to do so, so that we can have a more interactive session.

We also have the Online Moderator, so we will be bringing in questions from WebEx, from online participants, as well.  We will just give the speakers one more minute to organize their summaries, and then we will move forward.

So, since Phillip had an easier job -- well, maybe a more difficult one, but he does not have to agree with everyone on his intervention -- we will let him go first and give a brief summary of the discussion in his breakout session, and then we will go to the other group.

So, Phillip, when you are ready.

>> PHILLIP MALLOC:  Hi, everybody.  Thank you very much.  We had a lively discussion over on our side of the table.  We broke the topic down into the three respective areas which came through in the questions, so we had an important opening discussion on breaking down the problem, on what we're trying to address here.  So we firstly got to the point of the question: how can we make this human-to-human process, that is, the creation and utilization of algorithms, something that's really understandable for all the people involved?

We also touched a little bit on how we can reconcile transparency with people's intellectual property rights in the commercial space as we go forward.

There was also an important and very active discussion, which I think crossed over all the discussions here, about how this is very actor-dependent in many instances.  What is the role of a Government actor?  What is the role of a private sector actor, and of others?

One notion we kept coming back to time and time again was purpose: what is being done, who is doing the algorithms, and what was the purpose behind initiating the process of utilizing an algorithm in the first place?  I think we also came back to the transparency point.  There was a very good point raised on how we work with the level of abstraction here: if you're incredibly transparent, do you get to a point where, for the average citizen, it becomes a completely incomprehensible topic area?  Yet if you go too far the other way and you're too vague, does it have the impact it's meant to have?

But having said that, we got to the point that some disclosure is better than nothing, and we really need to start somewhere.  There was a very interesting discussion raised by a few speakers here, which we extrapolated on a little, about the potential of some kind of moratorium: given the purposes for which you would potentially use AI and algorithms, is there value in putting a moratorium on, say, war situations, weapons, or other situations where there's a relatively undeniable need for a strong human element in the decision?

But we're not starting from a blank slate either.  I think everybody recognized that we have existing principles, such as the UN Guiding Principles on human rights, and the question is how we extrapolate those when it comes to a future of AI and algorithms.

We also touched on the idea that the court system could well be the system best placed to act as an arbiter as we move forward on a case-by-case basis, and the overall idea that there isn't a huge amount of value in rushing to create very prescriptive pieces of legislation.  There was some debate, though, on whether the interplay between concentration in markets and self-regulation lent itself to being a credible tool going forward; this is where we saw the discussion about how business and competition elements come into play.  There's also a big question on jurisdiction, both geographical and institutional, if you do choose to move forward here.

So how, in what fora, and on what terms would you move to regulate in this area of AI and algorithms?  We heard that the European Commission, for example, has already kickstarted some work in this area.  The European Union of course already has quite significant tranches of legislation, covering everything from telecom regulation onwards.  Other fora were mentioned as well: is the G7, for example, the correct place to have these kinds of discussions?

I think I'd sum it up generally by saying that one notion that kept coming back to the table was that trust really is a parameter which all actors need to value, and will value, whether from a commercial perspective, a Government perspective, or the perspective of those creating AI and algorithms.  But that's not to say that there's a free license.  There needs to be some level of oversight in how we get there.

I think some people were quite clear that they prefer a process of iteration, where we confront challenges as they emerge.  But that is not to say that the problems aren't very real; problems can be very serious when algorithms make errors, and some examples from self-driving cars have certainly shown that.

So I'll probably leave it there as a summary.  I'm sure there's about 50 points I missed but I'm sure everybody will be willing to put their hands up when the time comes for discussion.

>> GONZALO LOPEZ-BARAJAS:  Thank you, Phillip.  Now, so we can know, who will be providing the summary?

>> All three of us.

[ Off Microphone ]

>> GONZALO LOPEZ-BARAJAS:  I don't know -- Lorena, do you want to go first?

>> LORENA JAUME-PALASI:  Yes.  We started with the idea of explainability and what it means.  It seems that we have different ideas of what explainability would be.  Some participants were thinking about data: just understanding what type of data has been used is good enough.  There was also a conversation about how you might also want to know the parameters and how they are being weighted, although that has, of course, a specific impact if you do it at a public level, if the explanation is meant to be an explanation for everyone, because this means that you can game the system, that you can learn ways to fool the algorithms.

And the discussion continued further with -- and I think we didn't reach a common agreement on that -- aspects of an explanation that are less concentrated on the data or the system and more on the output of the system.  Does the system discriminate, and for which reasons?  What are the reasons for discrimination or classification?  This takes a slightly more social approach to the explanation of an algorithm and concentrates more on the social impact of algorithms.

And the conversation went back and forth on that, and I think this is a pretty good example showing that explainability is important to understand not only with regard to what an explanation is, but also with regard to the addressee.  To whom are we giving an explanation, and for which purpose is the explanation being given?
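Nothing like this was shown in the session itself, but the output-centred notion of explainability described above, auditing what a system decides rather than how it decides, can be made concrete with a minimal illustrative sketch.  All names, groups, and numbers below are hypothetical:

```python
# Illustrative sketch: "output-level" explainability as a disparate-impact check.
# Instead of disclosing training data or model weights, we audit the decisions
# a system has already made.  All data below is fabricated for illustration.

def approval_rate(decisions, group):
    """Share of positive decisions received by one group."""
    outcomes = [d["approved"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(decisions, group_a, group_b):
    """Ratio of approval rates; values well below 1.0 suggest group_a is disadvantaged."""
    return approval_rate(decisions, group_a) / approval_rate(decisions, group_b)

# Hypothetical decision log from some automated system
log = (
    [{"group": "A", "approved": True}] * 30 + [{"group": "A", "approved": False}] * 70 +
    [{"group": "B", "approved": True}] * 60 + [{"group": "B", "approved": False}] * 40
)

print(f"approval rate A: {approval_rate(log, 'A'):.2f}")            # 0.30
print(f"approval rate B: {approval_rate(log, 'B'):.2f}")            # 0.60
print(f"disparate impact: {disparate_impact_ratio(log, 'A', 'B'):.2f}")  # 0.50
```

Note that this kind of audit needs only the decision log, not the model internals, which is exactly why it can be addressed to regulators or affected communities rather than to developers.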

And with that I will pass over to Karen, who can add more insights.

>> KAREN REILLY:  We also talked about understanding the impact of the output of especially large datasets, where you may not gather sensitive data, but with a large enough dataset you can infer things that become sensitive data.  So explainability should also encompass what you end up with at the end.  This is something that not even the engineers may understand at the outset, but the impacts can be wide-reaching and severe, especially if you bring in intersections of health, race, gender, and economic status.  There are real-world harms that have already been done as a result of data collected for seemingly innocuous purposes like targeted advertising.
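The inference Karen describes can be sketched in a few lines.  This toy example, with entirely fabricated records and not something presented in the session, shows how a dataset that never records a sensitive attribute can still be used to infer one from innocuous fields such as purchase categories:

```python
from collections import Counter, defaultdict

# Toy sketch: the dataset stores only "innocuous" purchase categories, yet a
# simple frequency model learned from a labeled sample can infer a sensitive
# trait.  All records are fabricated for illustration.

labeled = [
    (("vitamins", "maternity_wear"), "pregnant"),
    (("maternity_wear", "unscented_lotion"), "pregnant"),
    (("vitamins", "unscented_lotion"), "pregnant"),
    (("beer", "snacks"), "not_pregnant"),
    (("snacks", "magazines"), "not_pregnant"),
    (("beer", "magazines"), "not_pregnant"),
]

# Count how often each purchase category co-occurs with each label.
evidence = defaultdict(Counter)
for purchases, label in labeled:
    for item in purchases:
        evidence[item][label] += 1

def infer(purchases):
    """Guess the label whose categories best match the purchase history."""
    score = Counter()
    for item in purchases:
        score.update(evidence[item])
    label, _ = score.most_common(1)[0]
    return label

# A new customer's purchases never mention the sensitive trait explicitly...
print(infer(("vitamins", "unscented_lotion")))  # -> pregnant
```

The point is that "we don't collect sensitive data" is not the same as "no sensitive data can be derived": once the derived label exists, it carries all the risks of sensitive data, whether or not the law treats it that way.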

>> FANNY HIDVEGI:  And finally I will report back a little bit on the General Data Protection Regulation, which unsurprisingly came up in the conversation.  But let me start by thanking all the participants for being open to addressing the challenges of the set-up of the room; I think we had quite a good conversation despite that.  It came up a lot whether data protection regulation is adequate or not to address some of these issues, and we discussed the specificities of the relatively new EU data protection regulation, which you might be familiar with.  There are not a lot of differences between the former Directive and the GDPR, except for the really, really huge fines and the whole enforcement piece.  But there is one big difference, and it's actually related to explainability and redress against automated decisions.  It will pose a lot of challenges when we look at the impact of artificial intelligence and algorithms on human rights, because the redress mechanism, which is to object to the decision, is arguably only accessible when the decision was fully automated, and we had a conversation about how rare it is that a decision is actually fully automated.

And that has also an explainability limitation, I would say.  So that was one aspect.

And finally, the other aspect was the difference between the personal and de-identifiable data itself that the law protects, and the insights and conclusions that companies, the private sector, or anyone can draw from that dataset, which might not be protected by the law, and how in the future this could be a challenge for data protection authorities to have proper enforcement mechanisms.

>> GONZALO LOPEZ-BARAJAS:  So this is a kind of experiment.  We see that we have some diverging approaches in the different groups, so let's try to focus the debate, for example, on explainability and transparency, which are issues that were addressed by both groups.  On one side, we discussed, for example, whether we want all the information that was used to come up with the results.  I recently read about a case at a university where they used an algorithm to grade students' work, in order to solve the bias of the different persons grading the work.

And basically, at the end, it turned out that when they gave full transparency on how this was implemented, some of the students that did not get the grade they wanted did not really appreciate the transparency of the process.

So, I don't know -- the question here, one of the issues that we were addressing, is: who is the transparency, the explainability, going to be addressed to?  Who is it going to respond to?  And are we ready to deal with the reasons, with the response on why the algorithm has come up with that result?  Any views on this?

>> LORENA JAUME-PALASI:  Well, I think it's important to differentiate between transparency and explainability, because an explanation is a different thing.  An explanation is a reconstruction; it's always a justification.  Whereas transparency does not try to justify anything.  And of course, both when it comes to transparency and when it comes to explanation, there's always a subjective point.  Who is giving transparency, to what?  Which factors are being used to provide transparency?  To whom?  Transparency to a developer is a different story than transparency to a policy maker, and a different story than transparency to a user.

And it always depends on the purpose of the transparency: you want to have insight into some specific factor of the technology.  And of course, you're right, there's this ambivalence in the technology.  All things AI, all things machine learning, are very good at pattern recognition, so of course when we human beings discriminate, we leave a pattern in our behavior, and with the technology you can have a better insight into the ways human beings discriminate.

So there are two dimensions to this technology.  On the one side, you can amplify your own bias by coding and by using data in such a way that, without noticing, you are using the technology to discriminate.  But on the other side, you can use the technology to learn more about human nature and how human beings discriminate against each other, and this technology might be very helpful in showing you how subtle the ways are in which we human beings are biased, without noticing, even without wanting to.

And that is, by the way, a good potential of this type of technology.  From a monitoring perspective, having algorithms that look at things from an architectural point of view, at how institutions behave towards different genders, different cultures, ethnicities, religious beliefs, and all of this, can give a lot of insight into how the administration is acting when it comes to people who want access to Social Security or to specific services, and the same goes for the private sector.  So I think it's important we acknowledge that there's that ambivalence.  To recapitulate: we human beings, because we are building this technology, are showing that we can create bias and discrimination on many different levels, and that might show up in the technology; but the technology might in turn help us understand ourselves, learn from ourselves, and be more consistent and less discriminatory.

>> FANNY HIDVEGI:  There's one component we haven't really talked about, which should be the basis of the transparency requirement, especially in public sector use.  We didn't talk about transparency around contracting and public procurement, and the most basic requirements for Governments to disclose, when they use an AI system, which companies they contracted with, how the system was developed, and who manages it.

For instance, there's one example which is really, really not well known.  The Hungarian Government, maybe 6 or 7 years ago, started a pilot project in Budapest, the capital city, in the Józsefváros district, which is known to be populated by a lot of Roma residents.  They started a facial recognition pilot project there, and they put the whole project under the operations of the national secret service, to avoid any transparency and access-to-information laws and disclosure requirements.

So nobody knows what's going on and what they use the information for, but we have evidence that there's a discriminatory fining and sentencing practice on the basis of the perception of someone being a Roma person in Hungary.

>> KAREN REILLY:  I would say that on the issue of transparency, you can have fully open source code, you can have access to all the academic papers that led to the algorithm; on a technical basis, everything can be 100% free and open.  But the more important data for assessing the impact will come from the communities that are impacted by discrimination in AI.  In a community that is disproportionately affected by predictive policing, the people know what discrimination looks like, and they should be believed when they say: this is discrimination.

And in the bad cases, where they say, okay, this contract is secret for national security purposes or something like that, and we can't show you the algorithms, we can't show you any of these things -- you don't even have to get into that.  Just believe people when they say bad things are happening as a result of this technology.

>> GONZALO LOPEZ-BARAJAS:  So it seems that we have two different approaches, as was commented previously in that group.  One was related to the role of Government regarding transparency, and it was also commented that maybe business has a different role or a different responsibility.  Is that because of the impact that what they are doing has on society?  How could that be implemented?

And intellectual property was also mentioned before: full transparency of the algorithms could not be provided, because for businesses there is intellectual property associated with the algorithms.  So is the role of Governments and businesses the same regarding transparency?  And what does intellectual property have to do with it?

>> FANNY HIDVEGI:  Just a quick comment before that -- I would not underplay the role of the private sector in human rights violations and its impact on our lives, so I'm not sure I would differentiate between the responsibilities in that sense.  Of course, there's an existing human rights framework, globally and regionally, that's applicable to State actors, but there are also the UN Guiding Principles, which are applicable to the private sector, so the respect, protection, and promotion of human rights should, in my view, be equally applicable to all actors.

>> LORENA JAUME-PALASI:  I think it's important from a legal perspective to make a difference between the private sector and the public sector, right?  There's a good Democratic reason for that.  But of course, when we talk about accountability, accountability is, in many cultures, not a legal concept.  It's an economic concept, a private sector concept, by the way.  And there's a huge difference between what the U.S. means legally when it talks about accountability and what is meant in Europe.  The GDPR is the very first time ever that this concept has been enshrined in law, and it's a problematic concept, because from an ethical perspective accountability makes a lot of sense, but from a legal perspective, having to prove in advance that you have not done something wrong is a weird way to proceed legally.

So I think we need to be clear when we talk about accountability, whether we are talking about it from an ethical perspective or from a legal perspective, and be also clear that accountability means in many different legal cultures different things.

Now, going back to the concept from an ethical perspective, I think it depends very much on the context.  You cannot say, as a general rule, that companies have lower stakes than Governments.  If we take a look at Facebook and how Facebook operated in Myanmar and Bangladesh with the Rohingya, it was a problematic situation where a company was helping a Government operate in such a way that we had a genocide.  And I wouldn't say that it's an easy situation to decide, because if you're a company operating in a country that is an autocracy, you need to decide: do I keep providing the system?  Do I need to abide by the law of the country?  Or, if I do not, by which law do I abide, or by which type of ethics?

And how do I operate in a way that is both legal and legitimate?  It's not an easy issue, and I wouldn't say that companies are per se devils that only want profit.  That's not true, and I don't believe that.  I see a lot of companies with lots of engineers and people working there who just want to shape and operate with the future.

But I think it's important to have an ethical conversation about that.  When we talk with companies about what we mean by accountability, and also about what we want them, as companies, to be accountable for, companies are still thinking that they want to have the ethical feedback from society.  And that's good.  That's important.

But a company should also create its own ethical profile, its own virtue ethics, and say: we're a company that has decided to have this specific ethical profile; this is our understanding of ethics.  They also have to come to this conversation, because ethics is a whole-of-society conversation, and it's not only Civil Society that should be having it, but also the companies.  This also means showing your inner virtues as a company, showing your inner ethical principles.

And I haven't seen many companies saying that.  I see many companies that say: oh, we do this partnership with Facebook and Google and so on, and pledge to follow human rights.  But that is a simple commitment.  That is not an explanation of who you are as a company and what values you stand for.

>> PHILLIP MALLOC:  That's an excellent point.  Hopefully it shows the commitment of the private sector and business that we put forward this topic to be discussed today and arranged these types of debates around it, because it's an equally pressing topic, and I agree that a multi-stakeholder environment for these debates is absolutely essential for business and all stakeholders.

I'd just point to a couple of examples of ETNO members who have publicly published their own take on some of these ethical standards.  Telefónica, to my left, very recently published their guidelines, or principles, towards AI.  Another ETNO member, Deutsche Telekom, published a document last week, for example, which is now open for public scrutiny.

And the European Commission, for example, which has gathered its expert group on AI, also involves, I think, a rather broad cross-section of stakeholders from Civil Society, Government, business, and elsewhere.  So I think there is work going on there.

Is it perfect?  Probably not.  Is it work in progress?  Absolutely.

>> FANNY HIDVEGI:  Just to build on that, because I'm part of the expert group for the European Commission: just to be clear, it's around 6 to 7% Civil Society, more than 60% business, and the rest is academia, so it's definitely not a proportionate representation of stakeholders.  I agree that there's a role for ethics frameworks, and I'm not questioning that, but I think we have to be very careful, because there's also a tendency to develop these principles to avoid compliance.  There are existing human rights frameworks that should and can be applied in the first place, and then on top of that can and should come the ethics and principles.

Just one example of that.  Google published its AI principles, and it has an interesting section on redlines, on when AI should not be developed and deployed.  One human rights implication they mention is that it's a redline when the intention is to harm human rights, but they don't mention that it should also be a redline when the impact violates human rights.

>> GONZALO LOPEZ-BARAJAS:  An intervention?

>> Thank you.  My name is Charlotte Altenhoener, from the Council of Europe.  I wanted to point, in this context, to an initiative at the Council of Europe to develop, through the work of an interdisciplinary committee of public bodies, private companies, and independent Civil Society experts, recommendations to Member States on how to address the human rights impact of the deployment of algorithmic systems, and to do that through two different lines of work.  One is to make very clear what the obligations of Member States are: what do they need to demand, and what do they need to ensure, in order to comply with their own human rights obligations when it comes to safety and security in algorithmic decisions, to data quality, to transparency and accountability, and also to effective remedies?  At the same time, the aim is to make very clear what standards private companies and private actors engaged in the design, development, and deployment of algorithmic systems should follow.  And the purpose, very much, is to go beyond ethics.  Ethics are wonderful and important to promote trust, but maybe at this point we do not just need trust; we also need trustworthiness.  We need people to actually be able to rely on these systems.  We need, perhaps, more auditability and clearer standards, also in terms of companies, so they understand what they should do, and what type of innovation can be incentivized to address inequality, in the way Lorena mentioned, rather than reinforcing it, et cetera.

So this is an ongoing initiative.  It's a longer-term activity, and we are hoping to adopt recommendations in early 2020, I'm afraid.  We're working on this now; we will have public consultations over the summer.  We want this to be a very open process.  And we will then have at least politically binding standards for Member States and for companies.  Thank you.

>> GONZALO LOPEZ-BARAJAS:  We could try to see if there is any intervention from the online participants.  No?  Okay.

So, since ethics is not enough and maybe recommendations are needed, maybe this could go to another section, about regulation.  So, is regulation needed?  There was talk of regulation on both sides: here you mentioned the GDPR; here you mentioned competition law.  Maybe you could elaborate more on how this regulation could be applied to algorithms, or on what you're referring to exactly.

>> PHILLIP MALLOC:  I'll do my best but I'd encourage people who were part of our discussion who are far more qualified than I am to intervene at this point.

I think the point we got to was that, ultimately, responsibility is responsibility, and whatever the choice of technology you use to fulfill an action, be that AI or something else, there is still a human element to the initiation of a process.  So I think that hopefully leads to some level of accountability.  One thing I would personally point to on regulation -- and we have many people here from the Brussels fora and its discussions -- is that the only issue you have with regulation is that it tends to be painfully slow, and that's why I enjoyed one of the points made here about working in an iterative way, if we can.

And so I think there's some value in exploring, as much as this technology is going to revolutionize society around us, whether it gives us a little bit of free rein and scope to try out ways of managing a policy process that are slightly more innovative than the ones we've had for a very long time.  What that looks like in its entirety, I'm not exactly sure, but hopefully we can think a little more laterally, so we don't stamp on a nascent technology too early.

>> FANNY HIDVEGI:  First, I just wanted to say that I think it's sometimes painfully slow and sometimes painfully fast.  For instance, the European Commission right now is considering a law on terrorist content regulation, and that's going to be passed before the elections because there's a political impetus to pass it; whenever they don't want something, then of course it's painfully slow.  But on regulation -- I already talked about this report to this part of the room, but I want to mention that Access Now published a comparative report on all the EU Member States' proposals and strategies that are already publicly available, and on some of the regulatory initiatives from regional bodies.  One interesting overarching theme in all of them is that it's too soon to regulate AI as a whole.  At the same time, all of them acknowledge that we have existing legislative frameworks that are applicable to artificial intelligence.  And the very interesting thing is that when we mentioned to the European Commission that we were doing this scoping and mapping exercise, they got extremely excited, because they don't have that overview of what all the Member States are doing, which I think is quite interesting.

And what Access Now is arguing for is a human rights based approach instead of an ethics‑based approach for all the Member States.  We understand that there might not be a need for AI regulation at the European level, but there's definitely a need for a harmonized approach, to avoid a patchwork of regulations and different types of exceptions and rules and sandboxing.  And I would be really curious to hear where Spain is in the process.

>> PHILLIP MALLOC:  Just one quick point that fits into this conversation.  It seems like we jumped down a very European path, which I think is often the case; we do in general.  One of the points we raised in this discussion is how you create some kind of global, international context and comparability across the board, and which forum you choose for that.  Is it this forum, or some other such as the G7?  What does that look like, to make sure it isn't a kind of global patchwork?

>> LORENA JAUME-PALASI:  I'm always a bit concerned about the concept of harmonization, because it sounds so good, harmonization, but in the very end, what we get is some sort of common legal text that is always the lowest common denominator.  It's not the maximum that you get; it's always the lowest common denominator.  In the first place.  But second, having a common legal text does not imply that you'll have the same legal interpretation of that text, and we see that already within the European Union, with its Member States and very different interpretations, from Spain through Germany to Hungary.

That's the first thing, so I'm somewhat cautious, because I think it's good and important to acknowledge the legal culture, and the political expectations of a specific legal culture, all around the world.  And one of the things that we never discuss ‑‑ we always discuss the U.S. hegemony over the export of technology, but we're not discussing the European Union's hegemony over legal exports.  We are exporting and enshrining our law in other regions of the world that are very interested in having commercial exchange, and that are therefore bending their own legal cultures and traditions just to obtain adequacy, in order to be able to cooperate economically with the European Union, and that is not right either.

So I'm concerned about these types of approaches.  I like the approach of the Council of Europe because it's open to many other countries from other continents, and it's a possibility to enter a conversation.  But still, it always gives you a lowest common denominator, and I think it's good to have a variety of law, to have a form that can accommodate the different cultural expectations about law.

But ‑‑ and here I totally agree with Access Now ‑‑ right now there are many, many issues with this technology where we don't know exactly what the real conflicts are.  We don't know how far this technology is creating path dependency in human behavior.  We don't know much about the factors in technology that lead human beings to believe in the software or not.  We don't know under which circumstances specific forms of discrimination are happening.  And many legal cultures are very individualistic; from the point of view of law, democracies are really individualistic.  They only understand individuals.  They go by an individual rights approach.

They are having legal struggles that they are addressing, in my opinion, wrongly.  Why?  Because this technology is not about individuals.  This technology is about collectives.  It's about creating infrastructure.  And with that, we are already seeing many effects at a collective level, where specific types of collectives are being treated differently from other collectives, but with no individual harm.  So there's no way to redress that, and no way to prove at an individual level that there is a specific, illegitimate impact on an individual's life.

So these types of challenges are an area where democracies in Western countries can learn a lot from countries in other regions that are less individualistic and have a more societal, more collective approach to society.  And this is one of the things that I would like to see addressed in this type of forum, where we, Westerners, learn from other societies:  What is their take, and how do they apply these technologies?  What is their idea of fairness at that collectivistic level?

What is their collective rights approach?  There has been a whole conversation on collective rights coming from the Global South, always addressed at the UN level.  The UN has always been very cautious about trying to address that issue, because we thought that human rights were enough, but this technology is showing that no, they are not enough.

>> GONZALO LOPEZ-BARAJAS:  Okay, so we have time for a final intervention from each of the speakers, so please keep it brief.  Could we start with Fanny, for example?  Okay.  Karen, please?

>> KAREN REILLY:  So whatever regulations come into play, and this has been touched on by the other speakers, the thing I would like to see is consultation with the people affected.  If you are making medical technologies, if you're doing medical research on a given population, people with a specific disorder, you should ask them how they can be discriminated against.  How has data collected about them been used in the past, and how is it being used currently?  Because there may be some ongoing harms.

If you're using a system such as Centrelink's robo‑debt in Australia ‑‑ there are also systems being used to deny benefits to single parents and to people with disabilities, and they have resulted in people going without insulin and dying.  They've resulted in suicides, because people lost benefits.  In that case, the regulation should be swift.  The program should be stopped.  Once people start dying, the program should be stopped.  That should be regulated somehow.

And when it comes to people in other countries, the fact that Silicon Valley companies like Facebook are debating whether they should hold themselves accountable for facilitating genocides should tell us all that Silicon Valley should not be the arbiter of social good with technology.  They failed.  They need to step back and listen.

And so whatever regulations come into play, one founding principle should be:  Nothing about us without us.  That people making decisions about technology, the people coding the technology, should look like and think like the people being affected.

>> FANNY HIDVEGI:  I would draw two final conclusions.  One of them is absolutely not new, but we had almost no technology experts in the room, at least not in ours, and it's really not a new demand to have lawyers, policymakers, affected communities, and tech people in the same room, but I think this specific conversation really needs them to be involved.  That's one.

The second, to connect to what you just said:  I think lots of the tech companies are complaining now that all the policymakers and lawmakers are looking to them to solve all the issues, but for the last 10 years or even more, they've been feeding that line to all those lawmakers, that technology will solve it all.  So now we see how the failures happened, and I think we need to act swiftly to stop those failures and violations.

And finally, even if it's only a starting point, it would already be a really huge gain if we had, at this moment, a basic understanding that a human rights based approach to AI must be respected, and that this should not be in question for any stakeholder.

>> PHILLIP MALLOC:  Yeah, I'm not going to discuss the wheres and wherefores of the moral questions, because I'm not as well placed as the other speakers, but I'd like to bring up the point that there's an enormous amount of potential in solving some of the big questions out there in a global context.  If you look across the UN Sustainable Development Goals, the increasing use of technological innovation and digitalization is a key enabler for all the solutions put forward there.  This is not to undermine the fact that there are significant questions, but I think it's also pertinent for us to bear in mind that the opportunities out there are incredibly great, and if we don't embrace those opportunities we may be doing ourselves somewhat of a disservice.

>> GONZALO LOPEZ-BARAJAS:  Okay.  We are coming to an end, so just to wrap up:  basically, we have been discussing transparency and explainability, which are different issues.  We also commented on the roles of Governments and businesses in relation to human rights, with both of them being affected and being part of the equation.

Regarding regulation, we mentioned that it is too early to regulate, so that we do not hamper innovation, but at the same time we have seen that any mechanism that starts discussing regulation or ethics has to involve, and count on, the people affected by it.

And to finish, I would just like to end on a positive note about all the possibilities that artificial intelligence and algorithms are bringing us.  Thank you very much for your attendance, and thank you to all the speakers.
