IGF 2021 – Day 4 – OF #40 The challenges of AI Human Rights Impact Assessments

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.

>> We all need it to be open and safe. We all want to trust.

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> We are all united.

>> EMIL LINDBLAD KERNELL: So, hi, everyone. Just bear with us a few moments. Since this was I think a little bit delayed because of the previous session. So we're just trying to get everything in order. Please just bear with us.

So, again, we will start very soon. But there were I think some difficulties. Please, just bear with us another minute.

Again, apologies for this. We will soon get started. But, unfortunately, our moderator has been locked out of the session. But we're handling it. And we'll be there very soon.

>> ELISE LASSUS: Hello, everybody. Can you hear me? Can you see me well?

>> EMIL LINDBLAD KERNELL: Yes, we can.

>> ELISE LASSUS: Okay. Let me just change the video. I am really sorry; there have been some technical glitches. But I think we are all used to it. This is a new way of beginning each meeting now.

So, a few minutes late, I am very happy to welcome you to this session today. Let me just try to put my screen correctly, because now my computer is fighting me. Here we go. It's working.

So my name is Elise Lassus. I am a researcher at the EU Agency for Fundamental Rights. I am very happy to be welcoming all of you for this Open Forum session on the Human Rights Impact Assessment of Artificial Intelligence.

So I am very happy because I am joined today by great experts who have dedicated time and effort to researching this topic from different angles. I'm also very happy because this is a topic that is crucial in order to ensure that artificial intelligence systems and related technologies are used safely, from a fundamental rights perspective, for all individuals. Yet it's a topic still under discussion, one that still presents a lot of complexity and is not discussed enough at national and international levels.

So what we've seen for the past decades is, indeed, international organisations, public authorities, and experts from the academic field and from Civil Society Organisations calling for Human Rights Impact Assessments. It's not simple. Compliance with fundamental rights can't be automated or hard-coded into computer software.

On the one side, there's a need to ensure that AI is safe for everybody. On the other side, the multiplicity of AI systems and the multiplicity of technologies complicate the design of effective impact assessments.

At the EU Agency for Fundamental Rights, we conducted interviews with around 100 public and private sector organisations in 2019. When we asked users about prior testing, the majority of respondents referred either to technical prior testing or to data protection impact assessments. Fundamental rights were rarely addressed.

So what we see is that there is a lack of understanding of how such Human Rights Impact Assessments can be developed, and also a lack of awareness, from both individuals and developers, of the potential impact of artificial intelligence systems on individuals.

But thankfully, with us today we have experts who will be able to enlighten us on these very complex questions. So it is my pleasure to welcome with us today K, a digital rights researcher who works at the intersection of human rights and technologies, often working with and serving the most vulnerable members of Myanmar's society. She has worked with journalists, digital rights activists, Human Rights Defenders, and Civil Society Organisations.

We also have with us today Emil Lindblad Kernell. Emil is an adviser in the Human Rights and Business Department at the Danish Institute for Human Rights. Emil leads the department's priority on Digital Technologies and Human Rights. He also has experience working directly with companies within the department's responsible Value Chains Programme on their human rights due diligence efforts. He has also worked on other projects that focus on the corporate responsibility to respect human rights.

As part of his day-to-day work, Emil has conducted Human Rights Impact Assessments and provided other human rights advisory services.

I also have the pleasure to welcome Lorna McGregor, a Professor of International Human Rights Law. She is the Director of the ESRC Human Rights, Big Data and Technology Project at the University of Essex. She is a Co-chair of the International Law Association Study Group on international responsibility under international law. She has previously held the position of Commissioner of the Equality and Human Rights Commission in the UK, and she is a Trustee of the AIRE Centre. Her current research focuses on data analytics and new and emerging technologies, including artificial intelligence, and human rights.

Last but not least, we have the pleasure to welcome Alessandro Mantelero, Associate Professor of Private Law and Law & Technology at the Polytechnic University of Turin. He is a scientific expert on AI, data protection, and human rights with the Committee on Artificial Intelligence and under Convention 108. He has served as an expert on data regulation for several national and international organisations, including the United Nations, the EU Agency for Fundamental Rights, and the American Chamber of Commerce in Italy. Finally, Alessandro is Associate Editor of the "Computer Law & Security Review" and a member of its editorial board.

Before we begin, one point with regard to the structure of our discussion today. I will ask each of our speakers some questions so that we can set up the discussion and identify the challenges for Human Rights Impact Assessments of AI systems. But I would like to invite you to put questions in the chat. This is an opportunity for you to point out the challenges based on your own expertise and experience. And we are very much looking forward to hearing from you and discussing this topic with our experts today.

So my first question will go to you, K. It has been demonstrated by an increasing number of cases that the use of artificial intelligence can have a direct and yet often unnoticed impact on individuals. In cases where public authorities and citizens are in conflict or confrontation, the use of AI-based surveillance tools can directly interfere with individuals' rights and freedoms.

From a rights holder perspective, could you give us some examples of implementations of such technologies that have resulted in violations of individuals' rights and freedoms? I would be very interested to learn, based on your experience and your work as a digital rights activist, about the types of AI systems that have resulted in interferences with individuals' rights and freedoms. So the floor is yours, K.

>> K: Thank you so much. Thank you for inviting me as well. This is my first IGF, so I'm so happy to be here. So before I start, I just wanted to mention that today is actually the (?) for the lives we have lost since February. Thanks for all of your questions, all of the good questions. I'll make sure that I cover all of that in a short period of time.

So when we talk about these AI-based technologies, they are often masked behind the narrative of developing a community or a city. With this, I'm going to give an example such as the smart city. And when we talk about the smart city, there are technologies from GPS-based waste management, traffic control, and smart billing, to improved street lighting and other general city improvements. But of all of the AI and data-based projects, CCTV with facial recognition is part of the main conversation as well.

So when we talk about AI systems, it is also important to take the context and how countries operate into consideration, especially when it comes to the government and authoritarian states. What we are seeing in Myanmar is not only the violation of individuals' rights and freedoms, but rather the whole population's rights and freedoms being violated. I'm not trying to say that privacy is expected. But the severity of the violation is really immeasurable since the coup.

When it comes to the policies, the concern that we have, what we are seeing on the ground, is the restriction of freedom of expression, freedom of movement, and freedom of assembly. So the CCTVs, you know, with or without facial recognition systems, are being used mainly to target the pro-democracy activists who have been organizing peaceful protests against the military since February.

I mean, when we talk about AI systems and surveillance technology, this is not just, like, an issue in countries like Myanmar. So, for example, in the U.S., the authorities have used facial recognition to track down activists. We saw that in 2020. The point I'm trying to make here is how the violation happens, whether it's a developed country or a developing country. So the country's values and principles toward human rights really matter, and the level of severity is different. And, you know, sometimes I'll come across people who simply say it's a matter of adding a data protection law or, you know, including, you know, legal frameworks to protect citizens. It is not that simple in the majority of the world, where these laws and pieces of legislation are, you know, either absent or not implemented as they are supposed to be.

So context really matters, and, again, how they operate also matters. These should be considered before there's any push for AI-driven or, you know, surveillance infrastructure. For example, in Myanmar, people are losing their loved ones overnight. You know, at the safe (?) the pro-democracy activists and human rights defenders could face at least 10 to 60 years of imprisonment.

So just going back to AI and surveillance technology, there is a lot of, you know, surveillance technology that already exists, such as interception systems and facial recognition, you know, CCTVs. And when we think about what individuals can do, we can only stand up for the right to privacy and demand that the authorities not (?) intrusive technologies that would exist in our daily lives. So, together with Article 19, the organisation that I am presenting on behalf of here, the digital rights collaborative, we'll be publishing research that focuses on the technology we are seeing on the ground, mainly in three major cities. That will be coming out early next year. So please be on the lookout for it. And thank you. And back to you.

>> ELISE LASSUS: Thank you very much for this overview, and for your words. You really point out the fact that, depending on the context and depending on the type of surveillance and software that may be used, the use of AI technologies can have a very large impact on all fundamental rights.

And on this, I would like to turn to you, Lorna, because you have conducted research on the legal frameworks that apply to AI users. What were the main challenges that you came across, voiced by public and private organisations, in conducting proper impact assessments of AI systems? Do you think that there is too much focus on data protection? We've just heard from K that it's not only data protection that may be impacted; it's a wide range of fundamental rights.

So from your point of view, based on your research, what are the shortcomings? Where do you see shortcomings for the use of artificial intelligence? Thank you.

>> LORNA MCGREGOR: Thank you so much for the invitation to join this interesting panel. With the research we carried out including with the Danish Institute, what we see as the starting point is the question around what is the impact we're actually trying to capture through impact assessments and other regulatory and governance initiatives? I think it's already been highlighted by you, and by K, that we're only really beginning to understand the full human rights implications posed by emerging technologies including artificial intelligence.

And so what we can understand is that they're complex and they are not only related to the nature of the technologies, but they're shaped by the actors, the context, and the purpose of use.

And so what we know is that there is a lot of opaqueness, a lack of transparency, around the fact of use and around how the integration of these technologies into wider systems by public and private actors is impacting on human rights.

So what that means when we think about impact assessments, if they are tools to help us identify potential and actual human rights harm, and a tool in accountability and prevention of human rights violations, is that we then have to ask: how can these impact assessments actually identify potential or actual human rights harm? What is the expertise that we need to be able to carry out these impact assessments? And can we actually capture this full human rights impact through the types of impact assessments that already exist or are being proposed?

And I think when we look at the tech ecosystem right now, we can see that many of the existing forms of impact assessment that exist in other sectors also apply in the tech sector. So there's not a clean slate in relation to impact assessments. We've got ethical impact assessments, equality impact assessments, environmental, privacy, and DPIAs, Data Protection Impact Assessments, as you already said. And Human Rights Impact Assessments drawing from the UN Guiding Principles on Business and Human Rights.

While they all use the term impact assessment, they differ in legal basis and scope, in whether they're mandatory or not, and in their purpose and methodology. And what I would say is I don't think that we yet have an example where we can say, here's an impact assessment that really captures the full human rights impact of new and emerging technologies, or where we can point to a way in which these impact assessments are integrated and correspond to each other so that we can really see the full impact.

So we've been looking at the relationship between data protection impact assessments and Human Rights Impact Assessments, really using this as a case study to think about how these impact assessments relate to each other, and to try to understand whether they should be integrated or whether they can co-exist in a more effective way to really try to capture the full human rights impact of the use of new and emerging technologies.

Now, why it's important to look at data protection impact assessments is it's absolutely right that they cannot cover the full tech life cycle. They're only about data processing. But why it's important to look at their role is because they are required in legislation already in a number of states including in Europe. And other states are considering similar forms of impact assessments when they're developing their privacy and data protection laws. So they are an important part of this ecosystem. But, of course, not comprehensive.

But what we see with DPIAs, and these are some of our preliminary findings, is that, first of all, they are supposed to cover fundamental rights and freedoms. So they are supposed to be about more than privacy. But what we find in our empirical research is that most people we speak to think of them as a privacy tool, about identifying what the impact on privacy is. So they're seen as much narrower tools than they are, in fact, supposed to be, where they are supposed to look at fundamental rights. And in the tech ecosystem, that becomes even narrower, because then there's often a discussion about what the salient harms are. And so we end up very focused just on privacy, or on privacy, freedom of expression, and discrimination, but without thinking about that full impact and the context in which different tech is used, which will likely be broader.

What we also find in some of our interviews is that DPIAs are often spoken about as compliance exercises, or tick-box exercises. And people have spoken about whether the teams that carry these out have the human rights expertise to capture the full human rights impact, as I mentioned at the beginning. They often talk about the human rights team as maybe based in another part of the organisation.

So there are questions there about wider human rights expertise. There are also issues that seem to come up around scale and how frequent DPIAs are. Are they one-time exercises? Are they revisited over the lifecycle of a technology? And so what we find from this, just to finish up, is that there is a lot of potential to strengthen DPIAs, to make them stronger in terms of what they could capture in terms of human rights. But there are some substantive and structural challenges to making them more effective human rights tools. There does seem to be a lot of scope to strengthen them so that, when they are conducted, they maximize the human rights impact that they're able to capture. But importantly, again, remembering that they only cover certain dimensions of the human rights impact of the tech ecosystem, it's very important to think about what their relationship is to other types of Human Rights Impact Assessments, to try and see whether there can be bridges to strengthen these relationships.

>> ELISE LASSUS: Yeah, thank you very much for pointing that out. This really echoes the research that we also conducted at the EU Agency for Fundamental Rights: this focus on data protection and nondiscrimination, and the need to have a global overview of the impact when Human Rights Impact Assessments are conducted.

And here, I'd like to turn to Emil for my next question, because you have conducted research on Human Rights Impact Assessments. And I'd like to ask you: what can be expected from companies in practical terms when it comes to human rights due diligence in relation to AI? What are the knowledge and guidance gaps that still remain to be filled in order to ensure full respect for the full spectrum of human rights in the development and in the use of artificial intelligence?

>> EMIL LINDBLAD KERNELL: Thank you, Elise and K. For anyone listening to what I'm about to say, the guidance that we put out almost exactly a year ago, on human rights impact assessments of digital activities as we call it, of course includes AI from our perspective. It was not looking at regulation of this kind of methodology, but rather looking at it as a methodology and a tool, let's say, in the human rights due diligence toolbox, really focused on the UN Guiding Principles on Business and Human Rights, and looking at sort of what the expectations and requirements on businesses are with the current frameworks, notwithstanding what will come out of EU regulations.

The first question you asked was what we can expect from companies when it comes to human rights due diligence as it relates to AI. And I think what was very noticeable in developing our guidance is that there were a lot of, let's say, demands or movements to push companies to do impact assessments, which are framed as the tool in and of itself: it's something you do very clearly, not this abstract process, you should do impact assessments. Actually, what we found is that in this case we had to produce a very long introduction, essentially, because we really had to situate this: whichever kind of impact assessment you're talking about, how is it situated in relation to your ongoing processes of simply identifying whether there are risks with the technology that you're developing?

I think, you know, K's example is a really good one. I mean, of course, if you are developing facial recognition and it's meant to be used to identify protesters, perhaps you are quite aware that there might be some risks here. But a lot of these others, you know, the traffic control that was mentioned, can probably be used in the same way, also to monitor demonstrations or whatever it might be. But I think those developers are not very mature in their general due diligence, in terms of simply understanding what the contexts are in which we imagine this can be used.

So also, Lorna was talking about the tech ecosystem. I think what some of these impact assessment methodologies maybe miss is that we want the developers to think about human rights, but if it's too narrow, then it's like you haven't even considered how exactly this will be used and in what context.

So what we proposed in our Impact Assessment Guidance is that, in order to conduct a Human Rights Impact Assessment according to our methodology, you have to decide on a context to look at. Because it is almost impossible, I think, to do meaningful consultation with affected stakeholders or rights holders if you have not decided which context we're looking at. A smart city in Montreal will have one set of issues and another city will have others. We cannot assess smart cities in general, only a smart city; we have to think about where this is meant to be deployed. That gets me to another point, and then I'll stop there and hear from Alessandro.

Another piece we spent a lot of time on is also stakeholder engagement. We need to challenge ourselves in the human rights community: we can all demand, and we all should demand, proper, meaningful stakeholder engagement. We also then need to see when that is very difficult. If we want very early assessments, I think they will naturally be more abstract, and it will be more difficult to do this very targeted stakeholder engagement. If we want assessments at a later stage, that might be easier. So we need to think about that when we think about regulation also. At what stage should this happen? Is it only a first initial risk assessment that can then be scaled into something else in the cases where risk was identified? Maybe that's the way to go about it.

But I'll stop there.

>> ELISE LASSUS: Thank you very much. That was very concise and to the point, and really adds, I think, to our discussion so far. So we have looked at ‑‑ can you hear me? I had ‑‑

>> EMIL LINDBLAD KERNELL: Yes.

>> ELISE LASSUS: We have had an overview of the seriousness of the impact on fundamental human rights for individuals. Then, you know, Lorna reminded us not to look only into data protection and to conduct a really in-depth assessment that is not a checklist exercise, and to cover all fundamental rights. And you are adding to the discussion the importance of the context: AI systems in one context, for one purpose or another purpose, are not the same and should not be assessed in the same way. And this also includes the type of stakeholders that, depending on the expertise, will be involved in the impact assessment exercise.

So now I'm turning to our last speaker, Alessandro. You have worked very extensively on these questions, trying to solve the different issues that have been raised by the speakers so far. So, based on all of the work that you've conducted, what is, in your view, the most challenging of the criteria for a Human Rights Impact Assessment? And also, what do you think are the elements that constitute the absolute minimum criteria that any AI developer should take into account before launching a new AI system? Alessandro, the floor is yours.

>> ALESSANDRO MANTELERO: It's a great pleasure to have this discussion on Human Rights Impact Assessment, because, as you mentioned at the beginning, it should be at the centre of the ongoing debate on AI regulation. And, unfortunately, it's not so debated.

I think that, as already mentioned by Emil, there's not a perfect solution for Human Rights Impact Assessment. Human Rights Impact Assessment is by (?) and, I can say, is even more contextual in the context of AI. Because, as also mentioned by Lorna, there's a difference between the impact assessment and the Human Rights Impact Assessment. This is something that we also saw in a 2020 project in which we investigated the different models of data protection impact assessment adopted by several national data protection authorities.

And in all these models, there was a blank space saying you also have to check if there is a potential impact on human rights. That is not exactly the idea of an impact assessment.

On the other hand, there are very robust models of impact assessment. For instance, Emil and his organisation have worked a lot on this topic, and there are very concrete applications. I think the main problem is that the human rights impact assessments that we are used to applying to technology are in many cases based on specific kinds of risk. Risks that in many (?) are created by a certain kind of industry. And the impact is on a population, the impact is on houses, the impact is on our society. A very contextual society.

In the context of AI, it's a bit different. For this reason, I think we cannot merely extend the existing models of impact assessment to AI, but have to partially reconsider the model. Because when we talk about AI, we talk about tools that are developed at a global scale and also distributed at a global scale.

On the other hand, in many cases the scale is smaller compared to the impact of an industrial plant in a specific area. So a new system of video surveillance, a new (?) with AI, new smart toys, or a new smart locker is something that, of course, may have several different impacts. For instance, in the work that we carried out on impact assessment, Human Rights Impact Assessment.

In AI, we analyzed (?) by AI services, AI abilities. But at the same time, the impact is more limited. It may affect the interaction with the kids, it may affect the manner in which the kids learn, but it's limited to the kids. Nothing like the impact that a big industrial plant has in a specific region.

So I think this different kind of scale is also reflected in the main goal of Human Rights Impact Assessment. Traditional Human Rights Impact Assessment has as its main goal to support policy awareness and to create new policies in order to have better compliance with human rights. But when, like in the Artificial Intelligence Act proposal or the Council of Europe proposal on the future ‑‑ when we create compliance, when we create a specific obligation related to the management of risk, we need more specific tools. Tools that provide a sort of quantification of risk, with all the limitations that exist in this quantification, of course, because it's not exactly a context in which we can quantify properly when we talk about human rights. But it is something that is necessary. Because if we talk about high risk and there are some obligations for high risk, you have to define what high risk is. You have to assess the level of risk. The same goes for the legislator. When the European Commission says this application is high risk, why is it high risk? Based on which kind of assessment? It's not only a political statement or a personal feeling. It should be demonstrated.

So in the model that I've designed in this article, which I can also share in the chat, I try to figure out the level of risk considering the likelihood and the severity. So the traditional variables that are used for risk assessment, but contextualized for AI.
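As a purely illustrative aid, the sketch below shows what a likelihood-by-severity quantification of this kind could look like in code. It is a minimal, hypothetical example: the labels, weights, and thresholds are assumptions made for illustration and are not taken from the model discussed above.

```python
# Minimal sketch (hypothetical labels and thresholds, not the speaker's published model):
# combine the two traditional risk-assessment variables, likelihood and severity,
# into a qualitative risk band of the kind a "high risk" obligation would rely on.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
SEVERITY = {"limited": 1, "serious": 2, "severe": 3, "irreversible": 4}

def risk_level(likelihood: str, severity: str) -> str:
    """Multiply the two scores and map the result to an illustrative risk band."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 9:
        return "high risk"      # e.g. could trigger stricter obligations
    if score >= 4:
        return "medium risk"
    return "low risk"

# Example: a deployment judged likely to cause irreversible harm in its context.
print(risk_level("likely", "irreversible"))  # 3 * 4 = 12 -> "high risk"
```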

As mentioned by Emil, and then I'll close, an important role is played by participation and by experts. Because the experts by themselves cannot fully understand the context without interaction with the stakeholders and with the organisations that are potentially affected by the AI application in that context.

>> ELISE LASSUS: Thank you very much. This is extremely interesting. I see that you really pointed out two key elements: the global nature of the use of AI systems, which requires us to rethink the way Human Rights Impact Assessments have functioned up until now, and the importance of being able to quantify the risk. These are two key elements.

We have some questions in the chat. And I can see that the second question actually relates directly to what Emil and Alessandro were saying. I'll begin with the first question in the chat, from Allan Ochola, I hope I didn't mispronounce your name, directed to Lorna: what model do you think can serve as good practice in artificial intelligence impact assessment? Please, Lorna.

>> LORNA MCGREGOR: Thanks so much for the question. So I think if we were imagining that we were going to start with a clean slate, and we could just construct an impact assessment now for the tech sector, I think we would be thinking about one type of impact assessment where we could really think through the methodology and exactly how we could identify the human rights impact. But I think we're not in that space. The challenge is that we have to figure out how we maximize and use the existing types of impact assessments that we already have, and how we think about the critiques of them, particularly these ideas that they are tick-box exercises, that there's not the participation or the expertise that Alessandro has just spoken about. We need to think about how to integrate those types of features into what we already have. Then I think we need to think about the bridges between them. If, when we take everything we have, public organisations and private organisations are doing what they should be doing with their existing obligations and responsibilities, does that get us to a place where we have sort of an effective ecosystem of impact assessments? Or do we need some new ones on top of that? But it's a very complex question, because they all have such different features; they're regulated by different actors, or they're not mandatory. So we really need to try and look at that stream and think about how we work with what we have, and then question whether we need new ones.

I certainly think it's really critical to be thinking about participation, to be thinking about oversight of the conduct of impact assessments, and about the type of expertise we need for the tech sector, which is not just technological expertise, but also expertise in context and in understanding harm, which is, you know, other types of social science expertise.

But I think what cuts across all of them is regularity. My impression, if you look at the UN GPs and how they've been interpreted, and the work by the Danish Institute, is that what we're thinking about is conducting impact assessments regularly. So businesses in the UN GP context are revisiting them, you know, at the point when they conceptualize an activity, enter a market, or design a new product. There's an impact assessment undertaken at that point in time. But then it is repeated, to see what is actually happening as the business continues in this new market, or as a product develops and is taken to market. You have to keep revisiting to see what happens, either stopping what you're doing or changing it depending on the human rights impact.

But my impression from the research we have been carrying out is that impact assessments, regardless of their type, are really seen as one-off things, done at the beginning of an activity. And so I think that regularity is crucial for all impact assessments: ensuring that they're revisited, and that the activity that is being assessed is reevaluated.

>> ELISE LASSUS: Thank you very much. This is a crucial point we hadn't discussed yet, and it is absolutely fundamental: AI is constantly changing, and the impact that it can have on individuals is also constantly changing. So there is a need not just to conduct an impact assessment once, before a system is launched, but to regularly assess whether new impacts on individuals can emerge.

You also talked about the importance of having the right stakeholders and the right expertise coming into play. This links to the question from Herman Ramos: who must be responsible for conducting a review of the existing law, human rights, and life in the context of AI systems?

So Herman is asking: should a data protection authority be capable of that? Should they have this within their mandate, or should it be other stakeholders? So maybe I'd like to ask Emil and Alessandro to say a few words about this. Then, if there are no other questions, I will also have a question for K. So please, Emil maybe first, then Alessandro.

>> EMIL LINDBLAD KERNELL: Yeah, very quickly, this goes back to discussions I've had with Lorna and her colleagues in the past about why we thought it was a great idea to do this research: when you read the GDPR, it looks like it could be great for human rights. I don't know that we have, you know, all the information to say that it really doesn't serve that purpose very well, but it definitely seems that it doesn't. So, I mean, that's a piece of trying to have that discussion.

And then I think there's a discussion, which maybe the DPAs should have, about whether data protection regulation should even aspire to do this, to also cover more things, or whether it should not. But I think, you know, it at least should not pretend to cover the full range of fundamental rights and then not do it. So I think it's a discussion to be had, one where all stakeholders should be invited to respond to your question, and where we should sort of pay attention.

And finally, I think this is true now for the mushrooming of regulations: the AI regulation, perhaps an AI convention, mandatory human rights due diligence. Is there any coherence here? I think that discussion does need to be had. Then who is ultimately responsible, I'm not sure.

>> ELISE LASSUS: Thanks for this. Alessandro, do you want to bring some further clarification?

>> ALESSANDRO MANTELERO: Yeah. I think that, if the question focuses on data protection and the level of protection of human rights in the context of data processing, then according to Article 35 of the GDPR, of course, data protection authorities are entitled to this power.

Another problem is whether these authorities have the practice, skills, and competence to do that. Because, as mentioned in your research, there is also little evidence that human rights are at the core of the data protection impact assessment.

And moreover, this works in the European Union context, although in quite different manners in different countries. Because we also have to remember that data protection authorities are not always the same. There are different kinds of structures, different kinds of powers, different kinds of organisation, et cetera.

And outside of Europe, there is not this kind of approach. Outside of Europe, there is no such reference in many of the data protection laws, as also mentioned by Emil. Another element is the future overlap between the AI regulation and data protection regulation, and so between AI authorities and data protection authorities. So I think that it's something that they can do, but they are not necessarily in a position to do it. We don't know whether it will happen in the future.

>> ELISE LASSUS: Thank you for this. You're absolutely right. And it's actually a perfect link to the question that I wanted to ask K. Because, you know, we're talking about Human Rights Impact Assessment, but for specific tools that are being used for surveillance purposes, do you think that a Human Rights Impact Assessment could be sufficient to prevent fundamental rights violations? In the introduction of this panel, you referred to very serious violations in Myanmar for human rights activists, for Civil Society organisations, and for individuals in general. Do you think ‑‑ and linking this, as Alessandro was referring to, to the draft AI regulation that is currently proposing a complete ban on certain uses of AI ‑‑ do you think that for specific AI tools, where the potential impact on individuals' rights and freedoms is too great, a Human Rights Impact Assessment could not solve the problem and they should simply be banned? So, K, if you would like to say a few words about this.

>> K: Thank you. This is also a really important question. Before I get into the Human Rights Impact Assessment, I just wanted to talk a bit about the data protection and about the stakeholders.

I just thought about a perfect example that I have seen in Myanmar. So before the coup, you know, we had some sort of, like, legislation, somewhat working authorities, government bodies. Companies came into our country around 2014 after we passed the telecommunications law. And I think the Human Rights Impact Assessment ‑‑ I might be wrong on this, but we didn't have any data protection legal framework in Myanmar. But when, for example, the authorities requested data, the companies all had their own internal data protection policies, for example, when it came to dealing with police requesting data, et cetera. And that's the kind of practice that I have seen: how the businesses try to have the kind of internal policy that protects the data of their users.

But, of course, you know, if there is an authoritarian state or the situation changes dramatically, even having the internal policy is not sufficient anymore.

So going back to the Human Rights Impact Assessment, whether it is sufficient: it is really important as, sort of, the first basic step that the enterprise, the business, or all of the other stakeholders do the due diligence before implementing or testing these kinds of technologies.

In terms of whether the AI technologies should be banned or not, I'm not sure if I have enough knowledge to say, you know, what should exist and what shouldn't. But when we talk about these technologies, it is also really important that we demystify AI and, you know, the intrusive technologies. Because most of the time, it is also really important for Civil Society to better understand so they can advocate on these issues. But what I am seeing is that it has all been wrapped up in, like, high-tech technology and, like, the word AI itself. So I feel like we really need to demystify this so that people can simply understand, and not only Civil Society but, you know, a general member of the public can have a say in this: hey, you know, these are my rights, I have the right to privacy, and can say it for themselves.

>> ELISE LASSUS: Thank you very much for this clarification. I'm also looking at the time. I see there is also a question from Veronica Stefan in relation to the pharmaceutical industry, so I'm just going to read that out. I think, Alessandro, you already put in your very short reply; maybe you want to elaborate on this. Do you expect any future where AI technologies could be regulated like the pharmaceutical industry? Maybe having different approaches, such as medicines versus supplements, assuming that AI technologies that are identified as having a higher risk might be regulated and assessed the way we do medicines? So, regulating AI the way we regulate new medicines and devices, as we have seen. Maybe Alessandro first, then Emil, and then I see there's a last question also for K.

>> ALESSANDRO MANTELERO: This is an important issue to look at. In my forthcoming book, I have a section based on the experience, for instance, of the committees in the context of medicine regulation, and on how this experience can also be used for AI.

So in terms of risk and the assessment, of course, this is an important context to consider. But there are some differences. I think the first difference is that there is a different technological context. In pharma, we have hospitals, clinics, research centres, et cetera. AI can be developed by any guy who has enough background or access to tools, and sold to a local municipality.

The second point is that in pharma, due to the investment necessary, we have a few products for very specific purposes. AI is cheaper as a technology now; you can have many products for very, very different purposes.

And the point is that the pharma industry is very regulated in terms of the testing phase, trials, et cetera. I don't know if we can imagine (?) because you have very, very basic AI applications and very, very risky applications. So it's not so easy. Please consider that also for pharma, the European Union has had some challenges with overregulating this sector.

Finally, the pharma product and the pharma industry are, per se, global, in many senses, while the product of AI and its impact on human rights is very contextual. Because the same product that is used in one context, for instance predictive policing or tools for education, does not fit well in another context. So it's not easy, because pharma is a global product in many cases.

>> ELISE LASSUS: Thanks for this. Emil, you raised your hand. You wanted to ‑‑

>> EMIL LINDBLAD KERNELL: I wanted to make a very quick point. Maybe it's lost in the updated draft regulation, but originally, in the white paper from the EU, they very clearly segmented sort of risk sectors, risk applications, and risk contexts, let's say. I think here, of course, AI is a technology and pharma is a sector. So to think: if you develop any AI system for the health care sector, maybe there are more requirements on explaining why you don't conduct certain assessments, for example. I mean, that might be a way to treat it. That, yes, if you can explain that the AI system in the health care sector is only there to help with scheduling appointments, perhaps you can say there was no need to focus much more on this. Otherwise, we can expect the right to health might be in the crosshairs and, therefore, maybe the attention level should go up. Yeah. Just a short point.

>> ELISE LASSUS: Thanks a lot for this. So, because we are at the end of our panel, I think we can take a few more minutes, because I had some technical issues, so I will take the opportunity to prolong the discussion a little bit. I see that there is a last question for K, and it's a good one, because it's asking you: ideally, what do you think can be expected from companies? We have talked about public authorities and the rollout of legislation, but it is true that companies also have developed, you know, some internal checklists in order to ensure that fundamental rights are protected.

So, you know, in an ideal scenario ‑‑ that's the question from Rui Pereira ‑‑ what do you think?

>> K: There is research done by the Danish Institute for Human Rights and other organisations; I just dropped it in the chat. What we really want ‑‑ and this is not limited to Myanmar's situation ‑‑ what we really want to see from the companies when they're entering a country is for them to really understand the country's context. Because, you know, all of the countries have, like, complex histories and different stakeholders. Sometimes it can lead to conflict, and even within the conflict, there can be numerous stakeholders.

So I will say the first part is to understand the countries that they are entering. Also, I'm going to refer to the ICT sector-wide impact assessment that has been done. What we really want to see from the companies also is, if there's no protection, and if they decide that we will be bringing more impact by entering this market in this country, then I would really expect to see, you know, data protection, the internal policies that really respect the users' rights, and also transparency.

I will say, you know, some did a lot of reports every year, but we didn't see any of those similar, you know, actions taken by other tech companies. And even when we talk about ‑‑ I know that we are talking mainly about the kind of ‑‑ I've been talking about the ISV companies. If we look at social media companies, even though there's a transparency report, how transparent are they? It's also really important to ask these questions. They will say the government had requested this amount of data. If you work on platform accountability and these issues in your country, you'll know the government did not just ask to request data.

So when we talk about companies, we need to look at a broader range of companies. And I really want to see, you know, them really respecting transparency and also, you know, remedy, oversight, and so on. That will be something that I would really want to see from the companies. And, I mean, I'm just going to refer to everything from the ICT Impact Assessment. Please do go there and read it. It's a really, really amazing and very important report, indeed. Thank you.

>> ELISE LASSUS: Thank you very much for this. Indeed, we have a few links and references in the chat. So we all have our reading for the weekend.

So to wrap up, because a lot has been said today. And, you know, we will do our own work and put in writing all of our discussions and everything that was said today; it will be available on the IGF website. What I take from this discussion is really three key outputs. First, there is a crucial need to move away from the focus on data protection in order to look at the scale and seriousness of the impact on all fundamental rights of individuals. That's the first point.

The second point is that there is a need to rethink traditional Human Rights Impact Assessment tools because of the scale of the impact of AI. We need to be able not only to qualify but also to quantify the risk in order to make Human Rights Impact Assessments an effective support in identifying, assessing, and preventing impacts on individuals' rights and freedoms.

And finally, the key word that I think has been mentioned by all speakers today is context. The context of the use of AI is really key in order to define case-by-case Human Rights Impact Assessments.

So on this note, I would like to thank all of you for joining us today, and for your questions in the chat. I'd like to give special thanks to our speakers, who have provided us with very good insights so that we can reflect a bit more on how to progress on the question of the Human Rights Impact Assessment.

Enjoy the rest of the IGF. I think today's the last day. And have a wonderful weekend. Many thanks, again, to all of you.

>> EMIL LINDBLAD KERNELL: Thank you, bye.

>> Thank you, everyone.