IGF 2021 – Day 3 – WS #196 Human Rights Impact Assessments throughout AI lifecycle

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Good morning, everybody, welcome to our panel.  It's a pleasure to have you here with us today after a very long and very, very interesting event.  I'm excited to be moderating this panel on human rights impact assessments throughout the lifecycle of AI systems.  My name is Alexandru Circiumaru and I am the European public policy lead at the Ada Lovelace Institute, a research institute based in London with a mission to ensure AI and data work for people and society.

It's my pleasure to be moderating the panel today.  We have a very, very interesting panel, but before starting to introduce all of the speakers, I will go over some housekeeping rules so that everybody is aware of the structure of the event.  I will start by asking each of the panelists a broad question to get the debate started.

They will each have five to seven minutes to respond to this broad question, and once all panelists have taken the floor, I will open up the discussion to everybody else and I will take questions both from the chat and from Twitter, where I also think you can ask your questions using the session specific hashtag.

So there are two hashtags.  The first is #IGF2021, and the session-specific hashtag is #WS196, WS for workshop.  I'll also leave them in the chat for everybody to be able to use.

While one of the panelists is speaking, I will ask everybody else to mute and turn the camera off so they can have the floor completely to address the question that I ask them, and as I said, I will open the floor to everybody else and we will have one big discussion.  I would suggest being ready to take notes.  I have my paper and pen next to me because it will certainly be a very, very fruitful conversation, and everybody will have a lot to learn.

That being said, enough about me and the session.  I will start by introducing all of the panelists one by one, and then ask each of them a question in turn.  First on the list is Ms. Laura Galindo from the OECD.  As part of her work she coordinates the joint European Commission‑OECD database on national AI policies and conducts policy analysis of national AI strategies, policies and regulatory approaches from over 60 countries and the European Union.  Thank you, Laura, for being with us today, and I'm looking forward to asking you a question in just a minute after introducing the rest of the panelists.

Next on the list is Professor Frederik Zuiderveen Borgesius, who is, and I do apologize if I got the surname wrong, Professor of ICT and Law at Radboud University in the Netherlands.  He is affiliated with iHub, an interdisciplinary research hub on security, privacy and data governance.

His interests include privacy and data protection and discrimination, especially in the context of new technologies, and in 2019 the Council of Europe asked him to write a report on discrimination, AI and decision making.  And I have to say I have used his work extensively in the thesis that I finished writing in the summer, so it's a pleasure to have you here and I'm very excited to ask you a few questions.

Speaking of the Council of Europe, I will use that as a segue to introduce Mr. Kristian Bartholin, who is the head of the digital development unit and the Secretariat to the CAHAI, which is the ad hoc Committee on AI at the Council of Europe.  And I know there are quite a lot of developments coming out from the work of the CAHAI, so very excited to hear you speak in just a minute.

Then moving down the list of panelists, it's Daniel Leufer.  Daniel works as a Europe policy analyst at Access Now in the Brussels office, where he works on issues around AI and data protection, facial recognition and other biometrics.  During his fellowship, he worked with Access Now to develop aimyths.org, a resource to tackle the most common misconceptions about AI.  Daniel is somebody who I have also had the pleasure to know personally and I know about the work that he is doing, so I'm also very, very excited to hear more about what he has going on just before Christmas and the year end.

And then Cornelia Kutterer, who is Senior Director of EU Government Affairs at Microsoft, in Corporate, External and Legal Affairs.  In her role, she is responsible for privacy and AI policies in the European Union, with a focus on how new technologies impact society and how laws and regulatory frameworks will evolve to meet the expectations of society.  She is currently leading a team working on corporate affairs and regulatory policies, including competition, market and content regulation, responsible data use and privacy, as well as telecommunications.

I'm happy to call Cornelia a mentor, so it's very nice to be in the position to ask her questions.  I have learned a great deal from her, and I'm sure everybody present today will learn a lot from her remarks and from her answers to the questions.

Having introduced everybody, and potentially mispronounced some names in the process, and I am in the position of having a long, complicated name myself, so I know how that feels, I will give the floor to our first speaker, Ms. Laura Galindo.  The question to get everything started is that the OECD released its principles on Artificial Intelligence over two years ago now, back in May 2019 I think.

So I'm curious, and I think everybody else is curious too: where do you see things going right now?  What has happened in the past two years, and where do you see human rights impact assessments popping up in the work of various nations and in the European Union?

>> LAURA GALINDO: Thank you to the organizers for putting together this great panel and to my fellow panelists.  At the OECD, by now over 46 countries have adopted the OECD principles, and we have been developing a plethora of initiatives.  First and foremost is the OECD AI Policy Observatory, which gathers data, evidence-based policy analysis and trends to inform countries, but also anyone else, because it's publicly available, on how countries are pursuing their AI policy journeys with data and with policies.  And this is a collective effort, thanks, of course, to the contributions by countries, but also to insight from experts.

Ever since the adoption of the principles and the launch of the observatory, we have also launched a group of experts convening more than 200 members from all over the world with expertise in AI, and we put them in three different groups.  One of those is developing a classification system, a classification framework for AI systems, which basically looks at how to understand the different types of AI systems, as they raise different AI policy considerations and legal issues.

We hope to work together with the experts, but also with other intergovernmental organisations, on how to assess risk for different AI systems.  We hope to make the classification framework available to all early next year, so it is something to look out for, because we are going to calibrate it and hope to keep informing these developments and how to understand different AI systems.

The other work that the Working Group is doing which is relevant for this discussion is mapping the different categories of tools for accountability of AI systems, including human rights impact assessments.  We have seen that different AI actors, companies, trade unions and civil society have developed a plethora of tools for accountability.  We also see requirements in the current legislative proposals, so the aim is to understand what the different tools are and how they can be combined to inform emerging regulatory developments.

My fellow panelists will comment more on these, but one question is how these non-regulatory or regulatory tools will together inform implementation.  So this is something to look out for in the following months.  Last but not least, this year we launched our report on how states are implementing the OECD principles.  We found interesting insights, and I invite you to look at the report.  One of the main insights is that countries have moved from principles to practice through their national AI strategies by implementing, for instance, policies to invest in research and development, including foundational research for trustworthy AI, for more transparency, for explainability of AI systems.

Countries have implemented policies making AI compute capacity more accessible.  Countries are implementing a series of experimentation initiatives to promote regulatory sandboxes.  We are at an early stage, but this is something that is coming and is going to help inform all of these developments as we learn more from AI breakthroughs.

And, of course, one of the priorities for countries is AI skills and education, the future of work.  I would like to focus here because, when we look at human rights impact assessments and the need for socio-technical expertise, human capacity comes first.

And this is a priority for countries.  There is much more work to be done.  We are very keen to see what's going to happen with impact assessments.  There are some countries, like Canada, which in 2018 adopted its directive on automated decision making, but there is still much work to be done.  It has been implemented just recently, and there are only a few impact assessments and enforcement cases that we are seeing, so we are really at a very early stage on impact assessments and we are watching for what's going to happen.

In the EU there is the AI Act legislative proposal, a bill that puts human rights and civil liberties at the heart of the proposal.  In the U.S. there are sectoral approaches and initiatives, and even recently another legislative proposal on AI systems coming from Brazil.  So a lot is happening, and, again, navigating the complexity of AI policy and regulation is the aim of the policy observatory: to help countries, regulators and civil society navigate the complexity of these issues.  I will limit it to that.  Thank you so much.

>> MODERATOR: Thank you so much, Laura, that was very, very interesting.  I took some notes and am looking forward to asking some questions later on.

Moving on to Frederik Zuiderveen Borgesius: Laura has mentioned the EU AI Act, and you have written extensively about the European Union legal framework and how AI, as well as other emerging technologies, is going to impact it.  So the question for you is: where are human rights impact assessments in EU law, and what can we learn from how they are currently used, if they are used there at all?

>> FREDERIK ZUIDERVEEN BORGESIUS: Thank you, Alex.

I'm going to first answer under current law and then suggest a bit what we could consider for upcoming law.  So currently there is not really a hard requirement for a human rights impact assessment; however, in Europe we do have the best data protection law in the world for the moment, the GDPR, with a requirement for a data protection impact assessment.

So currently, organisations are required to do such a data protection impact assessment for almost any AI project.  The advantage is that, unlike self-regulation or ethics guidelines or suggestions by NGOs, which are all useful, this is a hard requirement.  And the GDPR suggests that an organisation doing such a data protection impact assessment should consider all fundamental rights.

So I would say for the moment, unless there are other hard requirements, let's just enforce the laws that we have, and, of course, for organisations, just comply with what we have: the data protection impact assessment requirement, non-discrimination law, et cetera.

There is some experience now, because already before the GDPR it was possible to do privacy impact assessments, and there is lots of literature and experience, et cetera, with impact assessments.

And when you do such an impact assessment, you also take technology into account and, very importantly, domain specific knowledge.  So if it is about databases used for immigrants, then people working in that field should be involved, and ideally also the affected people, and if it's an impact assessment about predictive policing, then domain specialists from that field should be involved, et cetera.

So I would say, at least for Europe, and this probably goes for the world: start with enforcing the rules that you have.  That is a good starting point.  I'm not saying that data protection impact assessments are ideal for AI projects, because their focus is still, not exclusively, but mostly on personal data related questions.

In the future, we probably need a hard requirement in law for an AI impact assessment.  Did we decide already to call it AIAA?  And when drafting such a requirement, we can take inspiration from other countries, from NGOs, and also from countries outside the European Union, and I think it should be written in the law that there is a requirement to involve legal specialists, technologists and people with domain specific knowledge.

The AI Act gives some hints in this direction.  I'm not completely convinced that the AI Act is the best way of regulating AI, but the train seems to be on the rails, and it seems not so likely that we start with a clean document again.  It seems to be a political reality, so I guess the best we can do is try to improve that proposal.  I will leave it at that for the moment.  Thank you.

>> MODERATOR: I like the AIAA, not to be confused with the AIA.  It's a lovely idea.  All of us will have to get used to this, but I like the AIAA, and we will have a discussion about what you mentioned about the AI Act and how we can improve it.

Moving on to Kristian Bartholin and to the importance given to human rights impact assessments by the CAHAI.  I hinted earlier that there are some developments coming up, some important developments coming out of the work of the CAHAI, and some are related to human rights impact assessments.

>> KRISTIAN BARTHOLIN: Thank you very much.

And, yes, I can confirm that the CAHAI finalized its work on the 2nd of December and has submitted its final report to the Committee of Ministers.  For the time being it is not a public document, but it will become one once the Committee of Ministers has discussed it at the beginning of next year.  I can give you some insights into it, obviously.

The CAHAI was asked, as you know already, to produce a feasibility study, which is a public document available on the Council of Europe's website, and that was done last year.  Then this year the focus was on identifying the elements that could become part of a legally binding and a non-legally binding instrument on the use of, I'm sorry, on the design, development and use of AI applications or AI systems in relation to human rights, democracy and the rule of law, the three areas in which the Council of Europe is the competent international organisation.

We have indeed, as part of that exercise, proposed to the Committee of Ministers that a legally binding instrument on AI, a kind of framework, if you like, should include a legal basis for the establishment of a human rights, rule of law and democracy impact assessment.  The impact assessment model itself would then be fleshed out in another instrument, in order to have the two not in the same instrument and to ensure that we can easily update the model itself.

As you know, and I don't want to go into the details of international law, treaties are very difficult to amend once they have entered into force.  It requires the parties to go through certain motions, and in order to have a better and more flexible approach, we have decided, at this stage at least, to keep the actual model apart from the legal basis, but the legal basis would be a binding one.

So those states that would like to join our Convention, and I would like to say that the Council of Europe's Conventions are open to more or less all states around the world, would then actually be able to, or rather be obliged to, ensure that in their domestic law they have the basis for creating an impact assessment of human rights, democracy and the rule of law.

As Professor Frederik Zuiderveen Borgesius mentioned before, I think the real issue is that we know that there is a need for impact assessment with regard to human rights but we do not necessarily have the legal basis.  This is one instrument that would potentially provide this legal basis for those states who are parties to it.

So that would be an improvement.  Then what would trigger this human rights and democracy impact assessment?  I think in the end we will have the HUDERAI model, but insofar as what we are working on now, what we are looking at is something which has to be done when there are clear and objective indications of relevant risks emanating from the application of an AI system.  So we would not have to do it if an AI system is dealing with your toast or your coffee brew, although coffee can be a human rights issue.  But when an AI system is used in a setting

where it is sensitive to human rights, the functioning of democracy, or respect for rule of law principles, then this formalized impact assessment should take place, and that requires that all AI systems undergo an initial review in order to determine whether or not they should be subjected to the formalized assessment.

It is also recommended that indications regarding the necessity for a more extensive assessment should be further developed.  That is also one of the recommendations of this ad hoc Committee on AI that you mentioned.  And finally, we should also look into whether the use of an AI system in a new or different context or for a different purpose, or other relevant changes to the system, should mean that a reassessment of the AI system is required.

So in that sense it would be a rolling undertaking, not just something you do at the very beginning or at the very end, but throughout the lifecycle of an AI system.  And, of course, if the AI system changes in some way, then it may actually go from being, for instance, not relevant for human rights to becoming very relevant for human rights, or vice versa.

So this is the approach.  Finally, I would say that the model should contain at least four main steps.  These are: risk identification, that is, identification of the relevant risks for human rights, democracy and the rule of law.  The second is the impact assessment itself.  The assessment of the impact should take into account the likelihood and severity of the effects on those rights and principles that I mentioned before.

The third is the governance assessment: there should be an assessment of the roles and responsibilities of duty bearers, rights holders and stakeholders in implementing and governing the mechanisms to mitigate the impact.  And as a fourth step there should be mitigation and evaluation, that is, identification of suitable mitigation measures and ensuring a continuous evaluation.

When you look into the impact assessment step, you will see that there are a number of elements the CAHAI has identified: the context and purpose of the AI system, the level of autonomy of the system, the underlying technology, the complexity of the AI system, the transparency and explainability of the system and the way it is used, human oversight and control mechanisms for the AI provider and AI user, data quality, system robustness and security, the involvement of vulnerable groups or persons, and the scale on which the system is used, its geographical and temporal scale.  That is also something that should be taken into account because, of course, scale has an impact on, let's say, the size, if you like, of a potential breach of human rights.  And then finally we should also look into the assessment of the likelihood and extent of potential harm and its potential reversibility, and whether or not it concerns a red line, a prohibited system or prohibited use of the system, as established by domestic or international law.

So these are the main features of the model that the Council of Europe will develop in the coming years.  Negotiations on the binding legal instrument that I mentioned before will probably begin in May 2022, and at the same time we will also look into the establishment of this human rights, democracy and rule of law impact assessment model.

The big question is, of course, that we have many actors in the field of AI, and Professor Frederik Zuiderveen Borgesius mentioned before the AI Act of the European Union.  We are also working closely with the OECD and UNESCO on AI issues.

So where would our HUDERAI model be different from the kind of impact assessments in the AI Act?  I think there is one major difference between our proposed legislation and the AI Act of the European Union: the AI Act is primarily concerned with the market for AI systems.

So this is about safety and other requirements for placing a product on the market, but our approach is not related to the system itself.  Our approach is related to the interplay between the system, human beings, and their rights and obligations under international law.

So this is a slightly different approach.  Also, as you know, the Council of Europe is not at all competent when it comes to market or economic issues, so we are only looking at it from the point of view of human rights, democracy and the rule of law.  So these are the main features, the main differences between our approach and the one in the AI Act.  I hope this clears up the matter somewhat.  Thank you.

>> MODERATOR: Thank you very much.  That was very, very interesting.  I took loads of notes.  I will be looking forward to May 2022 then, and I am very excited to see what the HUDERAI model will look like.

You mentioned the AI Act proposal of the European Union, and it has been part of the conversation since the beginning, so I will pass the floor to Daniel Leufer, who will talk about what human rights impact assessments look like in the context of the AI Act.  Having done extensive work on the file, it will be interesting to hear from him what place there is for impact assessments.

>> DANIEL LEUFER: Thanks for the invitation.  It's been a fascinating discussion.

Just to begin with some commiseration about the acronyms: I will point out that even though the AI Act has been referred to as the AIA, in the U.S. AIA has usually meant algorithmic impact assessment.  So there is already confusion.  We have a report where there was a discussion of the Canadian AIAs and the AI Act, so serious acronym difficulties ahead.

Also, just a general point before I get into the AI Act about human rights impact assessments, because I think everybody here has been working heavily on them.  An issue we have run into is the chicken and egg situation: everyone, I think, agrees that a system that poses a high risk to human rights should have an impact assessment, but you have to do an impact assessment to figure out if that needs to happen.

There are lots of different ways of approaching that, and Kristian outlined the idea put forward in the CAHAI, where Access Now has been an observer, of having an initial check to see if there is a risk posed, and then there could be a fuller assessment.  In the AI Act, we start from a different point, and I think we have to approach it differently.

The difference with the AI Act is that it has sort of pre-decided that certain use cases are high risk.  So the question of which systems need to have the full impact assessment is moot in the context of the AI Act, because there has already been this decision about which systems pose high risk.  We have another risk category where we have prohibitions, so unacceptable risk, but those systems should be neither on the market nor deployed, so there is no need for an assessment.

And then in Article 52 we have systems that pose a risk of manipulation.  So we have this risk categorization already, and we can talk about what needs to happen with those systems.  Now, as was pointed out, there is some form of impact assessment in the AI Act as it stands, which is called a conformity assessment.  And, just to not assume that everyone knows the details, the AI Act basically focuses primarily on two categories of actors: providers, so that's companies, for example, who are developing systems and putting them on the market, and then what it somewhat confusingly calls users, which are entities that are going to deploy a system; that does not refer to end users or affected people.

The conformity assessment is something that the providers do under the AI Act, so a company who is developing the system has to go through a series of primarily quality checks.  There is some mention of fundamental rights, but not to the degree that we would want.  And, again, this is product safety legislation.  So, as Frederik pointed out, it may not be the ideal way to regulate AI, but what we are looking at at the moment is how to make it as good, or as least bad, an instrument as possible.

So we are being pragmatic at this point, but we can have a discussion about whether there are other models to approach things.  The big issue for us is the following, and I will just drop something in the chat here, which is a joint statement that Access Now and 115 other civil society organisations signed calling for certain amendments to the AI Act to make it protect fundamental rights.

One thing that we pointed out there is that by placing all of the obligations on providers, you really fail to protect people's rights.  It's definitely the case that providers, the entities developing an AI system, have some capacity, and there is a need for them to assess how the system will impact people down the line.

You know, if you are looking at the data it's trained on, the design decisions, these all do have a downstream impact on the people who are affected, but the entity deploying the system is really, really central to how the system is going to impact fundamental rights.  If you are using a system in a particular cultural context, in a particular city, in a particular neighborhood of a city, if you are talking about a facial recognition system, for example, or a system that uses biometric verification to control entry to a venue or something like that, if it's in a particular neighborhood, if it's in a particular place, all of these things are going to have a massive impact on how that system will affect fundamental rights.

So the idea that the provider at that level could already anticipate all of those impacts is flawed.  That is not to say that the obligation there should be removed, but it should be complemented by an obligation at the user level.

So the question of the DPIA pops up: is the data protection impact assessment enough at that level?  The observation has been made that any system that processes personal data and falls under the Annex III list of systems would have to do a DPIA.  The DPIA obligation is not totally clear about when it needs to happen.

That is good, but it's not enough.  Because, as Frederik mentioned as well, personal data is probably a bit of a loophole there.  There will be some AI systems that do not process personal data, but they could still fall under the high risk list.  There will be other ones that do process personal data but the provider says that they don't.  We see this often with emotion recognition systems and other forms of biometric categorization and analysis.  I have seen ridiculous things, like companies who capture facial images, make biometric templates and process those for inferences, claiming that they only collect anonymous data.  So really crazy stuff.

So if it's left to the volition of the provider or the user to determine whether the system processes personal data, it's going to create a massive loophole and we are just going to be in the same situation we are in now.  So what we are demanding is that all users of high risk AI systems under the AI Act have to do a fundamental rights impact assessment.  I'm open to other acronyms.  I'm not fixed on the acronym.

The important thing is that there is an assessment of the impact on fundamental rights at the user level.  What goes along with that is a transparency obligation.  As you might know, in Article 60 of the AI Act there is a provision for this EU database.  What that currently lists is which high risk AI systems are on the market in the EU.  That's good, but what we also need to know is where they are being used.

It seems to me completely arbitrary to only list what's on the market in that database.  What we need is to know what's on the market and to be able to look and see all of the places it's being used.  And I wouldn't see this as a restrictive measure.  It's definitely a useful measure for civil society, for example, to find out about risky practices, but we do want to promote this ecosystem of excellence and trust.  We want to know where the systems are being used.  If there are positive use cases, then other public authorities or companies can learn and say, this company is using that.

I think, unless you have something to hide, I don't see the reason not to list the use.  And if we are promoting an ecosystem of excellence and trust, we shouldn't have anything to hide about the systems we are using.  I will wind it up there, because I think we can have a discussion about what the fundamental rights impact assessment should entail.  One thing we are proposing that is maybe a bit novel is that the user should have to identify the people affected by the system.

As I mentioned, there are only two categories of actors dealt with by the AI Act, providers and users.  We are saying that any user, when they are procuring and about to deploy a high risk AI system, should identify affected people, and both the fact that they are using the system and the identification of affected people should be included in the database as well.

The reason that we are asking for this is that it's good practice, they should think about who they are affecting, but it's also been pointed out that the AI Act provides no rights to people who are affected.  What this obligation to identify affected people would do is create a potential rights holder.  And we are also asking, conjointly, for rights to redress and other rights to be accorded to people who are identified as affected communities.

So we can get into all of this later, but that's an initial contribution.

>> MODERATOR: I'm sure we will get into that later.  I'm sure there will be a debate about that, and I see that the conversation in the chat is getting very active, which is great.  So please be ready to prepare your questions for all of the panelists, and I'm sure that a very interesting debate is going to ensue.  Moving to Cornelia, what I'm going to ask is: does this mythical creature that we call the human rights impact assessment exist in practice, and is Microsoft doing them?  What do they look like from the perspective of industry?

>> CORNELIA KUTTERER: Thanks, Alex, for the question.  Thanks already for a really interesting conversation.  Bringing all of these different elements together will probably occupy us for quite some time.

I'll say first, most generally, that Microsoft follows the UN Guiding Principles on Business and Human Rights throughout its processes overall.  We also have transparency, with annual reports on this topic.  I will share the link once I'm done speaking so you can have a look at what that looks like.  That covers, of course, a broad range of topics, from supply chain to privacy, lawful access, content moderation, et cetera.  More specifically on AI, as you probably know, we have been developing a standard based on the principles that Microsoft has committed to for responsible AI.

I think it's important that we do this, of course, at a time when we have, in parallel, the legislative conversations, which we equally follow very closely, because at some point in the future what we are currently doing should ideally align with what the law might require us to do.

So we have looked at how we can actually operationalize these principles, accountability, transparency, inclusion, and that's not so easy.  As Frederik and Daniel have pointed out, this is socio-technical.  It depends on the context.  It might depend on the technology itself.  It may be different when you think about facial recognition technology versus text translation.

For example, different issues might be more important in one than in the other: accuracy, for example, is something that you might be able to achieve in one of those systems, while cultural diversity is something that plays a role in others, in particular in language.

So the variety of ways you actually achieve these goals becomes really complex, not only because of the different scenarios, but also eventually because of the algorithmic models you are using.  And that complexity makes it necessary for these processes to be really agile.  So we have been working on this to formulate, for our engineering groups, if you like, general requirements that they are all now mandated to apply.

I will also say, before I go into the details of how these look, that in Microsoft we now have mandatory training on responsible AI, so all Microsoft employees have to do this.  And we have a system in place in order to scale the work on responsible AI, so in each organisation there is one person appointed to make sure that these requirements will be understood and implemented.  So there is work going on to sort of shift the mindset so that people understand, you know, what it means to responsibly develop AI.

So, going back to the standard, these are general requirements that are formulated as objectives and processes towards specific goals such as accountability, transparency, fairness, reliability, safety, privacy, security and inclusiveness.

I will say that this specific standard doesn't specifically look at privacy, as this is already covered by our privacy standard, which implements laws such as the GDPR, and not only the GDPR, and inclusiveness goals are also already covered by our existing accessibility standard.

Of course, as we saw with the DPIA, there is an interplay, so when we look at how we comply on AI, we look at how we comply with the GDPR; this is obviously something we do for all of the systems in which personal data is processed.  So then what does it actually mean to have these requirements?  We have tried to specify specific goals under each of these commitments, when you think about accountability, for example.

One objective, and one process that is set out in more detail in the standard, is the impact assessment, and the impact assessment, and I think that's also relevant in comparison to the GDPR, looks broadly at impact, not only on the data subject.  It's this question of why are you building the system?  Who will be negatively impacted?  Who will be positively impacted?  Who will the system serve?  So we get a broader view on this.

The requirements oblige the engineering team to have oversight of significant adverse impacts and of whether the system is fit for purpose, and they require data governance and management and human oversight and control.  When it comes to transparency, there are specific goals described to make the system intelligible for decision making and to communicate to stakeholders, for example when we issue transparency notes to customers so that they understand the capabilities but also the limitations of the system, and disclosure of AI interaction, something that you also find to a certain extent in the AI Act.

The third commitment is to fairness, and this really tries to achieve quality of service, thinking about, for example, whether different demographic groups will receive the same quality of service.  How do you think about the allocation of resources and opportunities?  And how do you minimize stereotyping or erasing outputs?  So we are really looking at what we are trying to achieve and how we get there.  This is how the standard is described for the engineering groups.  I'm not going to go through all of the other objectives and rules, otherwise we will sit here for quite some time.

I wanted to say that these standards are obligatory for all AI systems.  It reminds me a little bit of what Kristian was saying, that, of course, in order to identify the high risk, you have to do the impact assessment.  When we identify high risks, which we internally call sensitive use cases, we have them divided into three categories, and I think they are recognizable, again, when you think about the AI Act and the high risk categorization.

The first is AI systems that might have an impact on people's lives, such as the allocation of, for example, social benefits or loans, or in the context of education or justice.  The second is sensitive uses that relate to physical harms in particular, but not only.  And then, of course, the third category is the human rights framework, so potential risks to fundamental rights.

Here you would think about use cases for the use of facial recognition.  When engineering groups or sales organisations are confronted with these types of potential risks, then the system has to go through specific reviews.  So there is a specific escalation, so that a special group of people who work on these ethical issues will review and define mitigations for those sensitive uses.

So it looks to me that it almost aligns with the thinking that we just heard on the process, but this is what we are doing internally.  I will leave my comments on the AI Act for a little bit later, because otherwise I will have taken too much of my time already.

>> MODERATOR: Thank you very much, Cornelia, and thank you very much, everybody.

I'm now going to open the floor for everybody to be able to ask their questions, and I see, again, that the chat is very active.  Perhaps to kick start the open conversation, I will identify the big themes I feel are coming out, having taken notes during all of the speeches.

So other than the name, which, for all intents and purposes, I don't think we will manage to decide on an acronym today, although attempts have been made in the chat, and some are very, very good.

The other questions that came up are: when do we do them?  Who does them?  What are we assessing?  And who reviews what has been assessed?

I think these are very big questions, yes, exactly, when?  I don't think we will manage to get final answers on these, but perhaps putting all of this into perspective, I feel that all of you have agreed that, if correctly and properly done, they are a good idea.

So I will kick start the conversation myself by asking: given that they are such a good idea, what are the barriers that we see to getting them adopted?  I think Frederik has mentioned the lack of a legal basis as a potential barrier, and some others have been identified; you know, if we talk about the EU AI Act, it was potentially the legal basis and it being market focused.  But maybe you would like to draw those out more clearly, and maybe starting with Laura once again, since she started the panel and she has also looked at many, many countries and how they approach their AI strategies.  Laura, if you would like to speak on that.  If not, I'm happy to pass the floor to somebody else.

>> LAURA GALINDO: Thanks, Alex, yes, I can just jump in quickly with two points.  Actually, on the first issue of the acronyms: thinking about it more deeply, it has an impact.  When we think about what Canada did with automated decision making, automated is not the same as AI, and that could pose problems.  There was, I think, a case in Canada about whether the rules of the directive could apply to a hiring tool used by the Department of Defense, and the argument there was, well, does the fact that there was a human looking at the output of the tool render it not automated, and therefore the directive wouldn't apply?

This is interesting, because just a matter of an acronym or a name has an impact on whether a rule would be applicable, so perhaps something to think of later as we move on with different tools.  And with regard to human rights impact assessments, yes, as we are discussing here, it would be interesting to look at the limitations that many experts, some of whom I see here, have already identified.  What is the appropriate scope?

I think Kristian mentioned the elements that the HUDERAI will have, but it remains to be seen what other elements need to be included, and what practice will develop out of it, as at the moment the elements that it should contain remain to be defined.

The other thing is with regard to scalability, because, of course, most large enterprises would be able to conduct a human rights impact assessment, but what about small and medium enterprises?  What about small companies for which this could be costly?  You need socio-technical expertise to conduct a proper AI impact assessment.  So that is something to think about.  We need a lot of capacity building, as I said, and to learn from experiences and practice.  And maybe last but not least, timing.

There was this paper by Mark Latonero on a human rights impact assessment conducted by a company in Myanmar, and it posed the question, back to the chicken and egg question, of whether we first need to define if systems are high risk in the first place, or whether the assessment should come after or during.  And his comment is that it should be ongoing.  This is something to explore, but it will add to the cost, to the scalability problem.  I will leave it at that, just more questions than answers, and I hope my fellow panelists will add to this.  Over to you, Alex.  Thanks.

>> MODERATOR: Thank you very much.  I see a hand raised, and you mentioned Vanja, and I hope you don't mind me addressing you by your first name.  I think I will do this with everybody who wants to participate.  And then I see Daniel's hand being raised.

>> VANJA SKORIC:  Thank you for organising this panel, and thank you to the speakers for excellent presentations.  I love that this conversation is not too technical but is going to the heart of the issues, so it's not at the broad level of, you know, principles, but really going into the details of what can be most useful.  I'm Vanja Skoric, and we are part of the group of observer organisations in the CAHAI process.

We represent the Conference of INGOs, so we were lucky to contribute to the draft of the future CAHAI document in the Working Group on impact assessment.  I am very, very happy that the CAHAI's work is currently the most progressive and most detailed on this topic, and in particular, it answers two important questions that I think the EU and other jurisdictions could look at as a good example.

The first question is what we want from trustworthy AI.  If trust in AI systems and applications is the proclaimed goal, which it is for the EU and other jurisdictions, then this instrument of the human rights impact assessment, or the broader HUDERAI, is really the key instrument to achieve that trust, because there will be no trust by the public, especially affected groups, if they don't feel reassured that everything has been done before the deployment of the AI system to mitigate any harms.

We see that in our ongoing work on the ground with communities and organisations: a growing frustration swelling up from all of the cases and situations of AI fallout that we have witnessed and the scandals we have heard of in many, many countries, including Western democracies.

So this is one point where the impact assessment can be a useful instrument.  The second point goes to what Laura mentioned, which is agility.  We, of course, don't want rules and legislation that will not be applicable, that will not be implementable, and there must be an instrument agile enough to accommodate different sizes, different sectors and so on.

So I really like the idea that was included in the CAHAI draft, if I may just reveal that part without revealing too much of the content, which is actually to have stages of human rights impact assessment.  The full impact assessment would be reserved for high risk AI systems, but, as Daniel and a couple of other people noted, there is a chicken and egg problem.

What is missing from the EU approach, and nobody from any institution has yet managed to answer this, is the methodology for populating the list of high risk AI applications.  What are the actual steps that determine how these listed systems impact human rights in order for them to be put on the high risk list?

So this is the missing piece, the missing link in the EU AI regulation.  The question is answered in the CAHAI document in the sense that, for all AI systems being developed, or even designed, there should be a small scale initial review, an initial scanning, if you want a triage, of the impact on human rights, democracy and the rule of law.

That will help categorize the risk level, and then, if the risk level shows that there is a need for a more robust human rights impact assessment, they go into a more robust process.  If it's really high risk, they can go into an even more robust process.  And this is where, and it goes back to Laura's question in the chat, we ask how the standardization bodies that are working on some of these issues and topics can play a role to help us devise these methodologies, scaling them up from a very basic triage to a mid-level, and I would even add a third level, not only two levels, so a mid-level human rights impact assessment, and then to a full, complex one for really intrusive systems that are almost on the border of being most likely bad.

And to go back to the question of limitations, there are two key aspects we address in our papers, which I put in the chat, to help make the HUDERAI meaningful.  The first one is oversight.  There must be some kind of accountability or oversight mechanism that checks how these instruments and processes have been conducted and then provides the possibility for appeal or redress.

So if a company conducts the initial, triage level human rights impact assessment, the findings must in some way be published in order for it to be possible to contest them.  And as the level increases, the oversight should increase: external oversight.  The second aspect is meaningful participation of external stakeholders.  I completely understand that a company, especially a small one, cannot have the knowledge in-house to do this type of assessment.  There must be methodologies developed, and we are trying to work on that from our side as well, with civil society and human rights groups, to meaningfully include stakeholders and all potentially affected groups and communities and give them a voice in these processes.  How to do it is a really open question, but I think it will be a make or break question for these instruments.

And this is something we are happy to collaborate on with many of the people on the panel here already.

>> MODERATOR: Vanja, thank you very much, that was an incredibly useful contribution, and I see hands started going up quickly as you raised some of these issues.  I think Daniel's hand was first up.  Thank you very much for your contribution, and you will definitely see reactions from everybody.  Daniel.

>> DANIEL LEUFER: Yes, just to pick up on a few points.  I would stress, again, what Frederik and I said, that we are not approaching the AI Act as an ideal instrument here; we are looking at fixing something that's maybe not entirely great and getting the best we can out of it.  So on the proposal I mentioned about a fundamental rights impact assessment on users, I would stress that we do not see this as something that is actually going to really stop harms.  I mean, if someone is deploying a, you know, biometric categorization system to determine access to education based on facial recognition, this thing needs to be burned.

Any kind of fundamental rights impact assessment at the user level is not going to remove or be able to mitigate the harms of a system like that, because it's so problematic.  There will be cases where I think it's a good exercise for users to go through, but this is not a silver bullet.

That's why I mentioned it has to be tied to transparency.  The situation we are in is that we don't know what systems are on the market.  We don't know where they are deployed.  We rely on AlgorithmWatch to do investigative journalism to find out where things are being deployed, and we have to go to litigation.  So the idea of this user level fundamental rights impact assessment would be to give basic information, and I would push back on the idea that it's too complex for small businesses.

I think the conformity assessment is perhaps more complex, but a basic account of, like, what is the system, what is it intended to do, what rights does it impact, you know, a list of information that any user who is deciding to deploy something that's clearly labeled as a high risk AI system in a given context has to know.

The idea that they don't have that information and that they haven't gone through that process is laughable.  I mean, if they haven't, then they shouldn't be deploying it; it's simply a matter of making public information that you have to have thought about anyway.  And, you know, we can discuss how to design impact assessments to ensure they are not overburdensome, but I think it's a basic process of responsible, not even development, just responsible thinking and behavior, to have thought about these impacts.

I think, on the idea of having a fuller impact assessment, Article 64 of the AI Act is really interesting.  It allows human rights enforcement bodies both to have access to all documentation created during the conformity assessment and, where that is not sufficient for their investigation of a potential harm, to actually test the system.

I think we could beef this up a bit and have the possibility that, say, as a potentially affected person or a civil society organisation, if we see that a public authority is deploying a high risk AI system, there is a mechanism for people to trigger that Article 64 process, to allow this more in-depth investigation.  That's where we are looking at a third party, an independent assessment of the impact, that can really look at and test the system.

So that's the level of impact assessment that can really have an impact.  The other one, which would have to be applied across the board, is just providing basic transparency, and also, if it turns out down the line that serious, obvious impacts were missed, were not considered, or that certain groups likely to be affected were not considered, I think that builds a case that the entity deploying the system was negligent.  So that can feed into things later on.

And just a final point on the methodology for what we are referring to as risk designation, whether unacceptable risk or high risk: I think it's very strange that there are no criteria for what constitutes unacceptable or high risk.

If, under Article 64, a data protection authority investigates a system and uncovers that it poses far higher risks to fundamental rights than anticipated, there should be a mechanism for the risk level to increase.

And for Article 5, the list of prohibited practices, to be updated.  It can't be the case that that is a closed category while Annex III can be updated.  So we need to have explicit criteria, and we need a mechanism for systems to be reclassified based on assessments.

>> MODERATOR: Thank you very much, Daniel, I think Frederik was the next in order of raised hands.

>> FREDERIK ZUIDERVEEN BORGESIUS: Thank you.  I hear so many interesting things, and on top of that in the chat, so it's going to be hard not to talk too long.  Alex, stop me when I go over.  I hear very many smart remarks, and I agree.  It is said that it is hard for small firms if they have to spend money on an impact assessment, but small firms can do loads of damage.  In real life it's probably too expensive for a startup to start a chemical plant, because they cannot hire the expertise needed to protect the neighborhood around them, and the woods and the lakes.

So at least partly we have to accept that certain levels of impact assessment may be expensive, and if you can't afford it, you can't do it; we accept that in a lot of situations, like any hobbyist cannot start selling cars without going through the safety checks.  Also, and I make the same mistake, we talk a lot about AI, but if you are the person whose pension payments are stopped or whose child benefit payments are stopped, it doesn't matter whether they are stopped by an extremely simple formula in an Excel sheet or by a super expensive, tens of millions of euros machine learning AI system.

So perhaps it would be better, though I make the same mistake, to talk about automated decision making or partly automated decision making, to get around the problems that Laura mentioned in Canada.  And one last quick remark: I think we need legal requirements, not hints, not suggestions, requirements.  But we have to think of some smart way to combine a hard legal requirement, for instance in the European Union in an EU regulation, with something more flexible, because treaties, regulations and, outside Europe, national laws are time consuming to amend.  So we need a smart combination of a hard requirement in law with guidance by regulators that can be updated more often without going through the whole circus of adopting a new regulation.

I realize there is a risk of democratic legitimacy lacking for such soft law, but it's doable.  We have to think of some system with different levels of adaptable legislation.  I have more points, but I will leave it at this.

>> MODERATOR: Thank you very much.  It's great to hear the optimism.  It's very, very encouraging to hear that we can do it.  I really appreciate that.  Cornelia, you were next.

>> CORNELIA KUTTERER: As with Frederik, there are so many good points that it's hard to decide where to focus.  I wanted to say something on the legal basis, and I'm glad that Kristian is the next speaker, because it's half a question to him as well.

Under European law, we do have directives that implement some of the fundamental rights very specifically, such as non-discrimination or gender equality, so I'm wondering whether it wouldn't be possible to be more specific about the fundamental rights that the Act is trying to protect.  In particular, they are mentioned in the description of the AI Act and in some of the recitals, where being a little bit more precise would help.

Also, Daniel, I think you mentioned that, looking at the product safety approach the Commission has taken, some of the gaps in transparency towards the potentially impacted citizens, consumers and customers would be avoidable if we had thought about responsible AI as a lifecycle which starts with the provider and ends with the deployer of the system.  Focusing on transparency towards potentially impacted citizens has to happen in the context in which the system is deployed, also in order to think about the potential redress mechanisms.

We have been struggling a little bit to see which role we ourselves actually fall into, because at times we are component providers to systems, we are also very often co-creating with our customers, or we have general purpose AI systems that will potentially be fine-tuned by customers for their specific uses.

And the AI Act, even with the compromise text from the Slovenian Presidency, is confusing, in particular as roles can change depending on what the deployer does.  So we should think backwards: have obligations at the very end, closest to the potentially impacted user, and then make sure that deployers have to use components or AI systems that can fulfill these requirements.  Because, of course, even when, as in the Slovenian presidency text, the user becomes the provider, they will not always have all of the technology and will still require the chain of stakeholders to provide and support the necessary documentation throughout.

So that's something that I think can be fixed in the AI Act, in particular by being more specific.  And then there is something else: I agree with Frederik, by the way, on the SME piece, and I think about the many smaller companies in the medical space, for example, that already have to go through very stringent processes in order to get new medicine to the marketplace.

And last, and this goes a little bit back to what I said at the very beginning, how do you actually understand what the legal requirements listed in the AI Act are supposed to achieve?  Because it's not clear in the text.

Take the error‑free data requirement: we believe it aims at having a system that is less biased and doesn't discriminate, but the text doesn't really say so.  So I think if we were to say which of the requirements serve accountability and which of the requirements serve transparency, we would already be in a better place to make them more impactful.  I will leave it at that.

On the question of the legal basis, I still think the Commission could have focused a little bit more on fundamental rights in the text as well.

>> MODERATOR: Thank you, Cornelia, I think that's an opinion shared by many people, present here or otherwise.  You said at the beginning of your remarks, quite rightly, that Kristian is the next to take the floor.

>> KRISTIAN BARTHOLIN: Thank you very much.  To start with the important question raised by Cornelia about the legal basis, I just want to say that obviously I cannot speak on behalf of the European Commission; I work in the Council of Europe, which is a different organisation, but I used to work in the European Commission, so I know how it works.

I think the reason why the human rights impact assessment has no strong legal basis in the AI Act is probably that the Act is, legally speaking, based on the internal market provisions and not on the parts of the Treaty which deal with fundamental rights.

So that may be one explanation why it is not so clear that there is a human rights legal basis for the impact assessment in the AI Act.  To come back to the question of the legal basis, the rights, and where we put them: I think it's very important to say that when we talk about rights in the context of AI, especially human rights impact, we do not actually need a new set of human rights for dealing with AI.

What we need to ensure is that the existing human rights are seamlessly translated into an AI context.  To give you an example: if I am a civil servant in the state and I am supposed to decide whether somebody should be granted a certain allowance or not, and I am biased in my decisions, if I only give it to people who have a certain background or people who look like me or something like that, then, of course, it's obviously illegal.

And we know that.  It's illegal under administrative law, it's illegal under the Constitution.  It's discrimination, so in Europe it's against the European Convention on Human Rights.

So that's the point: why should it be different if it's an AI system that does it, or helps you do it through assisted decision making?  There is no difference.  The issue here is accountability.  What is specific to AI, in my view, is that we have accepted in a certain way that we all have a black box up here.  Things come in through the ears and they come out through the mouth, and we do not necessarily know why we have made certain connections in our brains.  But we are at least used to that: we are used to dealing with humans, and we are used to knowing that humans can be biased.

The problem with AI is that we also have black boxes in AI systems.  We don't know why a certain data set may sometimes produce a certain result at the other end.  But with AI we have a tendency to believe it is less fallible than humans because it is a system and it is automated; we have an idea that it must make fewer mistakes.  That is, of course, not the correct assumption to make here.

This is why, if you apply AI systems which, simply through the way they function, can achieve a scale of negative impact that is not possible for a human being, then we need to make sure we have the measures to mitigate those risks; we must ban them if we cannot mitigate the risks, or put them under a moratorium, et cetera.  But we need to make sure that we are not more defenseless towards the AI system than we would have been towards humans taking these decisions alone.

I just wanted to point this out, because we sometimes have a tendency to discuss the system itself very much, and the system is interesting for the engineers, but it's not so interesting in a human rights context.  There it's really the interplay between the system, the humans that use the system to make decisions, and those who are in the end impacted by those decisions.

That is really the issue here.  This is why I say we don't need specific new rules or new regulations for this.  What we need is to ensure that we don't have a legal vacuum because we are using AI.

>> MODERATOR: Thank you very much.  That was very insightful.  I think Daniel has put his hand up again, so I am going to give the floor to him, and after that perhaps each of you can think of a final remark that you would like to share with everybody before we finish.  Unfortunately we only have five more minutes.

>> DANIEL LEUFER: Very quickly on general purpose AI systems: I don't know if all of you have seen the most recent, or rather the first, compromise text from the Council.  They have added a new title to deal with general purpose AI systems, and I don't think they have done it the right way.

There are two questions: does the AI Act as it stands deal correctly with general purpose AI systems, and how should it deal with them?  I'm not opposed to the idea of adding a new title or Article to take a different approach to providers, companies like Microsoft or Google, who place general purpose systems on the market, but what the Slovenian Presidency text has done is basically remove the obligations; there are no obligations now.  You are basically off the hook.

And I think that's based on a flawed assumption that no risks come from the general purpose system itself and the way it's designed.  Large language models, for example, tend to be trained on linguistic data sets that contain very, very problematic language, which tends to be reproduced in the output.  So we have seen with large language models that they will replicate biases from the text that they were trained on.

They will say incredibly racist, sexist and problematic things.  So I'm open to them being treated separately, but I think, again, we need an impact assessment at that level to see how the design decisions and the data they were trained on can also lead to problems, regardless of whether they are implemented in one of the use cases that falls under Annex III.

>> MODERATOR: Thank you very much.  We have three minutes.  So perhaps I will go through the panelists once more, just after saying thank you to everybody for participating and thank you to all of the panelists for the very interesting contributions and for answering the questions.

I invite everybody to read the chat, because there are very interesting things there and the panelists have shared various documents and resources which are worth checking.

Going again in the order in which we began, and starting with you, Laura: quickly, if you have any final thoughts.

>> LAURA GALINDO: Thanks, Alex.  I took a lot of notes.  Just to say that risk‑based approaches to AI, including tools such as Human Rights Impact Assessments, are very likely to play a role in how AI governance is developed.  I think we should look positively at Human Rights Impact Assessments, and particularly at the work the Council of Europe is doing, which will play a significant role in translating all of this complexity of requirements for AI systems and aligning it with the international human rights framework.

So there are a lot of reasons to remain positive, but also, given the considerations that were discussed here, there are a lot of things still to be addressed.  The good news is that we are doing it together, and ideally, with the work of the different international organisations and civil society, multi‑stakeholder approaches to AI governance are the way forward.

Last but not least, I would like to invite you to another workshop tomorrow on globalpolicy.AI.  The discussion will continue on how international cooperation will be key to translating all of these efforts into better AI governance.  I will leave it at that and put the link in the chat.

>> MODERATOR: Thank you.  Frederik.

>> FREDERIK ZUIDERVEEN BORGESIUS: A suggestion for a new panel sometime.  As different people highlighted, repairing the AI Act is difficult.  I think the European Commission is well meaning, by the way, but there are serious problems with the proposal, and that made me realize that NGOs, but me too as an academic, only wrote about it after the proposal was there.

Can we do better and publish more before proposals are on the table?  Perhaps that could be useful in the future, but this is too big a discussion, so we will need another panel another time.

>> MODERATOR: Thank you very much.  Kristian.

>> KRISTIAN BARTHOLIN: Thank you very much.  To follow up on what Frederik just said, I think an inclusive process on legislation for something as important as AI is crucial.  We need to have a broad societal discussion about the uses of AI.  We cannot just leave this to industry, to the technicians, the politicians or the administrators.  We need to have a real discussion about AI.

>> MODERATOR: Thank you.  At the risk of the Zoom meeting ending before we manage to get through Daniel and Cornelia: very quickly, please.

>> DANIEL LEUFER: Thank you, everyone, for the great discussion. 

>> CORNELIA KUTTERER: Thanks a lot.  Same here; as the host and organizer of the discussion, I thank everybody for participating.  And I will say that maybe we need to think not only about the AI systems in responsible AI lifecycles, but also about the humans that use and deploy them, which is part of that, and then about whether we just want to avoid replicating the biases that exist, or whether an objective is actually to get better, and how AI can eventually help there too.

So there are certainly open questions for other discussions.  I was excited to be with you on this panel.  Thank you very much.

>> MODERATOR: Thank you very much, all.  Thank you, Cornelia, and Microsoft for organizing and allowing us to come together, and hopefully we will see you at another panel soon to keep the discussion going.