IGF 2021 – Day 2 – Town Hall #19 Paving the road for the European Regulation on AI

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

     >> IMANE BELLO:  Good morning, everyone, good afternoon wherever you are in the world.  Welcome to this session, which is focused on AI regulation in Europe: paving the road for the European Regulation on AI.  It's my pleasure to moderate this panel with great participants, and I'm glad to be part of this virtual IGF.

     I'll start by introducing myself, then the participants.  Then we'll move on to the substance, which is how, exactly, do we move towards an AI act within the European Union that is efficient, that works, and that enables those who work with AI systems to use them in the European market.

     My name is Imane Bello.  I'm a lawyer working on business ethics, compliance and crime, and on counselling in terms of policy and international human rights law. 

     I have also been lecturing on the ethics and policy of AI systems for about five years now.  It's my pleasure to moderate this panel with Nicolas, a senior researcher and lecturer; Patricia; Daniel; as well as Claudio, head of an international office and professor. 

     So, Nicolas, your research stands at the intersection of human-computer interaction and privacy engineering.  If I understand correctly, you're particularly interested in the application of AI systems and digital matching to cybersecurity decision‑making processes, and you hold a Ph.D. in computer science.

     You're here today to tell us about the technical perspective on the AI regulation.  Thank you so much for joining. 

     Patricia, you're a post‑doc at the Fletcher School of Law and also a visiting fellow at the Information Society Project at Yale Law School, and your research, again, if I understood correctly, focuses on the legal and policy frameworks for privacy protections and the use of metadata and technologies. 

     You also hold a Ph.D. in information science and technology from the School of Information Studies at Syracuse University, as well as a law degree from the University of Peru.  You also used to practice law before your Ph.D. 

     You're here to tell us more about the international law perspective on the European AI regulation.

     Daniel is a policy analyst at the Access Now office.  You work on issues around AI systems and data protection with a focus on facial recognition. 

     Prior to working at Access Now, you developed work on AI, and you hold a Ph.D. in philosophy from Belgium. 

     Claudio, you also hold a Ph.D.  You're head of an international office, professor, and researcher for a Government Agency for Science and Technology and for the research center for the future of law at the University of Portugal.

     So, let us start this session on paving the road for the European regulation on AI systems.  I'll ask you some brief questions and hopefully you can answer them in five minutes, that'd be great.  Then we can discuss with one another and then have some time for Q&A.  I'm sure the audience will have interesting and delightful questions to ask all of you. 

     So, maybe, you know, on the draft regulation, it may be best to put the context in place. 

     We all know that the first draft of the regulation dates from April 2021.  The objective of the Commission's proposal is to ensure that AI systems placed on the Union market and used in the Union are safe and respect the existing laws on fundamental rights, and the draft aims at ensuring legal certainty, in view of facilitating investment and innovation in AI systems, as well as ensuring, and I'm sure we'll discuss this, effective enforcement of existing laws. 

     So, maybe we could start with the definition of AI systems.  Not in general, but within the draft act.  Daniel, could you please tell us how the AI systems are defined within the act? 

     >> DANIEL LEUFER:  Sure thing, yep.  Quickly, for those who may not know Access Now, we're a global human rights organization. 

     I'll start a little bit before the definition to give the context for my own opinion on the definition.  When we heard that the commission wanted to propose a risk‑based approach to AI, we outlined some issues that we had with that, because we thought that that would run into some potential problems. 

     One was that, you know, in many risk‑based approaches, there's an assumption that all risks can be mitigated.  Early in the process, we said there could be some risks that can't be mitigated, so any regulation on AI would need to have the possibility for prohibitions. 

     Thankfully, that does exist in the act and maybe we can come back to that.  There's lots of questions about whether the prohibitions in there are actually effective or not.

     But, one of the other things that goes along with a risk‑based approach is that you inevitably leave out many systems from the types of obligations.

     So, you know, as opposed to the GDPR, which, you know, grants rights to people who are going to be affected, and this applies, you know, across the board.

     If you're only focusing on certain sectors of systems and certain systems that have sort of been predefined as high‑risk, then, you run this risk that there'll be other systems which also pose a threat that are left out.

     If the update mechanism for adding new systems is slow, this could be a problem.  How does that relate to the definition? 

     It relates to the definition because if you take a risk‑based approach, it's clear that what you need is a broad definition of AI systems.  What you're looking at in the risk‑based approach is the impact in certain use cases.  The regulation focuses on use cases, whether that's in policing, whether that's in access to education, or in the migration context.

     What we really care about here is the impact that systems can have on fundamental rights.  What would be problematic is to have the risk‑based approach, which only places obligations on a small set of systems, and a narrow definition of AI, because then you'd only focus on some very particular types of systems that do that thing. 

     So, if you're talking about systems that control access to education, if you had a very narrow definition of AI, that only, say, focused on machine learning, you would leave out, probably, the majority of you know, software systems, computer systems, that are being used to control access to education.

     And from a fundamental rights perspective, there's little to no difference between using the most advanced deep learning system to do that thing and using a more‑rudimentary ‑‑ sorry about the baby crying in the background, she's home sick. 

     There's no real difference there from a fundamental rights perspective.  A lot of these obligations apply.  We want transparency, we want all of these things to apply.

     There's been some movement ‑‑ yep?  Just one minute, okay.  Unfortunately, we've seen some movement to call for a narrowing of the definition.  I'm not sure if all of you have seen it, but the Slovenian presidency of the Council published their first compromise text and they talk about and propose narrowing the definition down.  We've also seen this from some lobby groups, which are, you know, heavily funded by people who would benefit from such a narrowing, calling for this.

     Now, we think it's complete nonsense, one, for the reason I said, it makes no sense from a fundamental rights perspective, but ‑‑ this is the last thing I'll say ‑‑ if you narrow the definition, you necessarily narrow it upwards in complexity.

     You're not going to narrow it downwards and exclude machine learning.

     If you narrow it up to more complex systems, you will totally undermine one of the pillars and aims of the regulation, which is to promote AI innovation, because you would place obligations on cutting‑edge systems and not place them on simpler ones.

     So, I think from the Commission's, the Council's and the Parliament's perspective, narrowing the definition upwards in complexity, the only way you could narrow it, I think, would undermine both aims of the regulation: to protect fundamental rights and to promote an ecosystem of trust and excellence around AI development. 

     >> IMANE BELLO:  Thank you, Daniel.  If narrowing is such an issue, and discussing the risk‑based approach that the draft has taken, would you say, Nicolas, there are challenges in the way, you know, we assess the risks that could be posed by AI systems? 

     >> NICOLAS DIAZ FERREYRA:  Yes, thank you very much for this discussion.  When I saw that the regulation draft was embracing a risk‑management approach, I felt relief, because I work in risk management and risk assessment for trustworthiness.  You should be aware of the risks and limitations of AI systems.

     Then I saw the categories: unacceptable risk, high risk, limited and minimal risk.  And I became skeptical.  My major concern is that some AI applications, as has been shown throughout recent history, might seem harmless.

     For example, we've seen how misinformation can jeopardize democracies over the last years.  We saw it during Brexit, the COVID‑19 pandemic and during the U.S. elections. 

     Who would have thought that a simple AI‑based recommender system might end up turning democratic foundations upside down?

     How could we have imagined that it ended up spreading hate and doubt? 

     These are defined within the current literature as the backfire effects of AI systems.  And in terms of risk assessment, the risk community is still ill‑equipped when it comes to backfire effects.

     I think, as Daniel very well summarized, the spirit of this risk assessment approach is based on case studies.  Some case studies are more trivial than others.

     I'd also add that the case studies that are proposed, or treated, or seemingly discussed within the regulation are very much centered on individual harms, harms for individuals, and not on a collective basis.

     So, I really think that the problem or the major problem with AI is that it can have a very big impact on society at large.

     This is why I think that this human rights impact assessment, it's really crucial.  The problem is that in the community, I have the feeling that we are not even ready for conducting that in a proper way.

     And this was actually from a technical perspective.  As an engineer, as a computer scientist, I was wondering, oh, okay, we are completely against the wall, because we have to act now and we don't have many resources for conducting this. 

     I think that, I mean, we've seen it, we've seen something similar happening with the data protection impact assessment within the GDPR, right? 

     So, when I had to work on a project that was intended to provide engineers with toolkits and frameworks for conducting data protection impact assessments, we had a very intense discussion about definitions, actually.

     And, how can I say, finding a common ground between what the regulation was saying and what the engineers were actually looking for as an instrument for conducting data protection impact assessments was extremely challenging. 

     So, we even found ourselves running in circles, and I would say this is maybe where my major concerns about this regulation draft come into play.

     So, I really think this is going to be challenging. 

     >> IMANE BELLO:  Thank you so much, Nicolas.  We have a challenge in terms of scope, defining AI systems, making sure that all applications are embedded within the regulation and therefore, all impacts can be taken into account.

     We have another challenge regarding the difficulty of risk assessment for the technical community: how, practically, do risk assessments actually take place, even though they're crucial? 

     Also, we need to take into account individual, but also collective, consequences of AI systems.  Claudio, do you foresee other, you know, challenging or surprising aspects to the AI act as drafted now? 

     >> CLAUDIO LUCENA:  Thank you for the opportunity, good morning, all.  I think I'd like to build on an aspect that Daniel raised, which is prohibition.  It's not a surprising issue, but I think it's necessary.  If you're taking a risk‑based approach, it is necessary to draw red lines.

     I was very skeptical as to whether we'd see those red lines.  If we take the last, let's say, powerful regulatory wave, the GDPR on data protection, we do not see a prohibition as such.  We do have one, let's say, stronger take on prohibition, which is on special categories of data.  Semantically speaking, the word prohibition is there, but with ten large, widespread exceptions.

     So, it's not, you don't ‑‑ legally speaking, you don't have a strong prohibition if you have ten exceptions for that. 

     Then, when we take the AI regulation proposal from the EU, there is a title named after, constructed around, prohibition. 

     They are somewhat stronger.  They are generally construed, where we see space, and that conveys a message that we do understand there are limits that are not to be crossed, as of now, and that's interesting. 

     The fourth prohibition, the one that refers to wide surveillance in open spaces, has exceptions that aren't comprehensible in the context where they are.  But it's interesting to see and maybe, if we're looking for a standard that spreads globally, which was the case with the last regulatory wave, that sends a good message.  There are red lines that we do not ‑‑ that we cannot cross because, as Daniel said, there are risks when we deal with AI and, because of what Nicolas said, widespread effects that cannot be mitigated.

     That is the take I would highlight from here as a surprising one.  We do not ‑‑ and I repeat ‑‑ we do not, currently, have express prohibitions in many of the legislative initiatives that we have around the world.

     And the fact that we have them now sends a good message: that we don't yet understand the full extent of the impact of those technologies, and we are to admit red lines that are not to be crossed. 

     >> IMANE BELLO:  Thank you so much.  Okay, it's interesting, we've just started the discussion and we are already drawing key elements from the draft: implementation within the AI value chain by the technical community, definition in terms of policy, how do we ensure the draft actually has the impact that we would want it to have.

     And then, in terms of prohibition as well, where do we choose to draw the line in terms of what is feasible, what should be happening on the ground or not.

     And also, because, you know, the very aim of the act is to ensure that Union values are respected, it's also clearly worth looking at whether or not, and where, we choose to draw the line.

     Patricia, when we look at the pipeline of the creation of AI systems, we start with people who are transformed into data, which is processed, and at the end of that pipeline, in terms of value chain, we have the algorithm, which is, most of the time, the endpoint of the whole process. 

     Going towards other bodies of law, Patricia, could you maybe, you know, give us some elements as to what provisions in international human rights law could be helpful?  Could be used in terms of algorithmic accountability for that very end of AI creation, so to say.

     >> PATRICIA VARGAS:  Hello, everyone, can you hear me?

     >> Perfectly.

     >> PATRICIA VARGAS:  Good morning for me, and for the rest of you, wherever you are.

     In terms of ‑‑ I'm going to take you out of the new European regulatory framework to talk a little bit in terms of human rights law. 

     So, in terms of human rights law, related to accountability, I'll be brief.  First I'll talk about international human rights law, then international law, and hopefully, if we have time, we can touch on what has been done within the U.S. legislation, for purposes of comparison.

     From an international human rights perspective, in terms of accountability, this international framework may be helpful in three aspects. 

     First, it can be helpful in classifying the distinct responsibilities of the different actors throughout the algorithmic lifecycle and, in doing so, it can help to define the harm in a more precise way than just claiming the existence of a bias. 

     It also can impose obligations on governments, governments acting on behalf of the nation‑states, and that, in turn, can help to set some expectations over the private sector, which is, for the most part, the one that develops and handles artificial intelligence technology. 

     Why do I say there are obligations in one and expectations in another?  Because, well, as we know, the international law is the law of the nation‑states. 

     And the nation‑state that commits to follow these international provisions can establish some national statutes that are then enforced over the private sector.

     That's why I say it's obligations for one and potentially expectations over the other. 

     The third aspect in which international human rights law can be helpful is by integrating an accountability framework.  International human rights law has tools that have been tested over the years, over decades, which, after a lot of development, have become very helpful in establishing responsibility and liability.

     Now, from the international law point of view, which is different, we have three variables that have some implications over the way that the international law has been handled.

     As we know, AI has brought a lot of changes to our life.  Yet, international law is the body of law most resistant to change.  Why?  Because it's based on the powerful model of the nation‑states.

     The first element that we need to take into consideration is related to automatization, in terms of autonomous systems.  Here, the question is the limits of meaningful human control over autonomous systems when challenged by the use of artificial intelligence and algorithms.

     Second, probably the most‑important one, is the establishment of a liable entity.  You know, in international law, it's clear what a human is, what a nation‑state is, what an international organization is.

     But, when we talk about an artificial intelligence entity, international law is empty.  What is the level of its responsibility?  How should it be addressed?  How is it liable? 

     To kind of solve this situation, academics have established that it is necessary to examine what the national bodies of law have to say about these artificial intelligence entities.

     The problem, of course, is that, most likely, we're going to have ‑‑ we're inevitably going to end up having a clash of jurisdictions.

     Which is the internal unsolved problem of international law.

     Now, why is this important?  Because the degree of responsibility is different when these entities take decisions on their own.  Or, it's different when they're controlled by a human, who can actually be held liable under international law or national law.

     The final aspect is the attribution.  When are the instructions sufficiently precise to warrant attribution?  In international criminal law, only humans are involved.  Right? 

     Because humans are liable.  Humans can be indicted and declared guilty.  When an artificial intelligence entity is involved, then, we have humans interacting with a machine. 

     The situation is complicated, not only because it's not clear to what extent humans can actually be held liable, but also because it's very likely, and we'll see this, that humans end up executing a decision that has been taken by a machine or by an artificial intelligence system.

     >> IMANE BELLO:  Thank you, of course, it is this dichotomy that we hear a lot, between human supporting ‑‑ decision‑supporting systems and decision‑making systems.  Most of the time, we've seen, we've read the papers that tell us that even when systems are decision‑supporting, humans tend to follow the "advice" that has been given by the system and therefore, it's difficult to answer these legal questions of liability.

     So, in terms of scope, we've seen that there are systems ‑‑ so, we've understood the definition, there's this risk‑based approach, and then there are systems that are prohibited, and there's the issue of liability and then there's also the issue of harms. 

     How do we ensure that, A, the act is really enforced?  And that it's implemented and that the risk approach is respected? 

     And therefore, before any harms happen, those are mitigated.  The risks are mitigated.

     And when the risks do happen, when they transform themselves into harms, are they, or are they not ‑‑ I'm sure we'll discuss this ‑‑ taken into account, and are affected persons afforded ways to find remedy? 

     Nicolas, maybe to start the discussion, before we discuss the consequences of AI systems, let us first go back to when they are created, and when the risk‑based approach is taken into account.

     What issues do you see, first, in terms of implementation?

     >> NICOLAS DIAZ FERREYRA:  Thank you for the question.  In terms of implementation, it's quite related to what I posed earlier, at the beginning of the panel, which is that we need instruments that would help the developers of AI systems to assess the risks of whatever they're creating.

     Of course, having some use cases as a guideline, it's quite promising or at least, it's better than nothing, I'd put it in some way.

     But, as I said, assessing the impact that these systems might have at large is quite difficult, because we basically see the negative consequences only after quite a long time.

     If I remember correctly, the draft includes, or at least suggests, that high‑risk AI systems should be monitored at runtime.

     So, one should constantly monitor that the goal they are pursuing is not, I'll say, provoking any harm, or is actually on its right course of action. 

     Still... they propose it only for high‑risk AI systems and not for the other ones.  So, I'm quite wondering why we would not monitor all AI systems.  Backfiring effects can happen at any level.
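
     To make this runtime monitoring point more concrete, here is a minimal, purely illustrative sketch in Python of what continuously watching a deployed system's decisions could look like.  The group labels, the window size and the tolerance threshold are hypothetical assumptions made up for this sketch, not anything prescribed by the draft act.

        import collections
        from dataclasses import dataclass, field

        @dataclass
        class RuntimeMonitor:
            # Illustrative monitor: keeps a sliding window of recent decisions per
            # (hypothetical) group and raises an alert when favourable-outcome rates
            # drift too far apart.  Thresholds and labels are invented for this sketch.
            window: int = 1000
            tolerance: float = 0.10
            decisions: dict = field(default_factory=lambda: collections.defaultdict(collections.deque))

            def record(self, group: str, favourable: bool) -> None:
                # Store the latest decision and drop anything older than the window.
                q = self.decisions[group]
                q.append(1 if favourable else 0)
                if len(q) > self.window:
                    q.popleft()

            def alerts(self) -> list:
                # Compare favourable-outcome rates across the groups seen so far.
                rates = {g: sum(q) / len(q) for g, q in self.decisions.items() if q}
                if len(rates) < 2:
                    return []
                gap = max(rates.values()) - min(rates.values())
                return ["outcome-rate gap %.2f exceeds tolerance" % gap] if gap > self.tolerance else []

        # Hypothetical usage: feed every live decision in and review alerts periodically.
        monitor = RuntimeMonitor()
        monitor.record("group_a", favourable=True)
        monitor.record("group_b", favourable=False)
        print(monitor.alerts())

     The only point of the sketch is that such a check could, in principle, run for any deployed system, not just the ones labelled high‑risk, which is exactly the gap being described here.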

     That's on one hand.  On the other hand, my experience with the data protection impact assessment under the GDPR was that some developers, when we were actually conducting experiments and we were giving them a method and some guidelines for conducting the data protection impact assessment, had quite an interesting interpretation of what risks were.

     So, for example, they were telling me "okay, the risk of not having a consent form..."  No.  This is not a risk.  You should focus on the user, on what could happen to the user.  Which negative consequences might this person suffer?  As part of the human rights threats, or threats in terms of human rights.

     Thinking in this way, I have seen, is quite hard for average developers.  Maybe because they're [breaking up] not exposed to much ethics; the level is not so high.  But I think this is something ‑‑ it's kind of an implicit culture we are fighting against, and I think we should be aware of that.

     So, having said that, we need to keep in mind that we might have the best draft, we might have a really interesting approach, but in the end, when putting it into practice, there are forces that we will have to fight against, inevitably.
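
     As a purely illustrative way of capturing the framing Nicolas describes, a risk entry centred on the affected person, rather than on a missing process artefact like a consent form, could be sketched as follows in Python.  All field names and example values are assumptions made up for this sketch, not terminology from the GDPR or from the draft act.

        from dataclasses import dataclass, field

        @dataclass
        class HarmCenteredRisk:
            # Hypothetical risk-register entry framed around the person who could be
            # harmed, rather than around a missing compliance artefact.
            affected_persons: str
            potential_harm: str
            rights_at_stake: list = field(default_factory=list)
            likelihood: str = "unknown"
            severity: str = "unknown"
            mitigation: str = ""

        # The same situation, framed two ways (both examples are invented):
        process_framing = "Risk: no consent form was collected."
        harm_framing = HarmCenteredRisk(
            affected_persons="students scored by an admissions model",
            potential_harm="wrongful exclusion from a study programme",
            rights_at_stake=["non-discrimination", "education"],
            likelihood="medium",
            severity="high",
            mitigation="human review of every negative decision",
        )
        print(harm_framing)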

     >> IMANE BELLO:  Thank you, that's very clear.  It's very interesting that you mentioned that, in order to better assess the risks, a method could be to focus on the user. 

     Daniel, maybe you'd like to react to this, to the question of whether or not the draft, as it is, focuses on the user. 

     >> DANIEL LEUFER:  Yeah, exactly, and just for people who are maybe not totally familiar with the terminology of the act: it specifies two actors.  The provider is the developer, the company developing the system, and the user is not the person affected by it, but the entity that wants to deploy the system.

     So, if you had a facial recognition system, maybe the provider is Microsoft, Clearview AI, someone like this, and the user could be an individual police department, a local actor, something like that.

     I think one of the issues with the act as it is, is that there are not enough obligations on users relative to providers.  I don't think there should be fewer obligations on providers, but we should have additional obligations on users.

     Precisely because of what Nicolas was saying, are the developers really the best placed to assess all of the risks to fundamental rights? 

     Obviously, some risks to fundamental rights occur at the design stage, based on the types of training data, and are foreseeable.

     What we know is that it's really the context in which you deploy a system that's going to create a lot of risk.  And actually, the entity that is deciding we need an AI system, we're going to deploy there for this objective, is the entity that should be assessing the risk to fundamental rights.

     So, what we're actually asking for, and I can maybe share a link if I can do that in the chat with everyone.  Access Now, European Digital Rights, Algorithm Watch and 160 Civil Society organizations have published a position asking for changes to the act.  We're asking for more obligations on users.

     And more concretely, asking for users of high‑risk AI systems, deploying high‑risk AI systems to do human rights impact assessment.  We have a list of things we want in there, but one of the things we want them to do is to identify affected groups. 

     We also want ‑‑ at the moment, some of you may know, it's one of the most interesting things in the act: there's a publicly viewable database of all high‑risk AI systems on the market in the EU.

     That only focuses on providers.  We'd just know, for example, that Clearview AI sells facial recognition in the EU.  We need to know where these systems are in use. 

     So, users should also have to register their use of high‑risk AI systems.  What that would do is give people the opportunity to know my local police department is using this system, my local council is using it, my university is using this system.

     If you combine that with the idea that the user has had to identify affected groups, you're actually creating the possibility of a rights holder within the act and then we want, you know, we can get into this later.

     Additional rights to be accorded to the affected people, as well as the ability to contest whether you're affected or not.  As Nicolas pointed out, it's not always such a clear individual effect.  There are often externalities to how these systems operate.

     And you know, we need this transparency to make up for exactly the deficiencies that Nicolas pointed out.  You can't assess all the risks in advance, and we need to have that base level transparency, what's being used, what's the intention, who's being affected to allow Civil Society and different communities to contribute and point out risk.

     So, these different elements, I think, coming together, if we can add these to the act, throughout the legislative process now, I think we can make it a much‑more effective instrument.
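
     To illustrate how the registration idea outlined here could fit together, what follows is a minimal, hypothetical sketch in Python of a single entry in such a public register, extended with the deployer-side information (user, context, affected groups, impact assessment) that the civil society position calls for.  The field names, the example organisations and the URL are invented for illustration and are not part of the act or of the published position.

        from dataclasses import dataclass, field

        @dataclass
        class HighRiskSystemRegistration:
            # Hypothetical entry in a public register of high-risk AI systems,
            # combining provider-side information (already in the draft act's database)
            # with deployer-side information (as proposed by civil society groups).
            provider: str                 # entity developing and marketing the system
            system_name: str
            intended_purpose: str
            user: str                     # deploying entity ("user" in the act's terminology)
            deployment_context: str
            affected_groups: list = field(default_factory=list)
            impact_assessment_ref: str = ""   # pointer to a published human rights impact assessment

        # Invented example only.
        entry = HighRiskSystemRegistration(
            provider="ExampleVision Ltd.",
            system_name="CampusAccess-1",
            intended_purpose="automated proctoring for admissions exams",
            user="Example University",
            deployment_context="access to education",
            affected_groups=["exam candidates", "candidates with disabilities"],
            impact_assessment_ref="https://example.org/hria/campusaccess-1",
        )
        print(entry.user, "deploys", entry.system_name, "affecting", entry.affected_groups)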

     >> IMANE BELLO:  Thank you, very clear.  So, we have this difficulty around implementation, whether it is about assessing risk at the development stage, and another difficulty in terms of assessing risk later in the lifecycle, in the context in which the system is being deployed.

     And that difficulty touches upon, not only users, in terms of entities that use and deploy the systems that are being created by others, by providers, but also, end users, in other words, people. 

     And sometimes those persons become affected persons, which means that they're affected pervasively, in a negative sense, by the AI systems that have been deployed and used. 

     Maybe so, Claudio, if we take a concrete example of a method or tool that we all know of, facial recognition, and if we discuss this for a bit, what would you say about the way the AI act, as drafted now, treats facial recognition systems?

     >> CLAUDIO LUCENA:  Thank you, again.  Let me give you the temperature of the context of the issue.  As of today, the Council of Europe maps 506 initiatives concerning AI regulation or governance, and it doesn't even map all of the initiatives that are out there.

     We have a Brazilian bill that has passed and has gone to the Senate now.  I don't think it's mature yet as an initiative.  Let's assume that many of those initiatives are proposed in good faith by governments or by multistakeholder arrangements or by legislative houses. 

     Each of them might have a model to assess risk, because I think the risk‑based assessment is, in itself, a good approach.  The problem is, nobody has put together a framework to assess risk, or to assess the general risks posed by systems that automate important aspects of our life.

     That's interesting, because, for cybersecurity, which is a little bit more of a, let's say, established area of human action, we do have a set of precise aspects that we consider when we're going to assess risks.

     We have never done such a thing for an automation system that automates important aspects of our life, as Daniel mentions.

     And some of those aspects are not there.  Risk assessments have become the ‑‑ I'll say the most appropriate tool for us to face new technologies, but, from my perspective, as they have come into the digital realm, helping us do so from data protection up until now, we have been dealing with them for very little time.

     We're not very sure ‑‑ there's no general standard on how to conduct that.  Cases like the Ofqual one, in the U.K., which was not really a fully automated system. 

     They clearly show that when more stakeholders are heard on an issue, it's more likely for us to understand that there is a risk that wasn't seen when people, in good faith, designed the system.

     So, transparency lies at the basis of all these discussions.  And I'm not mentioning transparency to address the explainability dilemma, which is something that has to be considered within a technical context; I'm talking about transparency about what we are using the system for ‑‑ things that we can clearly describe in human language: we're using systems to help us achieve that task. 

     You're a public service, you're a private company, I understand that there are trade secrets, economic interests that are legitimate and have to be protected, but if we're willing to tap into the advantages ‑‑ and I do believe that automation may bring us some advantages, may let us scale some degree of equality that otherwise wouldn't be possible without these tools ‑‑ we have to start from the basis of transparency. 

     Transparency is also the basis for any legitimate and healthy use of data.  Having said that, for the last minute: I believe the community doesn't have any opposition to facial recognition technology as such.

     The issue is that the examples, the instances that we have, show that they work very poorly.  Very poorly, and they may cause a lot of damage.  When we talk about facial recognition technology, even if you talk about face detection, which doesn't exactly process personal data, it has been the object of judicial decision here in Brazil.

     Even if you're not touching on the personal data aspect, these systems pose risk because they touch upon very sensitive data and they work very poorly.

     So, again, it's a matter of us trying to find a more global and meaningful way to assess risks, and to assess risks in a way that makes the technology really usable.  Usable in a concrete case. 

     >> IMANE BELLO:  Thank you, so, obviously, this question of transparency is related to the question of trustworthiness.  If you don't know what you're discussing, it's hard to trust.

     I wouldn't say, if I may, that ‑‑ so, how do I put this?  Obviously, facial recognition systems don't really work, especially with people of color.  But before we ask the question of whether they work or not, it's important to take a step back and ask whether we want them to be part of the public space or not. 

     I don't know whether citizens agree about facial recognition systems.  It's not solely about their performance and effectiveness.  If we make the discussion about effectiveness and performance, we forget to ask ‑‑ I'm asking all of you, maybe Patricia or someone else wants to react to this ‑‑ whether or not we want them in our public space.  And as you said, very rightfully, Claudio, because of the data that they use, those are systems that, per se, I'd say, have some risk, or carry some risk, in themselves.

     Maybe, Daniel, really shortly, then we'll move on ‑‑ so, really briefly, Claudio mentioned this lack of a global standard, a global set of rules that would be practical, which is also, you know, related to what Nicolas was saying, that is, practical, translational rules for the technical community.

     I'd like us to take a step back and see whether or not we could learn from our U.S. counterparts.  Daniel, quickly on facial recognition.

     >> DANIEL LEUFER:  To really just + 1 what you said there, Imane.  From the start, we've been saying it's bad when they don't work well, but you also don't want to perfect instruments of surveillance.

     And actually, a perfectly accurate facial recognition system in our public spaces is incompatible with fundamental rights for us.  It's not all about live facial recognition.  All of the focus seems to have gone onto live facial recognition, but what's called, in the act, post remote biometric identification is equally, if not more, harmful in some cases.

     Take investigative journalists.  Say a journalist publishes a huge expose that implicates public figures or law enforcement, they could go back through footage and find out who that journalist spoke to, identify their sources and this is a threat from post remote biometric identification, not live.

     So, the idea that there's this kind of "live is the worst" is completely unfounded and we think that both types need to be fully prohibited in our public spaces.  Facial detection, things like this, we need to ensure they work well.

     The ones that are not going to be prohibited, need to be classified as high‑risk, so we are sure they work properly. 

     >> IMANE BELLO:  Thank you.  So far, we've seen that the impact of the draft might not be as high as we want it to be.  We've also seen an issue with implementation, especially at the development stage, because there's, A, this given culture within the technical community, and also, B, a lack of training, maybe, in ethics, human rights and risk assessment methodology.

     To really assess risk.  There's also this issue of which risks should be assessed and related to which systems.  Is it only high‑risk systems or all of them? 

     And we've also seen, as Claudio was saying earlier, that in order to ensure transparency, which is really crucial, and this is related to what Daniel was saying, we need to know which systems are used for which purposes and where. 

     So, maybe, moving on to, you know, how do we pave the road for the European Regulation going onwards: is there anything we could learn from the approach of the United States on the subject?  What would you say, Patricia? 

     >> PATRICIA VARGAS:  The problem with the United States is that a lot happens, but in the end nothing happens.  The U.S. has a federal system.  What do I mean by this?  The country lacks a federal mandate to regulate AI as a country. 

     But yet, there are multiple state legislations on the subject and there are multiple attempts and proposals to regulate AI within Congress. 

     So, general artificial intelligence bills or resolutions were introduced in at least 17 states in 2021 and 13 in 2020.  Just in the last two years.  Right? 

     And such legislation has been enacted in at least five states, different in all of them. 

     The main points they try to address are the creation of agencies that are supposed to advise the authorities about the risks of AI; some of these state proposals take a particular interest in protecting children; the prohibition of using models or algorithms that further discrimination based on race, color, national or ethnic origin, gender or disability; and data privacy.

     In particular, there is a lot of discussion about facial recognition.  It has been acknowledged and accepted into some state legislations, but it's under constant questioning because of privacy concerns, very similar to the ones that Daniel just mentioned.

     Plus, the police are constantly accused of overusing these systems and of lacking the proper training to handle them. 

     Now, on the side of the executive branch, in July the White House ‑‑ specifically, the White House Office of Science and Technology Policy and the National Science Foundation, two agencies under the U.S. government ‑‑ announced what is called the newly‑formed National Artificial Intelligence Research Resource Task Force. 

     Which is supposed to write the roadmap for expanding access to critical resources and educational tools that are supposed to spur AI innovation and economic prosperity. 

     Now, the task force is supposed to provide recommendations including technical capabilities, governance, administration and assessment, as well as requirements for security, privacy, civil rights, and civil liberties. 

     That's one attempt on the side of the executive branch.  Now, in July, a couple of months after, another agency within the U.S. government, the U.S. Department of Commerce's National Institute of Standards and Technology, NIST, announced the development of AI risk management guidance.

     The main goal of this guidance is supposed to be to help technology developers, users and evaluators improve the trustworthiness of AI systems and make new AI technologies more competitive in the marketplace. 

     The mandate of the NIST, which is different from the White House, seems to be more‑oriented to analyze similar variables to the ones contained in the new European Regulation. 

     So that's something to pay attention to, when they produce a report or whatever outcome they decide on, but they're not there yet.  This is a new initiative from July of this year. 

     Now, this is on the side of the executive and the states, but Congress is also going down its own path, right? 

     Congress is, unfortunately, particularly interested in using AI for military purposes.  They're highly focused on intelligence, surveillance, reconnaissance and, to some extent, semiautonomous and autonomous vehicles.

     And a lot about lethal autonomous weapon systems.  There have been multiple bills within the last two periods of work at the Senate and none of them has been enacted. 

     And up to this point, it's not clear whether a final agreement will be achieved or not.  And what happens in the U.S. is, well, we always see this dichotomy between the U.S. and Europe: Europe tends to regulate, while the U.S. tends not to regulate. 

     So, if the U.S. is able to consolidate a federal statute, at some point we're going to see a clash of regulations, and up to that point, a final outcome is very difficult to foresee.

     >> IMANE BELLO:  Thank you, that's very interesting.  To me, that raises two different sets of questions.  The first one is related to prohibitions, and maybe I'd like to hear from Daniel on that.  And then the second set of questions, and maybe, Claudio, you want to react to this, is how do we foresee ‑‑ if foreseeable, obviously ‑‑ the impact in terms of jurisdiction that the AI act could have? 

     Like the GDPR did, setting a global standard that is applied within other jurisdictions, even though its scope is limited to the European Union. 

     So, maybe on the prohibitions as they are currently written within the act, Daniel? 

     >> DANIEL LEUFER:  We already talked about the one on remote biometric identification.  Very quickly, we'd delete the exceptions, also delete the word realtime, and have a prohibition in publicly accessible spaces.  That's the only way it can be done to protect fundamental rights.

     You've, then, got three other prohibitions, which are quite‑broadly phrased.  In the recent text from the Slovenian presidency, they've been improved.  There were strange definitions about subliminal manipulation of people that could be proven to cause psychological or physical harm.

     That's kind of silly, because if you're subliminally manipulating someone to do things against their will, that's enough, that's bad.  Like, that should be prohibited.  It doesn't matter if it causes them harm.  Subliminal manipulation for our own benefit is also not fundamental rights compliant.

     Beyond the four that are there, you know, Civil Society has collectively called for additional prohibitions on some uses of predictive policing. 

     Particularly problematic ones, and emotion recognition as well.  And this was backed up by the European Data Protection Supervisor and the European Data Protection Board in their opinion: they also believe that AI systems which purport to detect our emotions, AI lie detectors, should also be prohibited.

     But the meta point I'd like to make is that the list of high‑risk systems in the AI act can be updated.  There's a mechanism to do that.  There isn't one to update the list of prohibitions.

     And that seems like hubris to me.  It's very short‑sighted.  The Commission has acknowledged that some systems pose an unacceptable risk to fundamental rights, but claims to have captured all of them with those four.  That doesn't make sense.

     There needs to be a mechanism to allow updates of Article 5.  We can discuss what that mechanism should be, but it's clear the regulation will not be future‑proof if it doesn't have that.

     >> IMANE BELLO:  If we combine what you were saying and what Nicolas was saying earlier, we have only high‑risk systems whose impact needs to be assessed and monitored, in the short and the long run.  And it's a one‑time risk assessment, even though their impact might change over time. 

     That's one pitfall, and then we have the fact that it's solely the high‑risk list that can be updated, but not the list of prohibitions.

     So, if I understand correctly, independently of the consequences, it doesn't really move.  Okay.  Interesting, thank you. 

     Maybe, Claudio, really shortly, and then we'll try to take some questions from the audience, if we can.  Claudio, do you want to react to what Patricia was saying earlier?  Notably about the future clash of jurisdictions, based on the current U.S. approach.

     Would you say that, just like with the GDPR ‑‑ we've seen, for instance, big digital companies just, you know, applying the GDPR standards for all of their end users, independently of where they are ‑‑ would you say the scope of the act has a chance to do the same? 

     >> CLAUDIO LUCENA:  I do think we have a fairly good starting point.  Not every single decision in the text is well‑taken, but it surely sends a message to the rest of the world that, let's say, a far‑reaching universal framework for regulating or managing or governing AI within a certain jurisdiction is somewhat possible.

     This goes ‑‑ in the past years, we were very much looking at sectoral, industry approaches.  We talked very much about different frameworks being required to answer to automation issues in the country.

     So there was, for example, an initiative or thought to address education, health, public uses of automation.  And I think having a standard like the proposal might be an interesting development. 

     What we don't have anymore is the leadership position that the GDPR represented.  When we had the GDPR as a global standard, there were not 506 ongoing initiatives, 506 initiatives for governance or legislation. 

     And I think we do have others now, because the GDPR also represented a geopolitical landmark: European values guiding the development of data protection throughout the world. 

     And it is strategic, but I think AI represents a much more strategic issue in the world.  So, we have, for example, China clearly reacting with a very different approach from the one in the EU regulation for AI.

     China doesn't base its approach on algorithmic, sorry, it's not a risk‑based approach.

     It's a provision to address recommendation systems, in general. 

     Let's say, it's used in our everyday tools.  It does address user needs.  Of course, there are issues of implementation, how the country and the political culture and legal culture will implement that.

     We cannot be naive about that.  But it does address user needs.

     It does address the fact that users can more actively interact with the platforms and demand remedy, which is not currently present in the European text.  And so, I think we have a good starting point.

     There's a possibility it will represent, again, a standard, but it's not alone anymore.  There are other initiatives that have to be considered at this point.  And the developments, as of now, are a bit difficult to foresee.

     >> IMANE BELLO:  Thank you, very interesting and very clear as well, in terms of putting the draft within its geopolitical context.

     We discussed it earlier: right now, as the draft is written, there are no remedies for end users.  Maybe, Daniel, you want to react shortly to this, and I was wondering if, later, we have five or ten minutes for questions.  If you do have questions, put them in the chat, we'll read through them and discuss together. 

     Daniel, really shortly, on the question of remedy for end users?

     >> DANIEL LEUFER:  Yeah, I think that's really lacking in the current draft, and a lot of people immediately pointed this out: rights aren't conferred on affected people, end users, whatever you want to call them, in the draft.

     I think there's a technical issue in terms of who would be the rights holder under the act.  In the GDPR, it's the data subject; under the act, it can't just be anyone.

     As I mentioned, what we've proposed is if you have the obligation on users to identify affected people, then the people identified as affected people could be rights holders, under the act.

     So, those two things, I think, have to go together.  Yeah, we have a proposal, which I linked to, to provide redress in the case that something has gone wrong.

     Also, I think an interesting one to look at is rights ‑‑ the right not to be subjected to a prohibited AI practice ‑‑ because, although there are prohibitions, it's quite clear that companies will say, we're not subliminally manipulating you.

     You then have to prove that what they're doing does fulfill this definition of a prohibited practice.  I think we need to think about the enforcement of prohibited practices to avoid the situation where someone just says that's not what we're doing. 

     >> IMANE BELLO:  Thank you so much.  Okay, we've been asked to wrap it up.  Maybe as a conclusion, Nicolas, could you tell us what you see, how you think we can pave the way forward towards the regulation on AI systems?

     >> NICOLAS DIAZ FERREYRA:  Thank you very much.  The discussion has been very interesting and engaging.  As a take‑home message, what this panel is actually exposing is that this will take, once again, a lot of interdisciplinary work and very hard work, and I think we should take into account all the different fronts that converge within the AI ecosystem and all the different stakeholders.

     We have to be aware that, mainly, as you mentioned, Imane, earlier, developers are quite performance‑driven, and many of them are not properly trained in AI ethics or any kind of human rights impact assessment. 

     We have to provide the means for conducting that on the front line of AI development.  I think this would be, at least for me, the main focus, also for my research within the next years. 

     And hopefully, we could all join forces for creating this trustworthy AI ecosystem that we all want and, with more awareness of the risks of using and deploying AI systems, make use of them in a wiser way. 

     >> IMANE BELLO:  Thank you so much, Nicolas.  Thank you for joining us today.  It was my pleasure to moderate and we'll see you soon around the corner for AI Governance.  Take care.  Bye‑bye.

     >> Thank you, bye‑bye.

    

     [Presentation concluded at 3:36 a.m. CT/10:36 a.m. UTC].