IGF 2021 – Day 1 – OF #10 AI and the Rule of Law in the Digital Ecosystem

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> VANESSA DREIER: I think it worked, right? I see you back online with your name. I've reached out to Edward Asante to ask if he's able to join, but he hasn't replied to me yet. I would suggest we wait another minute and then we start, because we have a very packed agenda, a lot of interesting questions to be raised, and a lot of topics to be discussed. I don't want to lose too much time. I apologize already for the technical issues in the beginning. I guess these are the new times with online and virtual meetings; it's always a bit challenging on everyone's end.

>> PRATEEK SIBAL: It's quite paradoxical that the Internet Governance Forum's website crashes.

>> VANESSA DREIER: Okay. Thank you very much for your patience in waiting. I would now like to start the session so we can deep dive into our topic today. First of all, dear speakers and participants, very warm welcome to our session. My name is Vanessa Dreier, I'm a UNESCO Associate Expert for Digital Transformation and I'll be the moderator for this open forum on artificial intelligence and the rule of law. Kindly note that this discussion will be recorded.

If you have any questions during the session, please type them in the chat. There will be a question and answer session at the end and we will address a selection of the questions raised throughout.

To kick off, I will hand the floor to my UNESCO colleague, Prateek, on our work in the field of AI and digital transformation, and then hand over to our partners of the project, Cetic.br, before introducing our panelists.

Prateek is the AI Project Lead in UNESCO's Section on Digital Transformation and Innovation. Prateek, the floor is yours.

>> PRATEEK SIBAL: Thanks, Vanessa, and hello, everyone. I'll be pretty quick and brief. I just wanted to introduce some of the work that UNESCO has been doing in this field over the past 10 years. We've trained about 23,000 judicial operators worldwide. Some of you are here, so thanks for joining us today.

Some of you are speakers as well today, and thanks again for that. Our program has worked in around 150 countries and has trained judicial operators on questions related to freedom of expression, access to information, and the safety of journalists. In 2020, actually, at the Athens Roundtable, which is also happening now, we came up with the idea, along with all the participants, to venture into the domain of AI. And hence, we started working on a MOOC. As part of this work, we have developed four pillars. First, UNESCO is going to develop institutional partnerships with regional and international courts, as well as international associations of prosecutors and judges, to foster democracy and the Rule of Law. Second, we'll be supporting national schools of judges and prosecutors to strengthen their capacities through modalities like train-the-trainers. Third, as part of the judges initiative, we've already built comprehensive toolkits, which include MOOCs, curricula, regular webinars, and on‑the‑ground trainings, and we'll do the same for AI and the Rule of Law. Finally, there's a technology dimension: these technologies offer opportunities to strengthen access to justice, but also smoother administration of justice. So we'll be working on the tech aspect as well, through repositories and common sources.

I will stop here; I'm really looking forward to the discussion and the insights. And from our experience before, I think it's always helpful to engage with you as we develop this program further.

So thank you so much.

And I look forward to the discussion.

Over to you, Vanessa.

>> VANESSA DREIER: Thank you very much, Prateek.

And let me now just hand over the floor to Ana Laura, who is Coordinator of Technical Cooperation at Cetic.br. The floor is yours, please.

>> ANA LAURA MARTINEZ: Thank you very much, Vanessa. It's really a pleasure for me to represent the Regional Center for Studies on the Development of the Information Society, Cetic.br, at the Brazilian Network Information Center, NIC.br, in the opening of this forum, which brings key discussions on artificial intelligence and the rule of law to this Internet Governance Forum. We at Cetic.br didn't hesitate to partner with UNESCO in supporting this event, as well as in developing the MOOC devoted to this matter, which will take place next year.

As will be discussed in greater detail today, AI‑based systems open multiple possibilities for the Rule of Law, but they also pose ethical challenges that need to be addressed.

In this context, producing reliable data is crucial for providing a solid understanding of how AI is used in the judiciary, which is essential to maximize its benefits and to mitigate its risks.

AI measurement efforts are essential for informing stakeholders and guiding the development, implementation, and monitoring of AI policies, but they face some challenges. On one hand, AI technologies are developing at a very fast pace, which makes the very conceptualization and operationalization of such concepts a complex endeavor. Additionally, the measurability of AI is challenging not only because it's not a stand‑alone technology, but also because it's often embedded in such a way that it becomes invisible. It is fundamental that a wide range of data providers work collaboratively to produce relevant data on the use of digital technologies in general, and on AI in the judiciary in particular.

This should not be an isolated effort, but one made within a multistakeholder ecosystem, following commonly agreed frameworks which offer international comparability, and in compliance with personal data protection regulations. Monitoring and evaluation should also account for the extent to which AI is being adopted and used ethically in the judiciary. Measuring the adoption of AI is indeed a complex task, but nonetheless essential to inform stakeholders and guide public policy. We at Cetic.br hope that bringing this issue to the table, including it within the MOOC and in debates like today's, will help bridge the gap between research and the adoption of AI, so that its economic, social, cultural, and political implications are better monitored, understood, and addressed.

We wish all of you an excellent discussion and a fruitful exchange in this forum and we hope to see you in the MOOC next April. Thank you, Vanessa, over to you.

>> VANESSA DREIER: Thank you very much, and thank you both for your opening words and introduction to your work and why it's important to build capacities of judicial operators on the impacts of digital technologies and artificial intelligence on justice systems.

Judicial systems worldwide are already using AI for a variety of tasks. AI tools help lawyers, for example, to identify precedents in case law, and they enable administrations to streamline judicial processes. They also support judges with predictions on issues including sentence duration, recidivism scores, and even decision‑making. So as part of UNESCO's global initiative and in partnership with The Future Society, UNESCO is launching its first massive open online course on AI and the Rule of Law, with the aim of engaging judicial operators in a global and timely discussion around AI's impact and implications for upholding the Rule of Law. This open forum will host judges, lawyers, and representatives of international organizations from around the world to share insights on the challenges and the good practices adopted in their jurisdictions concerning the use of AI, and on the legal implications of AI for society from a human rights perspective.

It is therefore my pleasure now to introduce you to our speakers for the session. Welcome, Katherine Forrest, a Cravath, Swaine & Moore partner and a former United States District Judge for the Southern District of New York. Welcome, Isabela Ferrari, Federal Judge at the Second Regional Federal Court of Brazil. Welcome, Benes Aldana, President of the National Judicial College in the United States. Welcome, Edward Asante, President of the Community Court of Justice of the Economic Community of West African States. And welcome also, Nicolas Miailhe, Co-founder of The Future Society and host of the Athens Roundtable on AI and the Rule of Law that is happening in parallel to IGF. Thank you all very much for joining today.

Let's jump into our discussion and first question. Why do we focus on AI and the Rule of Law, and why has it become such an important topic? Nicolas, may I ask you to start? As co-partner in the massive open online course on AI and the Rule of Law, what is The Future Society's interest at the intersection of AI and the Rule of Law? And what gap are you aiming to close with our joint course? Over to you.

>> NICOLAS MIAILHE: Thanks a lot. Nice to see so many friendly faces today.

It's about capacity building. You said it. And what gaps are we looking to close there? One, it's the gap in moving from education to training, and equipping judges and judicial operators with, first, a common understanding of what AI and the Rule of Law, AI for the Rule of Law, and the Rule of Law for AI mean. And then moving from a basic understanding to more expert training, including using and mobilizing the tools and algorithmic processing across cases and situations in criminal and civil justice. It's first about that.

It's also about equipping them with the capacity to bridge the gap from principles to practice. What does that mean? When you look at the good governance of AI that's emerging, it has to be the result of smart cocktails of self, soft, and hard regulatory mechanisms. At the end of the day, how those mechanisms are adjudicated eventually falls, in large part, on the table of the judges. It's about really equipping them to do their job well across a number of local contexts.

But it's also about recognizing that, next to the adjudicatory route, there are other routes through which AI is governed: the regulatory routes and the administrative routes. It's really important to help judicial operators and citizens to situate themselves within this emerging AI governance regime.

That's the kind of gap we're trying to close and this MOOC is the first step in that direction. As we move forward and instruments and tools get developed, it's going to be very important to keep on closing the gap from education to training and from AI governance principles to practice, as they apply to the Rule of Law.

>> VANESSA DREIER: Thank you very much, Nicolas, that's super interesting already. May I ask you, Benes, what specific need do you see for training judicial operators on emerging technologies, and specifically AI?

>> BENES ALDANA: Thank you, Vanessa. It's a pleasure to be here with all of you. As you pointed out, a variety of technologies have been inserted into daily human interactions, and more recently AI is beginning to displace certain human activities. Legal disputes around AI are therefore inevitable, and judges need to have both the training and, as Nicolas said, the capacity to understand the legal and ethical implications of this technology.

Take, for example, the emerging use of AI in medicine. This can be expected to give rise to multiple malpractice scenarios, including missed or incorrect diagnoses, improper treatment recommendations leading to bad outcomes, privacy concerns occasioned by wrongful disclosure of confidential patient information, perhaps even emotional distress caused by a faulty diagnosis, or potential discrimination issues that could arise, for example, where AI is prioritizing the allocation of limited medical resources for patient care.

There are other areas. How do defamation laws apply to AI‑generated speech? What ground rules should be in place as we use AI tools to assist in sentencing? What do hyperrealistic fake videos mean for the rules of evidence? So there's all the legal implications, and there's ethical implications, as well.

I know that outside of the United States, particularly in Europe, they're more advanced in trying to develop principles to govern some uses of AI, particularly in the judiciary. And I know that in Europe principles have been established to address fundamental human rights and privacy protections.

Again, the challenge for the judiciary will be that AI challenges our most fundamental understandings of, and commitments to, fairness and due process, and even our understanding of truth.

And so the judiciary, I think, plays an important role in guiding this conversation, and as Nicolas pointed out, the Athens Roundtable is going on right now. For me, after the last three years of engagement with the Athens Roundtable, The Future Society, and UNESCO, I do believe the most important message about AI is that the future of AI is neither predetermined nor beyond our influence. That is the reason, I think, we continue to bring the stakeholders together and engage in this conversation, and the judiciary needs to be part of it. At least in the United States the judiciary is sometimes behind, and the National Judicial College is doing its part to make sure that it's ahead and able to understand these issues as AI continues to influence all our activities.

>> VANESSA DREIER: Thank you very much. This was an interesting first round already. I would now like to deep dive into AI in the judiciary and specific use cases of AI in the justice system.

Isabela Ferrari, can you introduce us to the AI system VICTOR? What is the importance of the AI system VICTOR in the Brazilian justice system? And what are the main risks you see?

>> ISABELA FERRARI: Thank you very much. I would like to start by saying it's a pleasure to be here. So VICTOR is the Brazilian Supreme Court's AI decision assistant. From what I know, VICTOR is the only AI tool embedded in a Supreme Court in the world. And to understand the potential and also the risks that VICTOR poses, we need to understand a little bit about what it does and about the Brazilian environment, Brazil's unique litigation environment, I would say. So in Brazil, we have a caseload of around 80 million lawsuits, one lawsuit for every 2.6 inhabitants. It's a lot. And the appeals filed to the Brazilian Supreme Court recently are around 60,000 each year, and the Brazilian Supreme Court rules on around 120,000 appeals each year. So when we saw this environment and this caseload, and we had to decide how to deal with it, what was done in my country was to establish the electronic lawsuit. We have had electronic lawsuits since 2010, so it's not something that came with the pandemic. And then we also established a requirement for appeals to the Brazilian Supreme Court called general repercussion. What is that? It's a requirement that, to take a case to the Supreme Court, the appeal must have social relevance. So VICTOR helps us to see what has and what does not have general repercussion.

The activity consists of saying whether an appeal has general repercussion, does not have it, or whether we don't know and the Supreme Court needs to examine the lawsuit to see whether or not there is general repercussion, because for many themes this has already been decided. This was a human activity of the Brazilian Supreme Court's civil servants, who used to take 44 minutes to do it. This is something that VICTOR is now doing in around 5 seconds.

But besides saying whether the lawsuit has, does not have, or may have general repercussion, after being trained on a large number of decisions previously taken by the Supreme Court, another thing that VICTOR does is OCR the lawsuit. That means it reads the images and understands what is there.

Another problem the Brazilian Supreme Court has is that it receives lawsuits from all the Brazilian courts. We have state, military, electoral, and labor courts, and so forth, and the files that come to the Supreme Court arrive in different formats: PDFs, images, et cetera.

So the civil servants don't take 44 minutes just because they are trying to reason about and understand the repercussion. It's also because they need to orient themselves and find the important documents in that lawsuit, and that takes time.

So when VICTOR OCRs the lawsuit, it also tags the most important parts of the documents, which is wonderful, and it makes the lives of those who have to deal with that lawsuit later easier.
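To make the kind of pipeline Judge Ferrari describes more concrete, here is a minimal, purely hypothetical sketch of a document-triage classifier of the sort VICTOR is said to perform: text extracted from case files is vectorized, and a model trained on past rulings suggests a label. The data, labels, and model choice below are invented for illustration and have nothing to do with VICTOR's actual implementation.

```python
# Hypothetical sketch of a VICTOR-style triage step, NOT the actual
# Supreme Court system. Assumes case files were already OCR'd to text
# and labelled by past human rulings on "general repercussion".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Invented toy training data: appeal summaries and past classifications.
texts = [
    "appeal concerns pension indexation affecting millions of retirees",
    "constitutional challenge to a nationwide tax rule",
    "dispute over a private contract between two neighbours",
    "disagreement about a single consumer refund",
]
labels = ["has_repercussion", "has_repercussion",
          "no_repercussion", "no_repercussion"]

# TF-IDF features plus a linear classifier: a deliberately simple,
# inspectable baseline rather than an opaque neural model.
triage = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])
triage.fit(texts, labels)

new_appeal = ["challenge to a federal statute affecting all taxpayers"]
# predict_proba exposes confidence, so low-confidence cases can be
# routed back to full human analysis instead of being auto-labelled.
print(triage.predict(new_appeal), triage.predict_proba(new_appeal))
```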

But at the same time, we have a super AI tool that is really understanding everything that has happened in all the Brazilian courts, in all lawsuits, from beginning to end. So it's a superpower. And every superpower comes with big risks.

What I would say specifically about VICTOR is that it poses a few risks. First of all, while this tool is still being tested in Brazil, the parties are not advised. They don't know that VICTOR has been used in their lawsuit. And if they don't know, they can't appeal against that specific situation.

And some people respond to these criticisms by saying that it's not relevant, because VICTOR just makes a suggestion and then a civil servant has to agree with that suggestion. But the point, for all of those who have studied a little bit about algorithmic bias, is that this is not enough. So what is algorithmic bias? It's the bias that we humans have that makes us tend to agree with a decision that was suggested by a machine, because we feel that the decision is scientific and more trustworthy. And the reality is that even if algorithms have a mathematical basis, they learn from data, and data is produced in our biased society. So many times the suggestions that algorithms make, in all sectors of our lives, have nothing to do with science. They're much more human than anything else. So the first criticism is that we are unaware when this tool is used, so we can't appeal against it. And we have algorithmic bias, so the response that we have a human in the loop is not enough. It does not solve our problem.

And I think that these are the two biggest points of attention: not being informed when the tool is used, and also the point of algorithmic bias, which is not something that comes only with VICTOR, but something bigger that must always be in our minds when we talk about using algorithms to decide.
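As a purely illustrative aside, the way biased historical data can surface in a model's suggestions can be shown with a few lines of code. Everything below, including the groups and decisions, is invented; it is only a sketch of the kind of audit that makes such disparities visible.

```python
# Minimal sketch of auditing an AI assistant's suggestions for group
# disparities. All data here is invented for illustration only.
from collections import defaultdict

# Imagined audit log: (group, suggested_decision) pairs.
suggestions = [
    ("group_a", "deny"), ("group_a", "deny"), ("group_a", "grant"),
    ("group_b", "grant"), ("group_b", "grant"), ("group_b", "deny"),
]

counts = defaultdict(lambda: {"grant": 0, "total": 0})
for group, decision in suggestions:
    counts[group]["total"] += 1
    if decision == "grant":
        counts[group]["grant"] += 1

# A large gap in grant rates between groups is a red flag that a
# "scientific" suggestion may merely reproduce patterns in biased data.
for group, c in sorted(counts.items()):
    print(group, "grant rate:", round(c["grant"] / c["total"], 2))
```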

>> VANESSA DREIER: Thank you very much for raising this important point. And I'm very happy to share that one of the six modules of the MOOC addresses algorithmic bias and the digital divide, because it is indeed one of the biggest challenges when applying probably any digital technology to justice systems. So yes, thank you very much for flagging this.

Let me direct the discussion towards the impact of AI on the Rule of Law and related challenges, or let's continue on that path, so to say. Benes Aldana, as President of the National Judicial College, could you identify two critical challenges for AI in the judiciary, maybe adding onto what Isabela has just shared?

>> BENES ALDANA: Yes. I think the first one, which was already mentioned, is algorithmic bias. In the United States, we have a couple of cases where that has become an issue. So I think that continues to develop here, along with trying to gain a better understanding of how to make sure that the tools we use for presentencing and post‑sentencing, or in court in general, are fair.

The second is in the area of autonomous vehicles involving injuries and deaths. Those raise complex legal questions in tort and criminal law, and issues of causation arising from faulty AI decision‑making involving motorists are going to challenge judges in terms of identifying who the responsible party is: the owner, the machine, the manufacturer, the software developers, or the various contributors that input data into those decision algorithms.

So that's one area where we're trying to educate our judges, but I think the first part, which Judge Ferrari already mentioned, is probably the most critical piece in terms of making sure that we have a handle on it.

>> VANESSA DREIER: Thank you very much. Maybe just to follow up on that, Isabela, may I give you the floor again? I found your points very interesting and would like to elaborate further: could you share your insight on what you believe to be the most challenging human rights issues, maybe for the Brazilian case for now, considering AI deployment in the judicial system?

>> ISABELA FERRARI: I always think that we need to look at the situation and take decisions regarding the situation we live in. So in Brazil, we have a situation where people sometimes have a lawsuit that takes 30 or 35 years to be ruled on. And justice that takes that long is not justice at all. So I'm really an enthusiast of online dispute resolution systems and online courts and AI tools that help us judges do our work and do what we need to do. But the point is that I really want us to advance on this topic, and to advance in a safe way. We need to be sure that we're heading in the right direction. And at the same time that I think technology can help us with some of the Rule of Law values, like accessibility and everything that has to do with it, we need to be sure that the two biggest risks when using technology, especially AI tools, are addressed. The two biggest risks are the opacity that is inherent to this kind of tool, and possible discrimination. So I think this kind of discussion that we're having here is really, really important. Because at the same time that we shouldn't stop progress, and at the same time that, at least in a legal system like the Brazilian one, where we have so many problems and where technology can help us with those problems, we shouldn't close this door, we need to be sure that we are on the right path. So I think these discussions are very important to guide us along this path.

>> VANESSA DREIER: Thank you very much. I'm happy now that we start the discussion. I see Nicolas has raised his hand and we already have the first question in the chat. Nicolas, firstly to you, and then I will pick up the question from the chat.

>> NICOLAS MIAILHE: Thank you. Let me take this question from a different angle and anchor it in a use case, a civil one: electronic discovery. We want to avoid AI becoming "weaponized" to favor the powerful. We know that in some cases mobilizing electronic discovery is going to be difficult and will favor entrenched interests or the most powerful. Take, for example, a large corporation versus a small enterprise. How do we ensure the availability of open‑source benchmarking protocols as a tool, so that judges can facilitate negotiation between two opposing parties and avoid a weaponization of the negotiation that would eventually favor the most powerful?

So creating these benchmarking protocols, creating the tools to facilitate the work of judges, to create a level playing field and to maximize due process and access to justice, is something that is extremely important on the criminal side, of course, but also on the civil side.

And it's a question of practical tools, instruments, and standards, particularly benchmarking standards, to really equip the judges in a way that avoids their dockets being used or misused for the weaponization of negotiations. That's an example.
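What a benchmarking protocol might measure can be illustrated with a tiny, hypothetical example: in e-discovery, document-review tools are commonly compared on precision and recall against a human-reviewed gold set. The document IDs and sets below are invented; the sketch only shows the arithmetic such an open protocol would standardize.

```python
# Hypothetical sketch of one check in an open e-discovery benchmark:
# precision/recall of a review tool against a human-labelled gold set.
# All document IDs are invented for illustration.

gold_relevant = {"doc03", "doc07", "doc11", "doc19"}   # human review
tool_retrieved = {"doc03", "doc07", "doc08", "doc19"}  # tool's output

true_positives = gold_relevant & tool_retrieved
precision = len(true_positives) / len(tool_retrieved)  # how much noise
recall = len(true_positives) / len(gold_relevant)      # how much missed

# Publishing such metrics under a shared protocol lets a judge compare
# vendors' claims on an equal footing instead of taking them on faith.
print(f"precision={precision:.2f} recall={recall:.2f}")
```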

>> VANESSA DREIER: Thank you very much. Maybe that adds very well to the question that was shared in the chat by Samik: how exactly are you educating judges, for example, concerning automated vehicles? Do you have an example of how to transfer knowledge to judges on AI case law through the networks that you have?

>> BENES ALDANA: At the University of Nevada, Reno, we have a pretty robust department on campus that's dealing with autonomous vehicles. So part of our education is partnering with them, and also with other institutions in the United States, to give judges the background and understanding of the technology, for one, and raising the issues I've just mentioned about responsibility and fault. So no, we don't have a complete, comprehensive program yet, but it's a start. We'd like to include that in our AI curriculum. We have a week‑long course this year on artificial intelligence, and that particular topic will be one of the things covered.

>> VANESSA DREIER: Thank you very much. Maybe I'll just address another question to you. As the leading provider of judicial education in the United States with the National Judicial College, how do you think we can expand the reach of the current training on AI and the Rule of Law? Where are the limits that you see, and what needs to be done?

>> BENES ALDANA: I think UNESCO's effort is a big effort in educating judges and judicial operators around the world. So the National Judicial College is excited to partner and be launching that effort in March. That's a first step.

Again, as I mentioned, here at the National Judicial College we're going to have our first week‑long course on artificial intelligence. We've never had that before here at the college. But we also continue to provide webinars throughout the year on different topics involving AI. Obviously, the pandemic has given us a great opportunity to connect around the world and to continue having these discussions. The only problem with that is that we're all in different time zones. I was telling Vanessa I just finished a panel discussion with the Judicial Research and Training Institute in Korea at 1:00 a.m. this morning. That's part of the challenge. But I think we can continue to build the capacity that Nicolas was talking about by collaborating with each other, and the work of the Athens Roundtable continues. And hopefully we meet in person next year, Nicolas.

>> VANESSA DREIER: And one of the topics at the Athens Roundtable was also the need for peer exchange on actual AI case law, and how to foster global exchange on specific use cases and on the challenges that are being exposed. Do you have any thoughts, as a training institution, on being a leader in this area, or on setting up a network? Do judges already approach you with their specific use cases, looking for counterparts or review partners in that sense?

>> BENES ALDANA: I think that's a good suggestion, that we can use the forum of the Athens Roundtable to create that next step. I think Prateek mentioned it, and it's a good initiative; UNESCO can continue to share that information, and certainly the National Judicial College can help by serving as the repository and sharing that information. So that's a great idea, I thought.

>> VANESSA DREIER: Nicolas, I see your hand is raised.

>> NICOLAS MIAILHE: Absolutely. We need these kinds of shared resources, living repositories so to speak, to help equip judges, who are very competent and very capable. If they have access to a living repository, then they can mobilize this knowledge within their context and within their cases to exercise and discharge their duties. Creating those shared resources and curating them in a way that effectively builds and reinforces capacities to their benefit around the world is something we care about. And stay tuned: I think this is one of the areas where the Athens Roundtable is going to move and help build capacity. Not alone. Not reinventing the wheel. Not doing it alone, but building a community working together with the main actors to deliver these kinds of resources.

>> VANESSA DREIER: Thank you. Prateek, I see your hand is raised as well.

>> PRATEEK SIBAL: I quickly want to chime in on this point as well. When we talk about the legal implications of AI, the idea is not only to have a repository; one of the ideas is also to work with a community of lawyers who will bring these cases and these issues to court. So that is also a community that needs to be incubated, or I believe it already exists somewhere. But this is also a challenge where, as Nicolas was mentioning before, there is this dimension of principle to practice. This is what will also take us to practice: taking these cases and issues to court, getting new orders and jurisprudence. I think that's another issue to look at.

>> VANESSA DREIER: Nicolas?

>> NICOLAS MIAILHE: A very important point on that, to reconnect with what I said in the beginning. What falls on the laps of judicial operators is oftentimes to resolve issues that also relate to self‑ and soft regulation mechanisms. For example, procurement guidelines and compliance officers: the way in which policies are decided at the board level of a company and realized through compliance mechanisms and/or procurement mechanisms. A lack of respect for those can also end up on a docket somewhere.

Sometimes it's handled by the regulatory route and sometimes by the adjudicatory route. But we should create practices that both equip judicial operators and relieve them of those burdens, because it's going to be much easier ‑‑ not much easier, easier said than done, but it should be easier ‑‑ to resolve these matters when there are standards for procurement and compliance, rather than leaving judicial operators in the Wild Wild West of no self‑regulation and no soft regulation, where they confront a world of uncertainty.

If we want hard regulatory mechanisms to be smarter, we have, and corporations and professionals have, to develop these tools and practices of self‑ and soft regulation. Otherwise, we shouldn't be surprised if we end up with hard regulation that over‑specifies, and a less agile environment. It's really about those cocktails, and the tools for implementing those cocktails.

>> PRATEEK SIBAL: Vanessa, you're on mute.

>> VANESSA DREIER: Sorry, Prateek. Yeah, I was seeing your hand raised again. Please go ahead.

>> PRATEEK SIBAL: While we're on this question of standards, I think it's interesting to point out that soft regulation is also helpful because technology is evolving. So in a lot of the legislative instruments which are emerging around these legal issues, they're pinning the law to standards. You can then update the standards while the law provides the guideline. In that sense, I think standards are also important going forward.

>> VANESSA DREIER: Yep. Thank you. So maybe, Isabela and Nicolas, to come back to the Athens Roundtable that is currently taking place: you aim at achieving a form of call to action to answer the question, really, of how we can tell the difference between AI that can be trusted and AI that cannot be trusted to advance justice. Could you maybe share your answer, if you have one, to this question? Whoever wants to go first.

>> NICOLAS MIAILHE: Isabela, please go first because last year you gave a very powerful and inspirational call to action and you're the practitioner.

>> ISABELA FERRARI: Thank you. So I think that when we draw this line between what we can trust and what we cannot trust, we are, first of all, taking into consideration how much information we have about it. So I think that we should start by understanding the basics of how AI operates. I could come here and tell you that AI needs to be transparent, that AI needs to be accountable, but how can AI be transparent? How can AI be fully accountable? It's impossible to say without understanding how these tools really work, at least the tools that adopt artificial intelligence. So we should start with: how do they work? And what can I ask developers regarding transparency? Why are they not using decision trees? Could they use decision trees, which are much more transparent tools than neural networks? Is that possible in this situation? We need to understand the basics. And we need to demand systems that are as transparent as possible and as accountable as possible, and we need to understand also the point of discrimination, the point of bias.

So I will highlight these two big challenges: opacity, where we must discuss what we can do to understand what is happening, and discrimination, which sometimes has to do with the way the algorithm is programmed, but sometimes has only to do with the data you use. And when you start understanding these mechanisms, you can start having productive discussions, discussions that will take you forward.

And besides understanding and being able to exchange ideas in this field, I think that we need to ask ourselves: where can we afford to make mistakes? We can't make mistakes in criminal law. We can't make mistakes with people's freedom. So where is a safe space to use AI? I can use AI in very different ways, even in the legal system. In Brazil, in the judiciary, we use AI to do very different things: to detect when a lawsuit is being repeated, to find precedents to be applied, to help judges with searches. We could also use it to make decisions, or suggest decisions, or to make risk assessments, but will we do that? Where can AI help us in a safe way? And where should we leave the work to judges? For me, these are the basic questions. What do you think about that, Nicolas?

>> NICOLAS MIAILHE: Well, let me build briefly upon what you said, and frame again the question Vanessa asked: how can we tell the difference between AI that can be trusted and AI that cannot be trusted to advance justice, quoting you from last year, Isabela. The contributions of the Athens Roundtable are policies, competence, and standards. If you consider AI systems as sociotechnical systems, and judicial operators as engulfed in these sociotechnical systems, then if you want to equip them to discharge their duties properly, in a way that creates trust, you cannot do it without benchmarking standards. I gave the example of electronic discovery; Isabela gave another one. We need those. Otherwise, companies are making claims that are not substantiated in an objective way. That needs to stop. We need to move beyond that into more objective measurement, including methodologies and benchmarking protocols. That is done through standards.

Then, as the MOOC is doing, competence: really equipping judges and judicial operators with the capacity to know when to mobilize what. Like Isabela says, if you want to mobilize a black box on very high‑risk cases which endanger liberty or life, I mean, come on, maybe not. Maybe you have to, but then you take your decision with knowledge and wisdom. And last but not least, policies, a whole set of policies, from how courts are operated up to how boardrooms and companies are operated. Creating trust requires all of that. Without it, we will not create these cocktails of self, soft, and hard regulatory mechanisms, along with the tools to implement them. That's the price to pay for trust. It will not happen in one day, but the good news is that AI, in fact, is not fully here yet. It's only a bit here. It's very unevenly distributed, and its potential to create opportunities and manifest risks is not yet fully realized. We're trying to anticipate, while in parallel the industrial age of AI is building up.
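The contrast Judge Ferrari and Nicolas draw between inspectable models and black boxes can be made tangible with a small sketch. The features, labels, and routing task below are invented; the point is only that a shallow decision tree's full decision logic can be printed and read, which a neural network does not offer.

```python
# Sketch of why decision trees are considered more transparent than
# neural networks. Features and labels are invented toy data.
from sklearn.tree import DecisionTreeClassifier, export_text

# Imagined features per case: [number_of_prior_similar_rulings,
# monetary_value_band]; label: how the case should be routed.
X = [[0, 1], [3, 0], [4, 1], [0, 0]]
y = ["full_review", "fast_track", "fast_track", "full_review"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# export_text prints every branch, so anyone can audit exactly why a
# case was routed one way; a neural net has no comparable printout.
print(export_text(tree, feature_names=["prior_rulings", "value_band"]))
```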

>> VANESSA DREIER: Thank you very much. That was a very interesting discussion so far.

We moved from the need for capacity building on AI for judicial operators, to AI systems already being implemented in justice systems, to the challenges of AI in justice systems, and finally to the question of standardization and trust in digital technology.

Thank you very much.

I would now like to open the floor for questions from the audience. I have seen that there was one remark from Urszula Marclewicz. I don't know if you would like to take the floor to address the statement on intellectual property that you made?

>> NICOLAS MIAILHE: Do you want to read it out? Can everyone see it? It's a long one, a long question.

>> VANESSA DREIER: Exactly. That's why I thought it might be nice to have somebody voice it. If not, we can also treat it as an open statement, and see if any one of you would like to comment on the question of artificial intelligence regulation in terms of intellectual property rights. Is that something you have come across? And what do you think of it?

>> NICOLAS MIAILHE: I can take a stab at it.

>> ISABELA FERRARI: I can start by saying something. It's not my field; I have a good friend who is an expert in it, a judge in intellectual property. But the point, and the difficulty, is that we understand a creation to be something that a human being does. A human being creates. So when you register something, you are the person who is the inventor. And with AI, we're seeing that some things are being created by algorithms, like perfumes. Perfumes! Something so sensitive, and algorithms are creating them. They are creating songs. They are painting, creating new paintings based on the previous paintings of famous painters. So you look at this and you say, okay, if an algorithm has created a perfume, who is the owner of this creation? Is it the algorithm? But the laws that we have today don't help; they do not protect inventions by algorithms, or inventions by animals, for example. But then you think: if it's not the algorithm, who is it? Is it the programmer? But an algorithm is built on a series of several algorithms. So is it all the programmers? What do we do with that?

This is one of the fields of law that needs to be rethought. We need to rethink intellectual property in light of these new situations that we're living through with technology. And this phenomenon does not concern only intellectual property. When you talk about civil liability, we have already talked about autonomous cars: if an autonomous car runs over someone, who is responsible? Is it the owner of the car? The maker of the car? The developer of the algorithm that was embedded in the car? So we now have thousands of questions without answers, and more and more spaces for discussion will be needed. Because if in law we're used to looking in a book for the answer to a problem, now with AI we are well aware that we have no answers. So this is the beauty and the beast of AI, to use the title of a speech that I gave on VICTOR. We're in this situation. So I'm just reframing what Urszula pointed out in her comment, and my answer is: I don't know. I think we need to discuss all of those topics, but we don't have answers that we can give immediately. And this is a challenge for judges. We're so used to having all the answers, and now we just have all the questions.

>> VANESSA DREIER: Thank you. Nicolas, would you like to add on this?

>> NICOLAS MIAILHE: No. It's been said.

>> VANESSA DREIER: Everything has been said. Perfect. I see we have two more questions in the chat. I would encourage us to answer, or at least address, these. The first one, from Evgeny Tongik: what is your vision on priorities here, regulation versus standardization? It seems easier and quicker to establish some rules or policy regulation rather than standards; the second is more long‑term anyway. What do you think?

Nicolas, maybe you can start.

>> NICOLAS MIAILHE: It's a great question, a really good question. It's difficult to answer, because we need both, if they're well developed. Let me give you an example: the GDPR, the General Data Protection Regulation in Europe. It took seven years to develop this hard regulation, and then there were two years from adoption to enforcement in 2018. Now we're getting a bit of feedback in terms of the implementation. So legislation takes time to materialize, and it often lags behind technology. Not always. But I agree that standards take a lot of time, too. Why? Because a standard is the answer to an industrial requirement. You need a standard when you move from innovation to industrial practice, when it's about disseminating across markets and societies. For a good standard to emerge, you need enough competition, enough adoption, and enough understanding of what kind of incentives we are trying to create.

Then, it's not one standard. It's standards, plural, which are also there to reflect industrial and political economies. You know? Americans versus Europeans versus Chinese. There's also this political and economic reality that has to be accounted for.

That's why at the Athens Roundtable we're trying to create bridges across the Atlantic, to help decide what we mean by interoperable standards and what we mean by converging international conventions on human rights. Not everything has to be singular.

We also need to allow for diversity, to do justice to notions like self‑determination, for example, and pluralism.

So these questions are not easily defined and answered. In my view, at this stage we need to advance things in parallel.

If you look at the EU AI Act, for example, that is exactly what it is trying to do: advance a hard‑law instrument while, at the same time, delegating to CEN‑CENELEC, the European standardization bodies so to speak, the requirement to develop a set of standards that render the legislation smarter, avoiding over‑specification of what lands in the law as opposed to what is left to professional practice. I hope that answers your question. That's how I would read the EU AI Act example.

>> VANESSA DREIER: Maybe the last question in the chat is also more directed towards you, Nicolas, since you're the one based in Europe for now. To what extent do regulations which address automated processing of personal data, such as the Council of Europe Convention 108, already give relevant guidance? It builds a bit on your previous statement.

>> NICOLAS MIAILHE: Well, public international law is confronting a problem here, not only one problem, but one problem in particular vis‑a‑vis that.

When you set up an international instrument, you are always confronted with the question of what to over‑specify or under‑specify. Why? To become law that a judge can really implement in court, an international instrument has to be translated into national law, and has to be filtered, therefore, into national law through the adjudicatory, regulatory, or administrative routes.

And that creates room for dilution, room for adaptation, which is potentially problematic.

One question the Council of Europe will have to resolve in the convention on AI they're developing right now is this: if we want this convention to have teeth and be implementable, do you need to go as far as over‑specifying it to avoid the risk of dilution, but at the risk of minimizing pluralism? There's a debate on that.

I'm not enough of an expert specifically in 108+ to analyze it, but certainly my view of what the Council of Europe should be doing is to take the Budapest Convention and 108+ and do a benchmarking of exactly that: look at what the implementation on the ground has been after adoption. What did we learn? Do we need to over‑specify or under‑specify to balance uniformity and pluralism according to local contexts?

>> VANESSA DREIER: Thank you very much.

Prateek, I will leave the last intervention to you before we close the session.

>> PRATEEK SIBAL: I certainly didn't want to be the last one, but it's an interesting question on the Council of Europe's Convention.

We know that they adopted and delivered the final report, which talks about setting up an AI regulation from the Council of Europe as well. They actually looked into this question of what is covered by 108+ and whether they still need a separate AI regulation. And they came up with the answer: yes, it definitely is needed, because some of the categories and definitions with respect to AI are themselves not clear; they introduced terms like AI processors and AI subjects. So all this would also potentially go into the new convention that they are proposing. And it will be transversal, not going into sector specifics, similar to the previous convention.

So they've done this analysis. Now, the real questions are: what are the risk categories? How do you first classify risk? And then, do you decide to follow a checklist, like the EU Commission's AI Act is doing, or do you link it with standards? I think that's another interesting conversation going forward on the regulation aspect.

>> VANESSA DREIER: Yes. Thank you very much. And thank you also for the active participation of our audience. Even though it's a virtual hybrid event, which makes it a bit harder to really interact, it was great to have your questions; I think they addressed a couple of blind spots, so to say, in the AI policy process, and we thank you very much for this. I will close the session now, as we are one minute away from our end time. So I thank our speakers and you, the participants, very much for this interesting and timely discussion on the impact of AI on the Rule of Law. As we have seen and discussed, the implications of AI for justice systems are manifold. And this forum also shows, I hope, UNESCO's commitment to enhancing future‑oriented reflection and foresight initiatives with respect to the challenges and opportunities of AI for the Rule of Law.

We at UNESCO will not stop our journey at the MOOC on AI and the Rule of Law. With our member states, we will contextualize the impact of AI on justice systems at regional levels. We are engaging with judicial networks to make sure our judicial systems and actors are able to evaluate the challenges associated with AI. And I would highly encourage you to work together with us to mitigate the risks, to foster exchange, and to work towards a human rights‑based approach to AI.

On behalf of UNESCO, I thank you very much for following the 2021 IGF Roundtable on AI and The Rule of Law.

You will find in the chat a link to the registration for our MOOC on AI and the Rule of Law, in case you wish to sign up for the course starting in March 2022. And I would really like to take the opportunity to thank our partners in this course, The Future Society, the National Judicial College, and Cetic.br, for supporting us in moving this forward and for building the capacities of judicial operators worldwide. Thank you very much. And I hope you have a great day at the IGF today. Thank you, everyone. Bye.

(End session at 10:32 a.m. CT.)