IGF 2022 Day 2 WS #350 Why Digital Transformation and AI Matter for Justice

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> CEDERIC WACHHOLZ: Hello, and a very warm welcome to you. Apologies for being late. I'm Swiss, and it causes me physical pain to arrive even three minutes late ‑‑ sorry for that. We were caught in the traffic jams coming from the African Union. A warm welcome to all of you participants online and in the room, and also to our panelists here. And apologies also to them for being late. 

We have here to the right the Assistant Director‑General, Dr. Tawfik Jelassi; Dr. Ubena, who will join us online; and Mr. Ope Olugasa, who hasn't joined yet. 

>> OPE OLUGASA: I am online. 

>> CEDERIC WACHHOLZ: Happy to hear. This is Mr. Olugasa who is speaking? Because we can't see you yet. 

>> OPE OLUGASA: Yes, this is Ope Olugasa. 

>> CEDERIC WACHHOLZ: Oh, very nice to have you on board, too. And Ms. Linda Bonyo and my colleague from UNESCO in Nairobi are with us as well. Today we will exchange on the e‑judiciary and how ICTs and AI tools can be a solution for enhancing the administration of justice, but we will also consider the related challenges and the changing role of judicial actors worldwide in the aim to uphold the rule of law in the era of AI.

Now I have the pleasure to invite Dr. Tawfik Jelassi, UNESCO's Assistant Director General for Communication and Information, to deliver the opening remarks. 

>> TAWFIK JELASSI: Thank you, Cederic. Good afternoon, good evening to all of you. Since we are starting this session a bit late due to the matter that Cederic explained, I'll try to shorten my remarks to try to help us be back on schedule. 

Excellencies, ladies, gentlemen, colleagues, I am very pleased to be with you here for this special workshop on artificial intelligence and justice. As I suppose many of you, if not all of you, know, UNESCO has been very active for close to ten years now in organizing training programs for judicial operators ‑‑ judges, public prosecutors ‑‑ worldwide. This training has consisted very much of updating judicial operators on regional and international standards in the fields of freedom of expression, safety of journalists, freedom of the press, and access to information. To date, we have trained over 24,000 judges, prosecutors, and judicial operators in 115 countries.

But of course, the question became: what's next? And on the digital technology side, we know of one very promising technology ‑‑ artificial intelligence. We decided to look at what new opportunities, what new applications AI can offer to judiciary operators, but also at what risks may be involved and what to watch out for.

So, recently ‑‑ actually, last spring, more specifically, last March ‑‑ we launched a massive open online course on AI and the Rule of Law. We believe this is a quite unique offering; it is certainly the first of its kind in the UN system, and maybe also among international organizations. We were delighted that in this first offering of the online course on AI and the Rule of Law, we had over 4,500 judges, prosecutors, and judiciary operators enrolled in the course and following it throughout.

Personally, I received email messages from a few people who, after 36 or 48 hours, said, "We completed the course." I was very impressed, because it meant that these people took the course almost on a full‑time basis ‑‑ not just one hour from time to time or on weekends ‑‑ to complete the whole course in a matter of three days. So, this is very important. AI, as we know, can help streamline, redesign, and simplify some of the processes involved in judicial administration, but more than that: if you look at the subset of AI which is case‑based reasoning, this technology can also help judges and public prosecutors learn from past cases when they are processing a case. So, I think this is very powerful.

We saw applications in a number of countries. The UK is one of them; South Africa is another ‑‑ just to mention a couple, an African country and a European country, where we saw promising applications. But for us, we have to be loyal to what UNESCO has been advocating for a number of years. And you heard at this IGF event about the framework of UNESCO, which is: when we design, develop, and deploy technology‑based systems and applications, we anchor that in ROAM ‑‑ R for a human rights‑based approach; O for openness to all; A for accessibility by all; and M for the multi‑stakeholder approach. So, this goes for AI systems as well. 

We have to fully respect human rights, we have to respect human dignity, and we have to be sensitive to gender equality. So, there are a number of issues that we believe are central to this piece of work, especially since last year, when UNESCO came up with, and had approved by its 193 Member States, the UNESCO Recommendation on the Ethics of Artificial Intelligence. This is the first normative instrument of its kind in the world, an in‑depth look at the ethical side of AI. And we were delighted when it was unanimously approved by 193 countries covering the five continents. So, I think this is a major development that we have to take into account. 

Why? Because we know about AI applications, for example in facial recognition. Is it something that can be allowed that, in some countries, some systems, including the police, can use AI to recognize people's faces and use that data for digital surveillance? So, again, I'm just flagging some of the issues that AI brought to the fore that did not exist with past technologies. 

We have been working with partners, of course, including The Future Society, which is part of this session here today, and we acknowledge and are grateful for that collaboration. Other key partners with whom we collaborated: the African Commission on Human and Peoples' Rights ‑‑ I just came back from a meeting at the African Union, where we have other ongoing collaboration projects. We have also been working with regional courts, including in Africa, namely the African Court on Human and Peoples' Rights, but also the ECOWAS Court of Justice, that is, the Court of Justice of the Economic Community of West African States. And finally, another key partner is the African Court of Justice, among others. So, this is not just UNESCO by itself. This is UNESCO collaborating with other stakeholders, with other partners, who bring in their thematic knowledge and field expertise, but also sometimes the regional context in which we want to apply AI and the rule of law. 

Let me stop here, because we have some high‑level speakers, panelists, to share with us their insights, their experiences, and best practices. I'm sure that at the end of this session, hopefully, we will have new ideas that could help us pave the road going forward. Thank you all, physically and also online, for being part of this session. 

>> CEDERIC WACHHOLZ: Thank you so much, ADG, for this overview of the current global work being undertaken. As you highlighted, this is very much about regional activities on AI in the justice system. We will hear five interventions; actually, four will be coming from Africa, and one from India, a very good example and perspective too.

Now we are having difficulties bringing in Dr. Ubena online, so we will continue ‑‑ he is there, we are working on it ‑‑ and I will therefore delay his presentation. We will give the floor now to Mr. Ope Olugasa, who is the CEO of LawPavilion, working on strengthening the justice system through the introduction of innovative solutions to address the obstacles faced by the judiciary. He will, as I just mentioned, take a particular look at AI in the African judiciary, at context‑related challenges, and at how new tools can be used in West Africa while upholding the rule of law. So, thank you for joining us online, and you have the floor. Five minutes, please. 

>> OPE OLUGASA: Thank you very much. So, permit me to share my screen? 

>> CEDERIC WACHHOLZ: We hear you very well, and we can see you in person, but no presentation yet. 

>> OPE OLUGASA: So, I'm not allowed to share my screen yet. Host has disabled that. I don't know if the host can allow me to share my screen so I can just speak to it? 

>> CEDERIC WACHHOLZ: Right.  You can ‑‑ are you ‑‑

>> OPE OLUGASA: Okay, great.  Okay, so, I guess you can see my screen now? 

>> CEDERIC WACHHOLZ: Absolutely. 

>> OPE OLUGASA: Thank you. So, good afternoon. Thank you so much for this invitation. I'll speak very briefly to the need for technology and artificial intelligence in the African judiciary ‑‑ lessons from Nigeria. My name is Ope Olugasa, and our mission is equipping the justice system to enable people and businesses to achieve their potential.

So, what is the problem statement in the judiciary? Let me look at the data on this graphic, for one particular magistrate court between January 2019 and May 2020. You can see there the statistics for concluded matters and for civil matters; in criminal matters, 1,194 cases were pending within that period, and 4,295 new criminal cases were filed during that period. Now, you'll agree with me ‑‑ and this is what has driven some of the technological interventions ‑‑ that this is a troubling situation waiting to happen. Because of the rate at which new matters are being filed, the court system in Nigeria, and I believe it's the same across Africa, cannot keep up the pace. And the statistics actually say that 36% of these people resort to self‑help. So, that's actually the root cause of some of the symptoms we're seeing here around insecurity.

So, what are the solutions that we have proffered at LawPavilion? We have concluded that human intelligence alone is no longer enough, so we needed to integrate technology and artificial intelligence to enhance human intelligence in justice delivery and legal practice in Nigeria. 

Now, what are the interventions we have introduced? Essentially, they fall across two broad areas. The first one is the e‑library ‑‑ that's the electronic library ‑‑ and electronic legal research. For this, we introduced electronic law reports into the system, so that judges, instead of spending 50% of their time looking for authorities in hardcover books, can find them electronically, and we annotated the Laws of the Federation. When I say annotated, I mean each section of the law is explained with cases ‑‑ the judicial authorities that have interpreted that law, put together in one place. This makes it easier for judges to quickly look at portions of the law and identify the precedents on them. Then we have an appellate feedback system for judges, to help them enhance the quality of their judgments. So, if a judgment of a lower court has gone on appeal, when the appellate court takes a decision on it, the lower court judge is notified that this has happened, so that subsequently the judge can correct course.

Then there is legal analytics. One of the challenges we have had is conflicting judgments, quite a number of them. Now we compile these together, so on each principle of law we show the conflicts, we show the exceptions, and we show the history of that authority in our jurisprudence, so it becomes easier for judges to take decisions promptly, with quality at the back of their minds. We also introduced electronic textbooks and journals, still on the platform. 

And then, more recently, we introduced AI‑assisted document review. This is a form of judicial decision support system. When lawyers file matters in court, they file written addresses. Instead of the judge having to first read through the whole address and then trace the authorities in it one by one, the system automatically takes in the whole written address, searches through the precedents to bring out related cases on such matters, and extracts all the legal authorities cited within that written address. The judge can then easily have a summary of it, check its references, look at the related cases, and it becomes easier for the judge to take the decision and adjudicate on these matters. So, that's the first intervention.

The second we call the e‑registry. Essentially, it's about electronic filing of court processes, instead of having to do everything manually. What we built into this is timestamps, so it helps track timing ‑‑ how fast court officials respond to matters that are filed online. Then there are electronic payments and electronic service of court processes. Service was an acknowledged challenge here, and this solution helps to effect it electronically; proof of service is easily obtained, at a cost. Then there is electronic notification of schedules as well. And then we introduced a case management system that helps manage the workflow of matters in court. It calculates the average age of cases in court. Ordinarily, it takes a number of years ‑‑ about five years in the high courts, and on average about 20 to 30 years to prosecute a matter from the high court to the Supreme Court in Nigeria. So, with the case management system, we are keeping a tab on the average age of cases so that judges can quickly dispose of the cases before them.
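The average-age metric described here is simple to compute once filings carry timestamps. A minimal sketch, with an invented docket and field names (not LawPavilion's actual schema):

```python
from datetime import date

def average_case_age_days(cases, today):
    """Average age in days of cases still pending on `today`."""
    pending = [c for c in cases if c["decided"] is None]
    if not pending:
        return 0.0
    return sum((today - c["filed"]).days for c in pending) / len(pending)

# Hypothetical docket: two pending matters, one concluded.
docket = [
    {"id": "CV/001", "filed": date(2019, 1, 10), "decided": None},
    {"id": "CR/002", "filed": date(2020, 1, 10), "decided": None},
    {"id": "CV/003", "filed": date(2018, 6, 1), "decided": date(2019, 6, 1)},
]

print(average_case_age_days(docket, date(2021, 1, 10)))  # (731 + 366) / 2 = 548.5
```

Tracking this number per judge or per court is what lets a registry flag dockets that are aging faster than they are being disposed of.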

And then, what are the visible outcomes of this over a period of ten years? I'll measure this in terms of the growth in the number of cases at the Court of Appeal between 2009 and 2019, which this graph covers. We introduced the technology and legal resources to them in 2012, and in the last couple of years there has been a 200% increase in the annual judgments given by the Court of Appeal ‑‑ a threefold improvement. And we think that with better positioning and adoption of technology, the justice system in Nigeria can do even more, and the same within Africa. 

Now, having looked at this, we also moved on to consider other AI solutions. We just recently introduced something we call a new disruption in the civil and criminal justice system in Nigeria, and I'll play a brief, one‑and‑a‑half‑minute video to demonstrate what we have done. This is essentially about rethinking the administration of justice in Nigeria using artificial intelligence and blockchain, such that we are bringing justice closer to the people, especially to address the challenge of insecurity in Nigeria, and reforming the criminal justice system in terms of evidence and the profiling of suspects. Let me play the video ‑‑ say, a minute and a half. 

>> Introducing JustEase, your social legal app for community security and easy access to justice. The JustEase app empowers you to know your rights and duties and Soro Soke, speak up whenever you are violated. The community search watch feature provides you with prompt notification on events in your area or community in seconds, empowering everyone to speak up and report every crime, whether you are the victim or not. 

The panic button lets you signal for help when you are in trouble. This immediately shares your GPS coordinates with your preselected contacts and alerts police around you to come to your rescue. You can also access a comprehensive directory of lawyers or police stations near you.

Oh, one more thing. The real ingenuity of JustEase is its dashboards, which rely on the power of GPS, artificial intelligence, and blockchain technologies to form a reliable and tamper‑proof digital evidence bank, available to investigators, lawyers, and judges in a format that is admissible before the court. Its crowdsourcing model of evidence and crime information gathering within Nigeria, and the facial recognition feature embedded in the dashboards, enable the profiling of any criminal suspect by running it through a crowd‑sourced evidence bank stored on the blockchain.

Much more, the dashboards measure and calculate the Rule of Law Index for every state in Nigeria.  Now your safety is in your hands. Download JustEase today on Google Play Store. JustEase, disrupting the civil and criminal justice system in Africa. 

>> OPE OLUGASA: Now, this we have done just recently, and we are using artificial intelligence to integrate crowdsourcing of crime information that's admissible in the courts in Nigeria, so justice becomes easier. It is disruptive to the criminal justice system, and even the civil justice system, in Nigeria, because part of what we are doing right now is enabling the common man on the street to easily file a matter via this app, so it is a form of online filing system as well. The common man on the street needs to be served justice, and these are the ways we've been doing this, and we're still pushing this in Nigeria to ensure that artificial intelligence becomes mainstream in combatting crime and in the criminal justice system. Thank you so much.

So, in conclusion, my idea of the future of justice delivery in Africa is one where cases are concluded in less than one year. Right now, it takes about 15 to 30 years to conclude a matter through all three tiers of courts in Nigeria. Using artificial intelligence, this can be much better, much faster, and that's what we're doing at LawPavilion. Thank you so much for your time. 

>> CEDERIC WACHHOLZ: Thank you, Mr. Olugasa. These were very interesting lessons from Nigeria. You first pointed out the challenges, like the 4,300 new criminal cases and the 200% growth of cases on appeal, but also some opportunities, like legal analytics. And, of course, a few challenging parts: I think the JustEase app could be discussed from a human rights perspective, particularly the facial recognition, the digital evidence bank, and also the AI‑assisted document review. We will have time for discussion a little later, so I ask you to please reserve your questions and points until then.

We will now first hear Ms. Aishwarya Giridhar, who is working in the Technology and Society team at the Centre for Communication Governance at National Law University Delhi. Her work primarily focuses on issues relating to data protection, Internet governance, intermediary liability, and emerging technologies, but also, of course, on the opportunities and challenges raised through digitization and the use of AI. So, you have the floor. Thank you so much. And thank you again to the Assistant Director‑General for his opening remarks; he has other engagements. 

>> AISHWARYA GIRIDHAR: Hi. Am I audible? Great. So, I work in civil society, so I guess my job is to introduce a few roadblocks into our thinking about AI and its quick deployment. I think we can all agree in general ‑‑ and this is an issue that tends to come up in India ‑‑ that we need faster disposal of cases, since justice takes an exorbitantly long amount of time. But I want to raise three challenges, three things to think about when using AI technologies in the judiciary and, more generally, in law enforcement as well.

So, one is with respect to transparency and the concept of trust in the judiciary. The second is with respect to the challenges with digitization and infrastructural capacity. And the third is with respect to privacy. If we start at a very basic level ‑‑ and I know that this is an issue in developing countries, for example ‑‑ we need to be able to ensure a basic level of access to the Internet and to digital services, especially when we require users, individuals, or litigants to communicate with anything that's technology‑based.

And because the rates of Internet access differ between urban and rural areas and differ within countries, what this can lead to is the exclusion of the most vulnerable and marginalized people, which is something you see with tech‑based solutions in general. So, the first thing we need to focus on is making sure that everyone is actually able to benefit from the technological interventions that we want to make.

The second, especially with respect to the judiciary, is that having access to computer systems and digital literacy at all levels of the judiciary is important, especially when we talk about different kinds of AI. I mean, one of the things that everyone who works with AI can agree on is that nobody can really agree on what AI is. So, depending on the kind of tools, it becomes necessary to make very clear what the limitations of the tools are. For example, if you're talking about predictive policing, or about risk assessment tools that are used in sentencing or in assessing bail applications, the fact that a software tells you that someone is more likely to reoffend doesn't necessarily tell you anything about the metrics that were used to come up with that output. And as a judge, a police officer, or anyone making decisions, that will affect your view of an individual based on what a software system says. It's important to know what the limitations of these systems are.

And the third ‑‑ and I guess this is, again, more of a basic infrastructure kind of issue ‑‑ is that a lot of this, especially when we're talking about things like digitizing case records, relies, as a first step, on making sure that you have machine‑readable versions of documents. Now, I come from India, where there are a number of vernacular languages across different states. In trial courts, very little is written in English, and so you first have to transcribe: you have to have machine reading systems that are able to accurately transcribe what a case document says, which is usually handwritten, to be able to do any kind of further processing on it.

And currently, at least, the rates of accuracy of these technologies are very low, so that is a basic stumbling block when you come to undertaking things like trying to analyze what a petition is saying, or trying to summarize a piece of evidence for a judge. You need to be able to actually tell what a document says. So, I guess what I'm trying to get at is that even more basic, simpler interventions ‑‑ even things like creating a case docket or a case list to make things more efficient ‑‑ can have significant impacts on justice delivery, especially when people figure out how to game the system, or if there are factors that aren't taken into account when making these assessments.
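The transcription-accuracy problem raised here is usually quantified as a character error rate: the edit distance between the recognizer's output and the true text, divided by the length of the truth. A minimal sketch (the sample strings are invented):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming, one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def char_error_rate(truth, recognized):
    return edit_distance(truth, recognized) / len(truth)

# 2 character errors over 28 characters of ground truth, about 0.07.
print(char_error_rate("the accused was granted bail",
                      "the occused was granted boil"))
```

Even a low-sounding rate compounds quickly over a multi-page handwritten record, which is why it becomes a stumbling block for any downstream summarization or search.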

So, the second area, and maybe the one I want to spend the most time on, is the importance of trust in the judiciary. The whole basis of the legitimacy of the judicial system is trust. And there are two significant issues that AI systems can pose here. One is with respect to the nature of the algorithms themselves, and the second is with respect to the actors who develop and deploy these algorithms. I think it is useful to first define what we're discussing. Usually, when we talk about AI, we're talking about machine learning algorithms, whose whole goal, basically, is to perform a task by learning from previous data, without being specifically told how to do it. This can lead to very unexpected and interesting outcomes, but it can also mean that it's very unclear how they arrived at those outcomes. 

Now, that leads to two issues. One is what you call the black box problem: it's very hard to understand the reasoning behind decisions, which can make it very difficult to understand whether or not an outcome is fair. Take COMPAS, a tool that has been used in the U.S. to assess the risk of reoffending for people who have been convicted, in order to inform decisions on parole, sentencing, et cetera.

Now, if you are a person on whom this tool has been used and you have been assessed as high risk, it's very hard to understand what the basis of that assessment is ‑‑ whether it makes use of protected characteristics, for example, or whether it's discriminatory or biased. There is no real way for you as a litigant to understand or contest outcomes based on AI systems.

The second is on ascribing liability and accountability. For example, when an AI system or a piece of software makes a wrong decision, or is discriminatory, or leads to outcomes that are unfair in whatever way, who exactly do you hold liable? You can't really hold software liable. Do you hold the people who developed it liable, which is a team of people? Do you make the person who made the decision liable? We need to think about this before we deploy AI systems.

The other thing is that the judicial process is based on the concept of explainability: the fact that you need to understand the reasoning behind a judgment or a decision, which is why the reasons for decisions are made available, so that you can assess and contest that decision. So, any kind of AI tool that you use in a judicial context needs to meet very high explainability standards, so that people are able to contest it where it leads to outcomes that they think are biased or unfair. Partly, this could include, very basically, things like the factors that are used in assessing the outcome, the role of human actors in the algorithmic development process, what the input data is, and what the different weights are that are given to different factors.
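The explainability standard described here ‑‑ disclosing the factors, the weights, and each factor's contribution to a given outcome ‑‑ is exactly what a transparent scoring model provides, in contrast to a black box. A sketch with invented factors and weights, purely illustrative of the idea, not any real tool's parameters:

```python
# Hypothetical, illustrative weights ‑‑ not any real tool's parameters.
WEIGHTS = {"prior_convictions": 0.5, "age_under_25": 0.3, "employed": -0.4}

def risk_score(applicant):
    """Return a score plus a per-factor breakdown a litigant could contest."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = risk_score({"prior_convictions": 2, "age_under_25": 1, "employed": 1})
print(round(score, 2))  # 0.9
for factor, contrib in why.items():
    print(f"{factor}: {contrib:+.1f}")  # each factor's signed contribution
```

With a breakdown like this, a defendant can see precisely which factor drove the score and argue that it is irrelevant or protected; a learned black-box model offers no such handle.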

Now, the second broad bucket of issues I was talking about with respect to transparency is who designs the software that is being deployed in these systems. In most cases, algorithms and software are being designed by private‑sector institutions, which rely on proprietary software, and so you don't really have transparency into some of the things that I mentioned, like the factors that are used in making assessments or the weights that are given to different factors. But you also don't have any kind of audit, so it's not possible for someone to go in and see whether, for example, a piece of software is leading to biased outcomes before it's deployed. What this essentially means is that there is a lack of accountability: it's very hard to hold someone accountable when nothing about the way the system operates is made clear.

So, for example, there was a case where Michigan's unemployment benefits system had incorrectly flagged over 34,000 people for fraud, which meant that they lost all their unemployment benefits. A lot of them had to face bankruptcy; a lot of them lost their houses. So, this had very significant implications for the rights of all of these 34,000 people who were, again, incorrectly flagged as fraudulent. And I think it was later found that the error rate for that software was well over 90%. So, this is a very basic example of an outcome that could have been very easily avoided had there been more audits, consultation, and information about the kind of data the algorithm was accounting for in making its decisions. 

Similarly, COMPAS, which I just spoke about, had been developed by a private company; it relies on a proprietary algorithm, which makes any kind of auditing impossible. And there was a study that later found that black defendants were far more likely than white defendants to be tagged as at higher risk of reoffending ‑‑ and falsely so: they were incorrectly tagged as being at higher risk of reoffending. So, there are some studies that you can do once an algorithm has already been deployed, but at that point it might be too late, because, depending on the scale of deployment, you potentially have a lot of people whose rights have already been infringed by a biased system.
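The kind of post-hoc study described here is, at its core, a disparate false positive rate audit: among people who did not in fact reoffend, how often was each group tagged high risk? A minimal sketch with made-up audit records (the groups, counts, and rates are invented for illustration):

```python
from collections import defaultdict

def false_positive_rates(records):
    """FPR per group: share of non-reoffenders who were flagged high risk."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        if not r["reoffended"]:          # condition on the true outcome
            total[r["group"]] += 1
            flagged[r["group"]] += r["high_risk"]
    return {g: flagged[g] / total[g] for g in total}

# Invented audit data: group B non-reoffenders are flagged twice as often.
audit = (
    [{"group": "A", "high_risk": 1, "reoffended": False}] * 2
    + [{"group": "A", "high_risk": 0, "reoffended": False}] * 8
    + [{"group": "B", "high_risk": 1, "reoffended": False}] * 4
    + [{"group": "B", "high_risk": 0, "reoffended": False}] * 6
)

print(false_positive_rates(audit))  # {'A': 0.2, 'B': 0.4}
```

The catch the speaker notes applies here too: this audit needs labeled outcomes, which only exist after deployment ‑‑ by which time the unequal error rates have already affected real people.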

Now, the last concern I want to highlight quickly ‑‑ and I'm probably running out of time ‑‑ is privacy. I'm going to keep it short and talk primarily about the issue of what kind of data sets are being used by developers to train algorithms. How is the information collected? And is the information that's being used disproportionate to the purpose that's meant to be served?

For example, I think they found that HART, a similar system used in the UK for identifying the risk of reoffending, used existing police data and postcode data, but I think it also used data from a data broker. Again, there were concerns about where the data broker got the information from, and I think concerns were raised about the fact that some of it came from illegal sources. So, when it comes to something like this, how legal and how ethical is it to use data sets that have been collected illegally ‑‑ that is one thing we need to consider. And secondly, what kind of information is useful, or what kind of information should be fed into these systems? Predictive policing models that rely on social media posts, for example ‑‑ is that a valid source of information for making decisions on something that might be totally unrelated? So, these are broadly the concerns.

Very quickly, again, I think there are a few ways to address this. One is to just figure out whether we really need to be using AI in a system ‑‑ just because we can doesn't always necessarily mean that we should. So, there's a conversation to be had about what kinds of tools you want to use at what stage in the judicial process. And even when you do use tools, the need for public consultations before a system is adopted is paramount, so that you can figure out what the potential pitfalls are. Similarly, there is the need for audits, so you can see whether there are biased or unfair outcomes; and the need for small‑scale deployment. 

The value of pilot projects cannot be overstated, because a lot of things that you weren't prepared for might show up once you deploy the software, and doing it on a smaller scale prepares you for the larger scale. And finally, having monitoring and review processes to address some of these issues as they come up. So, yes ‑‑ those would be great steps to take once you figure out that you want to use AI. 

>> CEDERIC WACHHOLZ: Thank you so much, Aishwarya. This was very important. You highlighted the challenges, like exclusion, the importance of trust, the black box problem, and the privacy challenges, but also a few ways forward. Thank you for pointing those out, too.

We will now come back to the African continent, and I will invite Ms. Linda Bonyo, the Founder and CEO of the Lawyers Hub, to speak. She is working on digital policy innovation within the African continent. Five minutes, please. 

>> LINDA BONYO: Thank you very much. I was trying to share my screen, and then all my secrets came up on here. But I wanted to just share a bit about the work that we do ‑‑ just a minute, I will stop my video. Yeah. I was just going to share a few things. A lot has been said, and I'm not going to bore you, but I wanted to share highlights from a report that we did recently. It's a report on AI and Justice in Africa, and we launched it last week at the Africa Legal Innovation Week. There are a few learnings that I think are useful to share.

One about digitalization, about the concept around artificial intelligence and what's actually happening in the continent is we're seeing this digitalization powered by virtual courts. So, talking about data that was pre‑COVID, we now have a lot of African courts really taking up the use of virtual courts. And so, it's powering everything else within that context. So, we have digital payment systems and people really going for the use of ‑‑ and I liked the first presentation that was done by Ope ‑‑ around you know, payments and assessment of fees and bonds as well. And so, we're seeing that happening a lot within the African context.

I'd also want to say the second point around it is, the question I must answer is, what's the changing role of the lawyer? And what should be done by lawyers in this specific context? So, one of the things that we're seeing for lawyers now is really case management systems, the need to speak coherently within the justice sector. So, even as the judiciary's adopting the use of technology, we see bar associations actually coming up with new ways in which they're managing advocates and verifying advocates. And so, on a larger scale, we see the need for concerted efforts around the digital issues that we're talking about, even within the context of digital trade, because justice is now being productized. And so, we'll be looking at digital identity for lawyers ‑‑ because we've had issues around verification of who is a lawyer and who is not a lawyer, and that's really a big issue across the African continent.

And so, the next issue that we are also seeing was, you know ‑‑ we see justice following the same patterns of technology adoption within the African continent. So, you see Nigeria and South Africa and Morocco and Egypt really being drivers of change within the tech ecosystem, but also the same following within the judicial sector as well.

I also wanted to talk about the lawyer as a strategic litigator. We see new aspects of people really pushing for their rights. There are conversations happening around bias of AI systems and digital lending ‑‑ digital lending has been a huge issue in Kenya, with the Central Bank now coming into force and issuing guidelines around that. And then we see data protection regimes within the continent now having clauses around automated decision‑making processes and saying that any automated decision‑making process should ideally be supervised by a human being, which is really relevant for the judicial sector as well. 

And so, what's the changing role of the lawyer in this? It is that the lawyer and the judge now need to be computational experts. And we had discussions last week around, should lawyers code? Should judges get to code as well, so they understand the data sets and machine learning that get us to artificial intelligence? And how do we also drive adoption? 

We've seen contexts around the African continent where capacity‑building is focusing on infrastructure like laptops and iPads, when the person who is using this technology really has not shifted in mind‑set. And so, there's a need for capacity‑building in digital skills for the justice sector, especially the lawyers and the judges, and to see how to get them to adopt these specific technologies, because without data, we can't really build accurate data sets and build a really great judicial system.

I'd also want to mention something that you talked about. I think you mentioned the case of Eric Loomis ‑‑ State v. Loomis in Wisconsin ‑‑ where the COMPAS technology was used to determine whether he was going to be a repeat offender or not. And I think that really drives us towards, one, strategic litigation, but two, to really build up data sets that are relevant to the African continent, because the technology that we are using is foreign and it's private, and mostly, it's American. And our digital policy is mostly European. How do we balance our interests to ensure that our judicial systems also mirror the African context in that sense?

I think my final comment would be on the role of the judge. Are our courts going to be the courts of first instance, or are AI systems going to be the first instance? And this we've seen lately from China, where you now have AI systems making a decision first, and then the judge reviews that decision, rather than having the judge first and then, you know, having the machine review that specific one. And I think for Africa, which is suffering case backlog ‑‑ which is a huge issue that Ope highlighted, with the time frame for deciding cases being more than ten years in certain instances when things could be decided within a year ‑‑ I think that's something that we need to look at.

My final comment would be, you know, to see ways in which we are working on this. The Lawyers Hub is working actively. We have a Digital Policy Institute that's training; we're working together with judges and telco operators; and we have courses and the law tech that focuses on capacity‑building to get Africa to the next level.

Since 2019, we've trained over 10,000 lawyers to get them to really understand these specific issues within artificial intelligence. And so, if you need to read the report on AI and Judicial Systems in Africa, you can find it on lawyershub.org. I think it offers that. 

>> CEDERIC WACHHOLZ: Thank you for the presentation and for highlighting the roles of lawyers and judges, and also for pointing out that automated processing should be overseen by humans, and the policy‑building. We have all heard of UNESCO's work in this domain. We will hear now more about this from my colleague, our colleague, Misako Ito, who is the Regional Advisor for Sub‑Saharan Africa. And you're based in Nairobi. 

>> MISAKO ITO: Thank you, Cederic. I will be really brief for the sake of time, and I will be repeating some of the points that have already been raised by the panelists, but also in the introduction by our Assistant Director General for Communication and Information.

But UNESCO undertook in 2021 a Needs Assessment Survey on AI for Africa. And one of the most important recommendations arising from that survey is the need for capacity‑building for AI governance within the executive, the legislative, but also the judiciary. So, that's why, as our ADG mentioned, we have a project called the Judges Initiative, through which we have trained over 24,000 judges on the international standards of freedom of expression and the safety of journalists, and we added a new module on AI and the Rule of Law.

And we had a first pilot training of this module, which was a physical training with the East African Court of Justice, very recently, in September, to highlight, you know, what our previous speakers mentioned: what are the benefits of adopting AI in the justice system, but also, what are the risks associated with integrating AI in the system, including the implications for human rights.

And the training was extremely successful, because first, AI is already being used in judicial processes by several courts in the world, but also a lot of cases relating to AI have been taken to the courts, because this is an area where we still do not have regulations on its use. So, UNESCO will be continuing these capacity‑building programs, and we will also be working with the African Court on Human and Peoples' Rights and also the ECOWAS court.

And lastly, another point that I wanted to mention, in addition to this capacity‑building for AI in Africa ‑‑ and I think our previous panelists already raised that ‑‑ we held a Regional Southern African Forum on AI in September this year, and it was the first forum to build public awareness on the technical dimension of AI, but also on the ethical and human rights dimensions, and to build the ownership and the leadership of African countries on these technologies.

And one of the key outcomes ‑‑ in addition to the capacity‑building that has been raised in the outcome document, which is available, the declarations ‑‑ is the need to decolonize the data sets on AI. Because currently, AI technologies rely on low‑quality and non‑representative data from African countries, with limited data on local languages, which does not reflect the over 1,000 languages that are being used in Africa. So there is a need to work more on data decolonization and ownership, in addition to capacity‑building. Thank you very much. 

>> CEDERIC WACHHOLZ: Thank you, Misako. Now I hand over the floor to participants in the room. Will you please raise your hand if you want to speak, have a question, or want to intervene. And the same for people online. Please do raise your hand or put a question into the chat. Thank you. 

So, please introduce yourself. 

>> AUDIENCE: Okay, thank you. My name is Tess Fay. I am Vice President for the Federal (?) and a spokesperson for the federal courts. And today's presentations by the panelists are highly, highly aligned with the judiciary, and as you know, the judiciary is a conventional institution internationally. Therefore, I feel that we can adopt and adapt some of the AI and digital products to our court system.

Having said that, we also have a legal framework that helps us to use AI and digital products to entertain cases, but I feel that the experience of Nigeria and India is a telling one. I think we can learn from them. And having said that, there are some things that I want to share about this AI and how we need to use it in our court and judicial systems.

And I think the data that we are going to use, that we are going to inculcate in our justice system, should be protected, because that data is inextricably linked with the life and the property of litigants. Therefore, the data security that we are launching, that we are installing in our court system, should be highly protected. That is the first thing that I want to add.

The second one is interoperability, I think ‑‑ (background chatter) ‑‑ because if the judiciary is having its own ICT system, and the police and the prosecution and the prison administration theirs, then it's going to be a fragmented one. Therefore, having an interoperable AI system or ICT system I think is very critical. What we have is like a fragmented one, a piecemeal one. The court will come up with its own products and the other institutions will come up with their own, which is a problem.

And the third one that I want to add is that attitude is, I think, one of the problems that we have, not only in the judiciary, but also internationally. Because if you want to come up with a new project, with a new ICT product, then the court, as it is a conservative institution, will say that we want to carry out our responsibilities, our business, with the previous laws. Therefore, the changes I think are very important.

The final point I want to raise is that UNESCO is giving different trainings for lawyers and for judges. There is a big appetite here in Ethiopia among the federal courts, among federal judges. We want to use AI. We want to catch up with cyber justice. To do that, we need to create an appetite in our judges, and giving training is very important. Therefore, as the Federal High Court President, His Excellency, is also here, and I am also representing the Federal First Instance Court, I think we can discuss later on how to work together. Thank you. 

>> CEDERIC WACHHOLZ: Thank you, Mr. Vice President. We will have now first ‑‑ oh, there are several people. I will ask you to please keep it short. I really love the interventions, but you have a lot to say, and I think the panelists do too, so let's keep it short so a few more people can intervene. I see somebody online has raised their hand first. We will try to connect Nicholas Lenin. And if we can hear you at a distance, please intervene. 

>> NICHOLAS LENIN: Can you hear me please? Okay, thank you very much. So, I just wanted to say that there are obvious benefits to the adoption of AI, but my caution, particularly within Africa, is that we should be careful not to chase the buzzword without laying the foundational structures for it. And I say this because in 2019, Ghana launched an e‑Justice system, which was supposed to facilitate filing and make court cases run faster. But in adopting the system, only one courthouse provided the infrastructure necessary to do the filing of the cases. And so, now you actually spend more time filing and getting your case to the court than under the manual system. So, it is important we get the foundational structures right, so that when the system is put in place, it operates seamlessly. Otherwise, we get a counterproductive result if we just chase the buzzword of artificial intelligence in the sector. Thank you. 

>> CEDERIC WACHHOLZ: Thank you so much for this intervention. I think many people would agree with your point and have some experiences. Now I think another person, another two people want to intervene. Please.  (Audio difficulty) (off microphone).

>> AUDIENCE: Thank you very much. I'm Nanako, Executive Director of the Ghana Domain Name Registry and Chair of the Ghana IGF. I just have three points to make. I came here because I wanted to ask about the use of artificial intelligence as related to justice. A lot of the presentation has been about digitalization, which is separate from what I consider to be artificial intelligence, which is basically machines doing the work of humans, as far as they can go. 

So, I want to know whether we have any intent of separating those two concepts, or are we going to be discussing them together? And if so, how do we do that? And what is a better approach?

The second thing is, I'm very excited to see that AI can be used to speed up the judicial processes, but I'm a little concerned that artificial intelligence can be used to make judgments, because I don't even think that the problem of natural language processing has been solved. I think that when it comes to a point where any AI can fully understand text and all its nuances, that's when it can then now be asked to make a judgment. Because even today, if you put words in Google to translate from one language to the other, it still makes small mistakes.

And the third one is that Aishwarya talked about the fact that before systems can be used to make judgments, there has to be a lot of consultation. And I want to add that it is also a matter of having very high standards and regulations for software testing, because a lot of software can be written and deployed, and nobody knows what systems assurance testing, what rigorous tests, have been done before it's used. Because if software that is used to judge cases is tested on a lot of dummy cases, a lot of false positives are definitely going to come out. So, it's a matter of software testing regulation as well. Thank you. 

>> CEDERIC WACHHOLZ: I know we are a little bit over time, but I want to give (audio difficulty) to the last person here. Just with regards to the session, I want to say that, of course, we have a broad definition of AI, and digitalization and AI are overlapping areas, but we had a number of examples where it was really AI ‑‑ if we're speaking about AI‑assisted document review, Mr. Olugasa mentioned that, but also other cases and examples on accountability and trust with AI. 

>> AUDIENCE: Thank you very much. My name is Brian. I'm from here in Ethiopia; I'm the Federal High Court President. My colleague, the Vice President, just spoke earlier. We would first like to thank you for your wonderful presentations. Really, the Nigerian experience and the Indian experience are inspiring us.

My reflection is very short, and it is clear, just to reflect, that we have high interest in introducing and using technologies in the justice environment in general and in our institutions ‑‑ the artificial intelligence center and the judiciary. The federal judiciary is highly interested in modernizing the judiciary, the service system. So, without technology, there is no access to justice. Access to justice is a right of the human being, so we cannot ensure it unless we use these technologies.

As we used to deliver justice manually, it is going to take time; there are delays. As we all know, justice delayed is justice denied. We don't have any excuse not to use these technologies, because technology helps us to deliver justice efficiently. So, really, I just want to emphasize, let us work together and prepare to help these 120 million people. If we are going to reach all these people, we need to put in relevant technologies. Technologies have been proven efficient everywhere. Why not here? Human beings are everywhere. So, we shouldn't give excuses, but we need to make sure to scale up the best practices as well. My emphasis is, let us work together, let us collaborate, let us build capacity, let us exchange experiences, as I support my legislation. Thank you so much. 

>> CEDERIC WACHHOLZ: Thank you so much, Mr. President of the High Court, for joining the session. We are very happy to join forces and work together, and we thank you for your interest.

Now I will give the floor very rapidly to our panelists; in one or two minutes you can respond. I hand over the floor. 

>> LINDA BONYO: Thank you very much for the questions. My name is Linda Bonyo from Kenya. I'm just going to say, the point on interoperability is really important. And I think that the justice sector has been removed from efforts around digital policy, and I think that's the missing link within the justice sector. We overfocus on what's happening in the judiciary, but then we are not engaged on how do we get meaningful access. And I think the IGF is a really good platform to bring together those two ideas on justice innovation, but also digital policy. And that is the gap that the Lawyers Hub is striving to close.

And when we talk about interoperability, there are a lot of discussions that are siloed. The judiciary is on its own, lawyers are on their own, and the digital policy movement is actually on its own. I think there needs to be a speaking together. And what I talked about around payments on one side and looking at data on the other side ‑‑ they all need to speak to each other.

And then, I think finally, I think we didn't talk too much about AI, especially myself, because of the stage of development. 

>> AISHWARYA GIRIDHAR: Okay, I'm going to give it back to her once she's had some water, I think. I just wanted to re‑emphasize one of the points that was made, which is, with respect to the need for data security, and more specifically, at least the way that I think about it, the importance of robust data protection laws around any information that we want to deploy in the judiciary because of the sensitivity of the information and the impact on rights that it is likely to have. So, I'm just re‑emphasizing that. I also just want ‑‑ I guess I was a little bit unclear on what interoperability would mean in this context, because if we're talking about, like, more easy data‑sharing, that again, I think we might need to assess what levels of anonymization, what data security practices, et cetera, we have that we are able to share across different sectors of the government. But yes, now I can see that you had some water, so I'll give it back. 

>> LINDA BONYO: Okay. The enemies in my village are working too hard. But just one comment, I think, on interoperability. If you study the justice sector, it's not working in coherence, especially around platforms. Like, we are trying to platformize everything that's happening. And if you look at the example of ‑‑ I'll give an example about Nigeria and what they are doing. They are way ahead of what the judiciary is doing. And there is a need ‑‑ as prosecutors are building, as lawyers are building case management systems, as they are working together ‑‑ for everything, I think, to fit into each other. And then, within the justice sector, to look at issues of digital identity together, digital payments together. These conversations are actually happening outside of the judicial system, but the judiciary really hasn't caught up on those conversations. So, I think interoperability, building that case, means speaking to each other without necessarily sharing data. And I think Estonia is a good example of how you can just have a gateway, rather than really having everybody share the same sort of data and then platformize everything. Thank you. 

>> CEDERIC WACHHOLZ: Thank you to both of you. I will now give the opportunity to Mr. Ope Olugasa to intervene at a distance. One minute, please.  I think your mic is muted.  Now it should be possible. 

>> OPE OLUGASA: Oh, okay. Yeah, sorry. The host didn't allow me to unmute myself. Okay. Thank you very much. I think Linda's already spoken on the idea of interoperability, and I can't agree more, because it's important that we consider all the stakeholders of the justice sector. Most discussions on technology and the justice system take into consideration the judges and the lawyers. But the critical stakeholder ‑‑ that's the common man on the street ‑‑ is essentially left out of these discussions. 

So, I also think that the interoperability should link the three stakeholders. So, I really agree with Linda that we should platformize it. It should be a system where all three stakeholders have access to justice. The lawyers can work seamlessly and connect with the court. The court can access data, and then the general populace can also follow through with litigation, with online dispute resolution. I think that's the way to go. How artificial intelligence will help in galvanizing all these together for us ‑‑ I think that's really what we are concentrating on now: how the common man can have easy access to justice and can supply information to the judicial system. Yes, it's going to be a lot. It's going to be a lot of information. But then, because of artificial intelligence, the justice system and the judges can easily sift through this information and come up with their decisions.

Those are just the things that we are presently working on. Thank you so much. 

>> CEDERIC WACHHOLZ: Thank you so much, Mr. Olugasa. 

>> MISAKO ITO: Just briefly, we welcome collaboration with the High Court of Ethiopia, so we can be in contact after this. And I will not comment on the interoperability, but UNESCO's approach to AI is to build awareness and build capacity on the risks and benefits of AI, especially its implications for human rights, and also to address the data gap that we have in Africa on technology development. Thank you very much. 

>> CEDERIC WACHHOLZ: Well, thank you so much. I'm sorry, I see that more people have raised their hands online, but we are already 15 minutes late, so I will be closing the session now, by thanking first the panelists online and here in the room, and the very active audience, online and here in the room. Also, my colleague, Charline d'Oultremont, who did an incredible job in putting this session together. Everyone on the panel knows her well; she has been working hard on this. And I think now we know and understand better why digitalization and AI matter for the justice system on the continent, but also around the globe.

We also learned about the need for more capacity‑building, not just for the judiciary, but for different justice actors in this field. And we will consider the lessons learned on this and other occasions in designing a new training toolkit; we are working on that now. But in the meantime, I invite all of you to register for the open online course on AI that we have, which will give you a good start on this topic, if you want more. So, thank you so much to all of you, and I look forward to seeing and working with you in the future. Good‑bye. 

>> CEDERIC WACHHOLZ: I didn't see that ‑‑ I'm sorry, I just closed, but I think the judge has just now been able to join, and I would love to listen. And everyone's free to join me, please. One or two minutes, please, after closing ‑‑ a few words from you, Judge. I'm sorry we had the technical difficulties, and I'm glad that you're on. I think the voice is not clear.

>> JOHN UBENA: The host has just ‑‑ okay, hello, everybody. Thank you very much. I hope you can hear me. 

>> CEDERIC WACHHOLZ: Very well, thank you.

>> JOHN UBENA: Yeah. I'm sorry, I had some kind of technology hiccup, but luckily, Charline has done a very good job and I'm online now. I will say a few words, because I have been given two minutes to say a word or two. I would say we have the potential of using AI in the administration of justice, and I think it's very useful, because Africa has the disadvantage of having many problems. We have corruption; we have a digital divide; we also have very few judicial officers, but the population is booming. So, then, we can use AI to at least assist us in the administration of justice.

How? I would say for example in case management systems, we could use AI for that. And even in some decision‑making at the judiciary, where you have like what we call small claims, traffic cases, for example, some of the cases that do not really require many hours for trial, that are not technical, we can simply use AI for that.

Of course, I know in Tanzania, for example, we have recently procured a system for recording court proceedings and also transcribing those proceedings, so this system will help us to do away with handwriting ‑‑ I mean, where you have to write all the proceedings by hand. So, now the technology can do that, and that, I think, is fantastic.

However, I have heard some of the speakers, with whom I really agree, say that we are very far from robotic judges. I mean, at least in the foreseeable future, this is not easy, unless we apply it to some limited cases, like traffic cases, where you can have automated billing ‑‑ I mean, like a camera could be installed on a rural highway and then record the speeding and then you can bill. This has been done in Europe, so it's not a problem at all in some countries of Africa as well.

So, for me, I think AI is very important and crucial, because even when we talk of the digital divide, some citizens, for example, do not have the knowledge or are not aware of this. But if you use an AI application ‑‑ for example, we have Siri and other assistants, like Google Assistant ‑‑ we can use the same for judicial services, where you go to the website of the judiciary and then you just have a question‑and‑answer session. You chat with a robot there, and it will guide you ‑‑ for example, you want to open, say, a breach of contract case with a claim, or maybe any other case relating to tort. This chatbot will tell you what to do. Or if you want to open, say, an administration of estate case, where you want to be appointed the administrator, this chatbot will tell you how to file your case and all the procedures. So, in this case, we find that AI can also play a role in providing legal education ‑‑ I mean, in supplying information that would otherwise need a lawyer's advice. 

So, I shouldn't take much of your time, but let me conclude by saying that AI has potential, especially for countries in Africa where we have a problem with the digital divide and also have few judges or court officers; AI can help us to do away with ‑‑ or to deal with ‑‑ all these problems. Thank you very much for your attention. Let me end there. I shouldn't take too much of your time. I'm sorry for coming late. Thank you. 

>> CEDERIC WACHHOLZ: Thank you so much, Judge Ubena. It was a very good closing with a few practical examples, too, you gave, and I'm happy we were able to connect at the end still. And I thank, again, all of the panelists, the participants online and in the room for hanging in there with us. It's the last session I think in terms of timing at the IGF, and I wish you a very good evening, and thank you again.