IGF 2023 – Day 3 – WS #409 AI and EDTs in Warfare: Ethics, Challenges, Trends – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ROSANNA FANNI: Okay. I think we can get started. So, welcome to everyone here in the room. Maybe a few people are still coming. And also, a warm welcome to the online audience to this session, which is, at least in Japanese time, quite late in the afternoon on the third day of the IGF. My name is Rosanna Fanni. I actually worked for the Centre for European Policy Studies, in short, CEPS, until last week, and this session is a very special one because it's a topic that I'm personally very passionate about, that I've been working on for quite some time now. It is also a special session because the topic that we are going to address today is, somewhat anomalously, not really at the centre of the IGF discussions, which is, of course, the use of AI and Emerging Technologies in the broader defense context. Why? Why is that topic relevant for the IGF?

Well, let's just consider for a moment that almost all AI systems that we are currently speaking about, the models that are currently being developed, are, of course, used for civilian purposes, but at the same time, they could also be used for defense purposes. So, that means they are dual use. And as we also know, literally everyone today has access to data and can easily set up machine learning techniques or algorithmic models, and can use coding assistants such as ChatGPT. So, this means that basically almost all the technology ‑‑ the computing power, programming languages, code, encryption information, big data algorithms and so on ‑‑ has become dual use. And of course, the military, not only the civilian sector, has high stakes in understanding and using these technological tools to its own advantage.

Of course, we know that these developments are not really new. Things started with DARPA, which some of you may be familiar with ‑‑ the U.S. defense in‑house R&D think tank ‑‑ which was already at the time central to developing the Internet, software, and also the AI that we all use today. But as we see with the conflict in Ukraine, AI is already in full use and has big potential to change power dynamics considerably, and our panelists will speak to that.

While we have seen numerous developments and the use of AI already in those contexts, we see there is quite a disconnect from the civilian developments in AI, which include a large number of ethical principles, ethical guidelines, regulation, soft law, hard law, and so on. We don't really see that happening yet in the defense realm, which for me is quite concerning, because the stakes and the risks in the defense context may be even higher than in civilian ones.

And this is also, I think, a great example ‑‑ or a surprising example ‑‑ when we look at the current European Union approach to AI, the much‑applauded AI Act, which is risk‑based. It actually excludes virtually all AI applications used in a defense context ‑‑ defense AI is completely excluded from the scope of the AI Act ‑‑ which is funny because it's called a risk‑based approach, right? So, this just as a means of introduction. We have a lot of urgent questions and very few answers so far, so I hope that our panelists will enlighten us.

I will introduce the panelists in the order that they are speaking, right before each of them speaks. So, I will now introduce the first speaker. But first I also want to introduce Paula, my now former colleague, who is based in Brussels and joins us today as our online moderator. We have foreseen about half an hour ‑‑ roughly the second half of the session ‑‑ where we want to hear from you and answer any questions that you have. So get your questions ready.

And I think now it's time to dive right into the discussion. And to do that, I will introduce the first speaker, who is joining us here in person. I want to introduce Fernando Giancotti. He is a Lieutenant General of the Italian Air Force, retired now. And he is also a former president of the Italian Defense University. Fernando, the floor is yours. 

>> FERNANDO GIANCOTTI: Thank you very much, Rosanna. I am very honoured to be here to share some thoughts about this topic, which in the great debate about ethical use of emerging and disruptive technology kind of lags behind. We have heard so many organizations involved in so much, and rightly so, in ensuring ethical behaviors in many of the domains of our lives, and we don't see as many taking care of one of the most dangerous and relevant threats to our security and to the lives of our fellow people.

So, this panel, which I think is the only one here about the topic, is meant to give, let's say, a call on this. Wars and conflicts are on the rise, unfortunately. I don't need to expand on that; I think we have enough from the media. And in the field, a lot of violence is going on. And while this is at the forefront of attention, the implications of what is already being used in the field are not. Yeah? Closer? Yeah.

So, I argue that this is important, both for ethical and functional reasons. According to our recently published research on ethical AI in the Italian defense, a case study, commanders need clear guidance about these new tools. First, because ethical awareness is ingrained, both in education and in the system, which also implies swift penalties if you fail. This is due, in democracies, to the assignment of the monopoly on violence to the armed forces. So, ethical awareness is high.

And also, on very practical grounds, there are accountability issues. Commanders are afraid that without clear guidelines, they will have to decide, and then they will be held accountable for that. And furthermore ‑‑ and this is another major point that came out from the research, which, by the way, was authored by the moderator and co‑authored by me; you can find it on LinkedIn ‑‑ there is what I call the bad guy dilemma, which is a very functional problem about ethics in AI and EDTs in general, applied to warfare. If we do not carefully balance value criteria with effectiveness ‑‑ if we don't do a good job of finding that balance ‑‑ while the bad guys do not, let's say, follow the same principles, we will be at a disadvantage. This is another worry that came out from the research.

So, now let's go very quickly, in a few words, through what's going on about this on the battlefield, in the industry, and in the policy realm. On the battlefield, we can see three main timelines: before Ukraine, during Ukraine, and what we can imagine is going to happen after Ukraine, given many indicators. Before Ukraine, AI was not much used in warfare at all, except for experiments in a few isolated cases. But with the outbreak of the Ukraine war, things have changed massively, which means that there has been a drive to employ all the means available. Still, a very recent report, from a few weeks ago, by the Centre for (?) Analysis shows that there is no evidence of extensive use of AI in the Ukraine war, except for decision‑making support, which is, of course, critical.

Now, there are several systems that can employ AI, and maybe they have in some cases, and there is, for sure, a big investment in trying to increase the capabilities of artificial intelligence in warfare.

What we can expect, given also the big, huge programmes that are being developed, which already include artificial intelligence by design, is a huge increase in that use. And the industry, as a matter of fact ‑‑ also because it is largely a dual‑use industry ‑‑ is working hard on this. We cannot expand on the systems that are being developed, but really there will be a major change in the nature of warfare due to AI. So, this is briefly what happens now on the battlefield and what happens in the industry, with government commissions. And now, to what happens in the policy realm.

In the policy realm, the EU does not regulate defense because it is outside the treaties, but the EU is doing many things regarding defense that are outside the treaties, especially for the Ukraine war. So, this point is kind of a fig leaf, let's say, I think.

The UN, as a major international stakeholder, has focused on the highly polarized lethal autonomous weapons initiative, which doesn't move forward, and there is no comprehensive approach to tackle the more general framework. Single nations have developed ethical frameworks for AI in defense, but by definition ‑‑ and remember the bad guy dilemma ‑‑ these kinds of frameworks are relevant if they can be generalized at the largest possible level. So, we should, I think, according also to the multistakeholder approach that is typical of, for example, this forum, have the UN join and lead the way towards a comprehensive ethical framework, kind of a digital‑use umbrella in a multistakeholder approach.

The UN was born out of a terrible war. Its core business is to prevent and mitigate conflicts. And there is some good news. As Peggy Hicks of the Office of the High Commissioner for Human Rights said on Monday, "We don't need to invent from scratch an approach to digital rights because we have decades of experience in the human rights framework application." I can say that we don't need to invent from scratch a way to implement and operationalize ethical principles in operations, because we have a decades‑long approach to the application of international humanitarian law, with procedures and structures dedicated to that. The bad news is that we don't have those general principles to operationalize at the strategic, operational, and tactical level.

Before coming here, in my previous job, I was the President of the Centre for Higher Studies, which is our national defense university, and before that, an operational commander. And I can guarantee you that every operation has a very tight, rigorous approach to compliance with international humanitarian law, which goes down to specific rules for the soldier on the battlefield ‑‑ rules of engagement and things like that. So, my, let's say, thought ‑‑ which can, of course, be discussed, and which is put very simplistically in this way; maybe in the question‑and‑answer we can expand ‑‑ is that we should really make a general effort, because I think there is evidence that these ugly things that are wars and conflicts are not going away. We'd better try to do our best to mitigate them. Thank you.

>> ROSANNA FANNI: Thank you, Fernando, for your contributions. And I already have some questions prepared for you; we'll come back to you in the question‑and‑answer session. So, yes, exactly. I will hand over to the next speaker, who is also here with us in person, Pete Furlong. He is a Senior Policy Analyst at the Internet Policy Unit at the Tony Blair Institute think tank. And yeah, the floor is yours, Pete.

>> PETE FURLONG: Sure, yeah. And thank you for having me here. I think, you know, it's important when we talk about these issues that we ground them in specific technologies and think about what technologies we're talking about. Like Fernando mentioned, we can often get caught in these conversations about lethal autonomous weapons that can be, you know, pretty fraught, but there's a lot of other technologies that are important to talk about, especially when you think about emerging and disruptive technology beyond just AI. And when you look at the Ukraine war, things like satellite Internet are a very good example of that, but also the broader use of drones in warfare. And I think it's important to realize that this extends beyond just traditional military drones to consumer, commercial, and hobbyist drones as well. When we talk about things like that, it's important to realize that these systems weren't designed for the battlefield, and I think that's often the case for a lot of dual‑use AI systems as well. They weren't designed with maybe the reliability and the performance expectations that a war brings. And the reality is that when you're fighting for your life, you're not necessarily thinking about these issues. And so, it's important that in these forums we start thinking about and talking about these issues, because this technology has a really transformative effect on these conflicts. The use of consumer drones in Ukraine is a great example of an area where Ukraine's been able to leverage sophisticated U.S. and Turkish attack drones, but also simple custom‑built drones, and even drones from DJI, which is a consumer and commercial drone provider, and from other companies as well. And I think that you're really blurring the lines between these different types of technologies, which have different governance mechanisms and different rules in place. So, I think that's important for us to think about.

And I think the one other thing that I would bring up is that, again, moving beyond just the discussion about AI in weaponry to its use by the military more broadly, you really have the potential to escalate the pace of war significantly. And I think that's something for us to really consider when we talk about things like, you know, ensuring there's space for diplomacy, ensuring there's space for other interventions as well. And again, really, the intent is to accelerate the pace of war, and we need to really think about the consequences of that as well. So, thank you.

>> ROSANNA FANNI: Thank you. Thank you very much. And yeah, it's also good that you came back to this aspect that I already mentioned in the beginning ‑‑ that wars, so to say, have now almost become, as you say, like a community effort, because everyone can build a drone, can develop a model, and can kind of be their own actor, and that, of course, has manifold implications. Yeah, thanks a lot.

I'm going to hand it over to the third speaker, who joins us online from New Delhi, and I hope we are also able to see her on screen soon. I'll introduce her in the meantime. Shimona Mohan is a Junior Fellow at the Centre for Security, Strategy and Technology at the Observer Research Foundation, also a think tank, based in New Delhi, India. And Shimona, the floor is yours.

>> SHIMONA MOHAN: Perfect. Thank you, Rosanna. Just wanted to check if you can see and hear me well before I start off.

>> ROSANNA FANNI: Excellent. We can see and hear you well.

>> SHIMONA MOHAN: Fantastic. Okay. Thank you so much for having me on this panel. It's the perpetual blessing and curse of having talented co‑panelists that measuring up becomes simultaneously easier and harder, but I hope the issues that I will be speaking about will be of value as well.

So, since Fernando and Pete have already spoken about why ethics are important, I will just take the conversation further, into the domain of two separate methodologies around AI in different applications that we have seen being employed recently, and how they've sort of come about in this space.

So, the first one which I'd like to sort of give a characterization around is explainable AI. And while there is no consolidated definition or characterization of what explainable AI is, it's usually understood as computing models or best practices or a mix of both technical and non‑technical issue areas which are used to make the black box of the AI system a little bit more transparent so that you can sort of delve in and see if there are any security issues or if there are any blocks that you're facing with your AI systems in both civilian and military applications. You can sort of go in and fix them. So, that's definitely something that we're seeing coming up a lot.
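(As a purely illustrative aside, not something presented in the session: one technique often grouped under explainable AI is permutation feature importance, which probes a trained "black box" model by shuffling one input feature at a time and measuring how much performance drops. The minimal Python sketch below uses an assumed synthetic dataset and scikit-learn; all names and numbers are placeholders.)

# Minimal, hedged sketch of one XAI technique: permutation feature importance.
# Synthetic data only; nothing here comes from the session or a real military system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ensemble model stands in for the "black box".
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffling an important feature hurts held-out accuracy; shuffling an irrelevant one does not.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature_{i}: importance = {mean:.3f} +/- {std:.3f}")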

And as Rosanna mentioned earlier, DARPA was actually at the forefront of this research a few years ago. And now we are seeing a lot more players sort of come into this and sort of adopt XAI systems, or at least putting resources into the research and applications side of them. So, for example, Sweden, the U.S., and the UK have already started research activities around using XAI in their military AI architectures. And then, we also have a lot of civilian applications which are being explored by the EU as well as market standards by industry leaders like Google, IBM, Microsoft, and numerous other smaller entities which have much more niche sectoral applications around this. So, that's one.

Another thing that a lot of us are noticing in the defense AI space now is something called responsible AI, and responsible AI is sort of understood as a broad‑based umbrella terminology that encompasses within it things like trustworthy AI, reliable AI, and even explainable AI to a degree. It's mostly the practice of designing, developing, and also deploying AI in a way that impacts society fairly. So, countries like the U.S., the UK, Switzerland, France, Australia, and a number of countries under NATO have started to talk about and implement responsible AI strategies within their military AI architectures. And those who work around this area may also be aware of the Responsible AI in the Military Domain (REAIM) Summit in the Netherlands, convened earlier this year as sort of a global call to ensure that responsible AI is part of military development strategies, with about 80 countries present at this particular meeting.

But the interesting thing, and this is where I'd like to bring in a geopolitical angle to this, is also the fact that out of those 80 governments that were present at this meeting, only about 60 of them actually signed this global call. And it's interesting to note that the country where I come from, India, was one of the 20 that did not sign this call.

So, the analysis of this ranges across considerations around national security and a prioritization of national security over international security mechanisms, which is something that countries like India have pursued before as well. India is actually one of the four or five countries which have not signed the Nuclear Non‑Proliferation Treaty either, and that was on the same sort of principle of ensuring its national security over aligning itself with international security rules, regulations, and soft laws. So, that's an interesting dilemma here.

And another dilemma that I'd like to put my finger on is something that Fernando mentioned earlier, which is the bad guy dilemma. And of course, there are no clear answers to solve this bad guy dilemma. But something that's being brought up by the Responsible AI in the Military Domain discourse around military AI is the fact that AI‑based weapons systems, autonomous weapons, and other aids that have not been screened for responsible AI considerations carry a lot of tangible risks of exhibiting bias or error‑prone information processing in the operational environment in which they are deployed. So, systems which don't have responsible AI or ethical AI frameworks around them also pose unintentional harms, not only towards the adversaries at which these military AI systems are directed, but also possibly for the entity deploying them, which makes their use unnecessarily high risk, despite the other benefits which they give to the deploying entity.

And while we're on the subject of ethics and AI, I'd also like to spotlight another aspect of this ethics debate, which is gender and racial biases in military AI. We already know that there is a ton of bias that AI brings to the fore, not only in civilian applications but also in defense applications. And something that is given a little bit less emphasis is gender and racial biases. Gender is sort of seen as a soft security issue in policy considerations, as opposed to hard security deliberations, which are given a lot more focus. And the issue of gender in tech ‑‑ whether it's in terms of female workforce participation, for example ‑‑ is also characterized as sort of an ethical concern rather than a core tech one. So, this characterization of gender as an add‑on essentially makes it sort of a non‑issue in security and tech agendas, and if at all it is present, it's usually put down as a check box to performatively satisfy compliance‑related compulsions.

But we've seen that gender and race biases in AI systems can have a lot of devastating effects in the applications where they are employed. There was actually a Stanford study a few years ago of publicly available information on 133 biased AI systems across different sectors ‑‑ so not limited to military AI, but across the ambit of dual‑use systems ‑‑ and about 44% of these actually exhibited gender biases, amongst which about 26% included both gender and racial biases. Similar results have also been obtained by the MIT Media Lab, which conducted the Gender Shades study on AI biases, where it was seen that the facial recognition software which is popularly employed in a lot of places now recognises, say, for example, white male faces quite accurately, but fails to recognise darker female faces up to 34% of the time. This means that if the particular AI system that you employ in your military AI architecture has this kind of biased facial recognition, 34% of the time when it looks at a human, it doesn't recognise her as a human at all, which is, of course, a huge ethical issue as well as an operational issue.
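(As a purely illustrative aside, not part of the intervention: disparities like the ones just cited are typically surfaced by disaggregating a model's error rate by demographic group, in the spirit of the Gender Shades audit. The short Python sketch below uses synthetic predictions and assumed, made‑up miss rates ‑‑ only the 34% figure deliberately echoes the number quoted above ‑‑ to show the basic computation.)

# Hedged sketch: disaggregating a face detector's miss (false-negative) rate by group.
# All data below are synthetic placeholders, not results from any real audit or system.
import numpy as np

rng = np.random.default_rng(0)
groups = np.array(["lighter_male", "lighter_female", "darker_male", "darker_female"])
group = rng.choice(groups, size=4000)
y_true = np.ones(4000, dtype=int)                      # every sample really is a face

# Simulate a detector whose miss rate depends on the group (illustrative numbers only).
miss_rate = {"lighter_male": 0.01, "lighter_female": 0.07,
             "darker_male": 0.12, "darker_female": 0.34}
y_pred = np.array([0 if rng.random() < miss_rate[g] else 1 for g in group])

for g in groups:
    mask = group == g
    fnr = np.mean(y_pred[mask] != y_true[mask])        # per-group false-negative rate
    print(f"{g:15s} miss rate: {fnr:.2%}")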

So, going back to the argument made by Fernando: ethics are not just a soft issue; they also have a lot of operational risks attached to them.

And my last point here would then be about how we are seeing these sorts of blanks emerge in how military AI is developing in terms of both gender and race. And these blanks are sort of three‑fold. So, the first blank here would be the technology blank itself, which means that when you have and are developing these AI systems, you have skewed data sets or you have uncorrected, biased algorithms, which are producing these biases in the first place.

The second blank, then, would be in your legal systems ‑‑ your weapons review processes, which don't have gender‑sensitive reviews, or race‑specific reviews, or reviews of any other particular aspect of your military AI system which could be biased.

And then, the third set of blanks would be in terms of a lack of policy discourse around AI biases in military AI systems and how they affect the populations they affect most. So, the idea for us now is to take forward these conversations about ethics, about biases, about geopolitical specificities and put them wherever we can, so that these are not left behind and we're not only looking at military AI systems as killing machines, but also as systems that need to be regulated according to a certain set of rules and regulations. Thank you so much. And I look forward to all the questions.

>> ROSANNA FANNI: Thank you. Thank you, Shimona. That was also super insightful. Also thank you for raising the issue of gender and race, which I think is already a big issue in the civilian context, and again, this is replicated in the defense context, where definitely not sufficient attention is paid to this issue at the moment, at least.

Okay, so, that concludes the first round of interventions. Thanks a lot to all the speakers. I will now hand it over to our online moderator, Paula, to give us a short summary, so to say, of the points of view you just heard, and then maybe also already start with the question‑and‑answer session. So, we are taking some online questions first. And also, I will invite the three of you, once you answer, to also refer to the points that were made, as we don't have, so to say, a dedicated round of reactions from your side ‑‑ so feel free to include them in the question‑and‑answer session. Okay. Over to you, Paula.

>> PAULA GURTLER: Yes, thank you. Greetings from Brussels, where it is still morning. So, thank you for an interesting start to my day, so to speak. For me, there are so many interesting points that you've raised that it's really difficult to just settle on three takeaways. The first one would be that we need ethical principles at the international level ‑‑ that we need to find some kind of agreement so that we can move forward with ensuring more ethical practices in military AI applications. That also relates to the accountability issues that were raised by the commanders in the Italian defense case study.

The second one ‑‑ for me, probably the main takeaway of the entire session ‑‑ is that the conversation is much bigger than LAWS. By just focusing on lethal autonomous weapons systems, we really miss out on much of the conversation on explainable AI, responsible AI, and also what you mentioned, Shimona, in the last intervention: that we really miss out on gender and racial biases if we just focus on LAWS and these extreme use cases. So, I think, really, that the conversation is bigger than LAWS is one of the key takeaways. And another one, which complicates the use of AI in the military, is, of course, geopolitics and the power plays that are pitting stakeholders against each other. So, there are already so many interesting points.

And I would love to give our online audience a chance to raise their questions.  Please, feel free to raise your hands, type in the chat, if you have any questions. But if there aren't any, I have my own questions, which I'm really excited to ask. 

So, I will just start off with my own question, and then hand back to the online participants. Please don't hesitate to be in touch via the chat. What I'm wondering, on the ethical principles that we need for AI in military use, is: do we need different ones from those that we already have? We know how many ethics guidelines are floating around, and I'm wondering, do we need different ones for the use of AI in military contexts? I also heard that bias plays a role, responsible AI, explainable AI. Do we need ethical principles that look different from those we have right now to cover the military domain? So, thank you so much. I'm really looking forward to the continuation of the discussion.

>> ROSANNA FANNI: Okay. I don't know who wants to address this question? And then, of course, we will also go to an in‑person round of questions. You will not be forgotten, but maybe we can first address this one. I don't know who wants to go first? Okay, we'll do this round first and then we'll have another round of questions. Yeah, go.

>> PETE FURLONG: Yeah, I think it's a great question, and I think, you know, in an ideal world, you know, these principles would be the same. And I think, you know, that would be great. But I also just think there's an element of maybe not necessarily do we need different principles, but do we need maybe more targeted principles that address some of the issues that we're seeing more specifically, because I think, you know, again, most of these AI principles are very useful and important, but they're, you know, intentionally broad because they're meant for a wide variety of applications. And I think that that poses a challenge when we talk about how do we implement them, and you know, you can end up in a situation where different countries interpret these things very differently. And I think that's maybe the risk in having pretty broad interpretation here. 

>> ROSANNA FANNI: Shimona and then Fernando. Do you want to say something? Yeah, maybe we have Shimona first and then you? We will have Shimona first and then you can go, yeah.  Please.

>> SHIMONA MOHAN: Thank you, Rosanna. Just to add on to Pete's already very substantive point, I would also highlight the fact that in the absence of national policy prioritization of military AI, it's very hard for countries to actually go ahead and form intergovernmental actions around military AI. So, while we talk about ethical principles ‑‑ since military AI is not a tangible entity you can control via borders ‑‑ the most effective sort of ethical principles might only emerge from intergovernmental processes around this. But to get to that step where we are discussing substantive intergovernmental processes, I think the first step is to have a good national AI policy for all the countries who are currently developing military AI systems or any other AI systems which might have military off‑shoots. So, that would be sort of my two cents on this.

>> FERNANDO GIANCOTTI: Very quickly, I think that the quality of the process does not change from what has always happened, also for all the other ethical issues that have been raised and tackled, for example, after World War II, with the constitution of the UN and then the implementation of the agreed guidelines. There has always been a very dialectical and contradictory process. We will never get a perfect framework that everybody is going to comply with, but striving for the best possible balance has, I think, no alternative, because the alternative is to let things go, you know, possibly in the worst possible way. So, we have no certainty, judging also from what we see with the other big agreements ‑‑ about nuclear and also conventional weapons and many other frameworks. And Shimona mentioned exactly that some countries prefer their national interests in specific cases, and so this is going to happen, but this doesn't mean we shouldn't strive to push forward compliance as much as possible through, as has been said, the intergovernmental process, and especially the organizations that have the responsibility to promote this.

>> ROSANNA FANNI: Thank you. Fantastic ‑‑ we're right in the midst of the debate. We will now take the in‑person questions, maybe one after the other, and then I also hear from Paula that we have another online question, so we will take that afterwards. But first, if you would like to ask a question, please also briefly introduce yourself with your name and your affiliation. I see you don't have a microphone. Maybe you would like to use this one. It's a bit far away, but ‑‑ ah, you have one already.

>> AUDIENCE: My name is (?) from DW Akademie from Germany, also from broadcast. I would like to ask Fernando: from your perspective as a military leader, does AI make our world safer or not? Because we are coming from the massive retaliation strategy of 25 years ago. And if I see now that we are living in a situation where we may think, from the perspective of states or NATO, that a preemptive strike is better when the other side has massive AI capacities ‑‑ also in tactics, when we compare our own capacities on the battlefield ‑‑ then we also might say, okay, let's go for a preemptive strike. And so, in the end, that will mean that our world will be more insecure than it was before because of AI. So, what do you think?

>> ROSANNA FANNI: Thank you. We'll just take the other question first and then you can answer together. If you would like to ask a question now ‑‑ I see you have a microphone, excellent. You have to switch it on. Yeah.

>> AUDIENCE: Okay. So, thank you. I'm Rafael Delis, a scientist in infection biology. I am concerned about the invisible battlefield that is biological warfare and non‑state actors. Now, with AI and deep learning, generating bioweapons has never been easier. So, I'd like to use this forum to ask what should we do to ensure biosecurity and peace?

>> ROSANNA FANNI: Thank you. Also a very pressing question, for sure, especially in the international context. Over to the speakers for their replies. Maybe Fernando, you'd like to go first this time, and then I'll let Shimona and Pete fill in. 

>> FERNANDO GIANCOTTI: The question is very interesting. By the way, the paper I just mentioned touches on whether the massive use of AI will make things more stable or more unstable.

Now, there are good grounds to say that it could go either way, as things have always been ‑‑ one way or another. What I think ‑‑ and I'm very interested in the augmented cognition that AI can bring ‑‑ is that many of the strategic miscalculations that led to wars could be avoided if we really get to an excellent degree of cognition, augmented cognition. For example, if you study wars, you see that most of the time, it was a strategic miscalculation that led decision‑makers to start wars for which they paid a very high price, much more than expected. Had they had less fog of war, most likely they would not have done that. The Ukraine case is a perfect example of that. So, I think that if we can ‑‑ and right now we cannot ‑‑ use AI for an actual quantum leap in strategic decision making, then this should be a stabilizing factor in most cases. There will anyway be cases, I think, in which this augmented cognition will prompt, you know, intervention. And so, again, either/or. But better to go toward augmented cognition, judging from what we have seen of miscalculations so far.

>> ROSANNA FANNI: Okay, Shimona, Pete? I don't know who wants to add something also to the second question? Shimona, do you want to go next?

>> SHIMONA MOHAN: Sure. I can add just another point to Fernando's already very well‑done answer, and also take the second question.

On the question of whether military AI makes our world more insecure or safer, I think all weapons systems are developed with the singular focus of giving yourself an edge over your adversary, as a result of which, in a systemic sense, it definitely makes the world a lot less safe. But then, we also have this idea of what kind of cobra effect will come out of this ‑‑ what kind of opposite effect can we see emerging from this? And I think Fernando highlights that very well when he says that this augmented cognition could lead to a higher threshold for war, which might eventually then make the world safer. But again, these are just optimistic viewpoints at this point, and it remains to be seen how this plays out in the global scenario.

On the second question, of biosafety and security, it's very correct to say that AI is something that will contribute a lot to this domain as well. And in fact, it's already a risk factor that a lot of issue domains and experts are aware of. There is a documentary on Netflix called "Unknown: Killer Robots," and it was chilling in the sense that it showcased a lot of these potential military applications, which we haven't really explored a lot in the autonomous weapons debate at the intergovernmental level. One of these risk potentials was how AI can be used to make a lot more poisons and bio toxins and generate them at an alarming speed of which we as humans at this point are not capable. And this is exacerbated even more by generative AI applications now. So, it's very right to assume that AI will lead to a lot more of these risk potentials around biosecurity coming up as well. But at the same time, anything that is a genius for the wrong things can also be a genius for the good things. So, let's hope that while we have malicious actors or nefarious entities sort of taking over the biosecurity domain from the negative side, there are also scientists and policy researchers and innovative actors working on the regulation side to help prevent that from happening, or at least to have punitive measures in place before and when it happens. And that's, unfortunately, the best I can say for now.

>> PETE FURLONG: Yeah, and just to add to what my colleagues have said, sort of quickly here. On the biological weapons side of things, one of the concerns that I have is that, for these types of use cases, if I'm using a generative AI system to develop some sort of, you know, drug to help people, that needs to work every time. If I'm developing a biological pathogen for some sort of attack vector, it only needs to work once. And so, I think there's a gap in terms of capabilities that is very important for us to recognise when we talk about trying to address this at this stage. And I think it poses a significant challenge.

The good thing I will say is that this issue of biological weapons is something that people are starting to talk about a little bit more. I know that with the UK AI Safety Summit, that's been one of the topics that is going to be addressed there.

And then actually to build on what Fernando said earlier, I think when we talk about this idea of improved cognition, I think one of the potential fears that I have with that is that cognition is only as good as your sensing. And so, actually, my background's in robotics. And so, one of the things in robotics that's very challenging, right, is that you can have a very good robotics software system, but if your sensors aren't strong enough and your sensors aren't able to perceive the information, then that doesn't really buy you anything. And so, I think it's important for us to consider that, you know, these AI software systems exist in a broader system and in a broader ecosystem, and it's important to consider all those factors as well. 
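(As a small illustrative aside, not something demonstrated in the session: Pete's point that cognition is only as good as the sensing behind it can be sketched with a toy simulation in which an otherwise optimal decision rule degrades as sensor noise grows. Every number below is a synthetic assumption.)

# Hedged sketch: even a well-calibrated decision rule degrades with noisier sensing.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
true_state = rng.integers(0, 2, size=n)        # 0 = no threat present, 1 = threat present
signal = true_state.astype(float)              # ground-truth quantity the sensor observes

for sigma in (0.1, 0.5, 1.0, 2.0):             # increasing sensor noise
    measurement = signal + rng.normal(0.0, sigma, size=n)
    decision = (measurement > 0.5).astype(int)  # optimal threshold for this symmetric setup
    accuracy = np.mean(decision == true_state)
    print(f"sensor noise sigma={sigma:3.1f} -> decision accuracy {accuracy:.1%}")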

>> ROSANNA FANNI: Absolutely. Thanks a lot. And if I may just abuse my moderator role a little bit and add one tiny point about bio risks, bioethics, and biotech, so to say: I think with COVID, of course, you have seen a complete shift of mind‑set when we look at institutions. And I can just speak about the EU, because that has been the focus of my study. But I think with the lesson learned, so to say, from the COVID pandemic, institutions have at least woken up and have seen that they need to be much better prepared to tackle those challenges and those risks emerging from the rapid spread and also the cross‑fertilization between technology and bio.

And as you may also know, the Commission itself has established and staffed a new Directorate‑General, a new DG, which deals with pandemic preparedness ‑‑ but not only pandemic preparedness, also the future of, indeed, protecting civilians from those risks, including bio risks. And I have friends who work in this department, so it's always very insightful to hear that institutions are already thinking about this issue. But I think much more still needs to be done, and especially when you look at international institutions, I think much more foresight will need to happen. Foresight, as we know, is a tool. It's not about foreseeing the future or being a storyteller of what will actually happen, but about being prepared, knowing certain scenarios, and knowing certain risks. I think there needs to be much more investment in research and development into foresight, into methodologies, into actually training civil servants ‑‑ capacity‑building, which is also mentioned a lot in this context ‑‑ so that, eventually, institutions themselves can be prepared, and hopefully, then, also the world as such, so that, you know, especially Global South nations are not left behind. Because, of course, if you have more capacities to set up your institutions accordingly, then you will be better prepared, hopefully. But this should not mean that there should be, again, a race between Global North and Global South countries over who arrives there first. And of course, often Global South countries do not have the appropriate resources to work on those topics. So, I think it's really important that international institutions such as the United Nations take on more responsibility on this point.

Okay. Now I've talked a lot, having abused my moderator role. I'll hand it over to Paula for the online question. I hope the person is still there and still interested in following. So, yeah, over to you, Paula.

>> PAULA GURTLER: Yes, I think I can confirm that the person who asked the question is still here and interested and engaged, because they have asked a second question. And I would like to offer it to you, Lloyd, to actually take the floor yourself, if that is possible? Otherwise, I'm also happy to read your question out loud.

>> ROSANNA FANNI: So, I think it should be possible, if the technical department is just able to ‑‑ I think the person can unmute herself/himself and just ask the question out loud. 

>> LLOYD MWASHITA: Very good morning. All protocol observed. Thank you very much for the session, first and foremost. It is a great pleasure to be part of great conversations that would obviously be impacting the way the world is going to be looking at things. So, my first question is ‑‑ oh, sorry, my name is Lloyd, and I'm actually calling in all the way from South Africa.

So, looking at, obviously, the great work that everybody's doing on the platform, my first question would obviously be more around: what are the ethical considerations when developing and deploying autonomous weapons systems, and how do we strike a balance between human control and automation? How does a body such as the IGF look at that?

And should I quickly ask the second question? Sorry, Paula. Okay, awesome. And then the second question is: how can AI be leveraged to reduce civilian casualties and minimize collateral damage in armed conflicts? And what ethical principles should guide this use itself? Has any thought been put around that as well? So, those are my two ‑‑ well, call them three ‑‑ main questions from my side. Thank you.

>> ROSANNA FANNI: Thank you. Thank you, Lloyd, for asking the question and joining us all the way from South Africa. Greetings from Kyoto. I don't know who wants to answer this question? Pete, do you want to go first this time?

>> PETE FURLONG: Yeah. I mean, thanks a lot for some great questions. And maybe just to take your second question first. I think there's been a lot of talk about, you know, trying to use AI to better target strikes and reduce the likelihood of civilian casualties. So, that's been kind of a way in which people have been talking about using AI to reduce the likelihood of those issues. But I mean, I think it's worth also bringing up kind of the flip side of that, which is that, you know, if you can conduct more targeted strikes, we might see more strikes. And I think, you know, when you look at the use of drone strikes in the past 20 to 30 years, maybe that's the reality.

In terms of ethical principles being used for autonomous weapons, I think the REAIM Summit's goal is to try to get to that, but for now it's more a call to action at this point, and I don't think we necessarily have anything concrete. And the UN Convention on Certain Conventional Weapons has tried, and somewhat failed to this point, to address that as well.

>> ROSANNA FANNI: Thank you. Maybe over to you, Shimona?

>> SHIMONA MOHAN: All right. Thank you so much for those questions. And I think these are sort of the cardinal questions that we also have to ask ourselves when we research around military AI and ethics.

On your first question about the balance between automation and ethics, I think that's a very, very important question, because that's also something that the explainable AI domain is struggling to contend with. In fact, the performance‑explainability trade‑off is something that's very well established within the AI and machine learning space, and it refers to the idea that the more explainable ‑‑ or, let's in this case say, the more ethical ‑‑ your system is, the less performant it would be, or the less capable it would be in terms of its performance levels. So, there is this sort of established idea which pits these two values against each other.

My personal take would be that it probably is a false dichotomy. There's definitely a lot that we're looking into which makes sure that we're not compromising on one aspect of a weapons system to make sure that another aspect is fulfilled. So, in an ideal scenario, of course, this would not even be a question; you would always go for the ethical point over the performance factor. But because this is a realistic question, I think the idea is more around ensuring that these systems have and retain their level of performance while also having an add‑on of ethical or responsible or explainable AI systems attached to them.
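(As a purely illustrative aside, not part of the intervention: one pragmatic way to check whether the performance‑explainability trade‑off actually bites on a given task is simply to benchmark an interpretable model against a more opaque one on the same data. The Python sketch below does this on an assumed synthetic dataset; models, parameters, and numbers are placeholders.)

# Hedged sketch: measure, rather than assume, the interpretability-vs-performance gap.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree is easy to inspect; gradient boosting is the more opaque baseline.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow decision tree accuracy:", round(interpretable.score(X_test, y_test), 3))
print("gradient boosting accuracy:   ", round(opaque.score(X_test, y_test), 3))
# If the gap is small on a given task, the "trade-off" may indeed be a false dichotomy there.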

Of course, how well this is ensured is only something the countries' own military systems know, because this kind of information is usually classified or sits behind a number of barriers when it comes to weapons testing, et cetera. But the idea would definitely be to make sure that we're not compromising on one for the other. And I think policy conversations are also going according to that tune: we're not policing your capacity to build your weapons systems to their fullest capacity, but we'd also like to make sure that these particular systems are ethical enough to send out into the world without causing undue harm.

So, as of this point, that's where the conversation is stuck. Of course, as and when we advance more in this field, we'll have a lot more nuanced ideas about where this particular balance stands at that point.

On your second question, I think Pete sort of summarized it perfectly, and I have very little else to add, except for the fact that maybe in terms of casualties, we are still looking more towards civilian AI systems being employed, rather than military AI systems. Of course, this line is blurred in a lot of places. But for example, facial recognition systems are a good example of dual use technology. These systems have been deployed in, for example, the Russia/Ukraine conflict, where soldiers were sort of identified through these facial recognition systems and then their remains were sort of transported to either side. So, there are a lot of these, so‑to‑speak, civilian AI applications which are being employed in conflict spaces. Whether or not they minimize civilian casualties is still a larger question that we're contending with. 

>> ROSANNA FANNI: Thank you. And the last word to Fernando?

>> FERNANDO GIANCOTTI: Thank you, Lloyd, for your great questions. Very quickly. The research I mentioned before ‑‑ and by the way, I want to thank the Centre for Defense Higher Studies for having sponsored this research ‑‑ has a table. So, if you go on Rosanna's LinkedIn profile or mine, you will find this table with five examples of ethical principles which have been developed by the UK, the USA, Canada, Australia, and NATO, which talk, basically, about human moral responsibility, meaningful human control, justified and overridable uses, just and transparent systems and processes, and reliable AI systems. So, these are, as I said, the principles which have been developed by single nations, and I just made a kind of summary, because they are different, okay? They are not the same in the table.

Now the problem is to get, let's say, a more general framework, as we said, which will have to be negotiated, and that will not be easy. On collateral damage, I can speak with cognition, because, as I said when I talked about operationalizing international humanitarian law, there are processes and procedures with specific rules and specialized legal advisors who evaluate compliance and, let's say, clear the commander's decision to engage. In some cases ‑‑ and I can tell you this is not classified information ‑‑ we had drones over an area for 48 hours to observe movements before deciding to engage, you know. So, this means that in today's system already, this issue is of high priority. That doesn't mean that there are never mistakes, unfortunately.

AI, if it is used with a man in the loop, can help us do better. I can tell you that at this point, at this stage of the game, I have heard nobody saying that they would relinquish the final decision to the machine. I think we cannot think that. We cannot trust AI to drive a car, which is a simple task. Can we trust it to do much more relevant things, you know?

>> ROSANNA FANNI: Okay. Thank you. Thank you very much. Being mindful of the time ‑‑ we are already three minutes over ‑‑ I will conclude the session now by saying that I think we answered some questions, but we have probably added a lot more questions during the conversation. So, yeah, feel free to reach out to the three speakers. You can find them all, I think, on LinkedIn, and they're always more than happy to engage on these topics with you and to connect. And also, my former colleague Paula has already put the link to the study in the chat, so you can retrieve it and read it on your own ‑‑ the case study that Fernando and I co‑authored. And yeah, with that, I'm wishing everyone a great rest of your day or evening, wherever you are, and thank you a lot, again, for your attention.