IGF 2021 – Day 2 – Networking #93 Global PeaceTech: how can good governance help societies avoid risks and exploit the potential of digital technologies?

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> We all live in a digital world. We all need it to be open and safe. We all want to trust.

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> We are all united.

>> MICHELE GIOVANARDI: Okay. I think that's a sign we are starting. I warmly invite our guests to activate their video so we can start the interaction. I see some people attending in person as well.

>> ANDREA RENDA: There's a remote hub from Bangladesh also connected. Hello, everybody. We see some of our guests there. It's time to get started. This is going to be a short session. So we want to make the most of it. Right?

And let me start by introducing myself. And introducing this initiative very quickly. And then I'll give the floor to some of the academics and researchers who have been involved in this initiative.

We are here today to discuss the use of technology for peace: the use of digital technologies for peace in general. And this is, as many of you know, a very tricky and increasingly important question, because digital technologies have shown enormous potential to be deployed for good and for bad. Right?

Suffice it to recall the famous quote from Stephen Hawking, right, who said artificial intelligence could be either the best or the worst thing ever to happen to humanity, and we don't yet know which.

The good thing, if we think about how to deploy and develop digital technologies in a more responsible way, is that it also depends on us. It depends on humans, at least at this stage, to create a framework for the use of digital technologies that is oriented toward peace, toward the global good and the public interest.

My name is Andrea Renda. I'm a Professor at the School of Transnational Governance. I work as a senior fellow at a think tank in Brussels.

I have in the past been very active in developing, in particular, regulatory principles and frameworks for digital technologies. I've also been a member of the European Commission's Expert Group on artificial intelligence that defined the ethics guidelines for trustworthy artificial intelligence. More generally, I work at the intersection of public policy and digital technologies.

I'm one of the academics involved at the School of Transnational Governance in this initiative which is the creation of what we will call in a few minutes, Michele will do that, Global PeaceTech Hub. You'll hear from some of my colleagues at the STG and some colleagues from the European University Institute, a broader, let's say, umbrella organisation. And colleagues from our partner organisations, New York University Governance Lab and University of Lucerne. I see them already connected.

What I will do is give the floor to the researcher who has been leading this initiative since its inception, in organisational but also, let's say, in substantive terms: Michele Giovanardi. I'll give you the floor. You'll walk us through what you think and why a Global PeaceTech Hub is needed. The floor is yours. Thanks to all of you for being with us today.

>> MICHELE GIOVANARDI: Thank you, Andrea. It's exciting to have a session here, meant as a networking session. We want to keep this very informal, really an informal chat. It would be nice to also be in the same room, but, you know, we are all over the world and we're connecting here together.

So this session is about this idea of the Global PeaceTech Hub, which is something we're trying to launch here at the STG, School of Transnational Governance in Florence. Of course, it's a wide reach.

So today, we are here together. We have different researchers that are working on PeaceTech. I can see them on the screens. What I'll do is introduce the concept of Global PeaceTech and leave the floor to you. Because this is really about sharing ideas about this topic.

So PeaceTech is an emerging field, and there are a lot of applications of technology to peace in the world. What we're trying to do here at the school is to map these applications and to try to make them relevant at the policy level, the governance level.

And I have a few slides to share. We identified some research tracks that we would like to focus on. We've identified four main areas, which doesn't mean they cover the whole PeaceTech subject, but, of course, it's a starting point for this kind of research. And here today, we have experts in Digital Identities & Blockchain Technologies, which is one of them. We have a track on Peaceful Digital Ecosystems: how do you use digital platforms to fight stereotypes and build trust and empathy?

We have a track on Predictive Analytics for Peace. And we have a track on Human Rights, Ethics and Tech Governance.

The reason why we're starting this: we're doing it with some partners, with New York University and the University of Lucerne, but the idea is also to create a hub of academics reflecting on this topic. And I would like to give the floor to some of these researchers before telling you something more about the idea of the hub.

So maybe we can have just a round of introductions, and you can share your take on PeaceTech, maybe starting from some faces that I know. So, for instance, Liav, working on digital identities. Liav, if you want to share some of your ideas about PeaceTech, I will leave the floor to you, and then we'll reconvene and I'll tell you more about the PeaceTech Hub.

>> LIAV ORGAD: Thank you, Michele. I'm really impressed with the way the slides were going up and down on your screen. I feel now old‑fashioned doing it without those technologies.

I'm Liav Orgad at the European University Institute. I'm also based in Berlin. I'm originally from Israel. My idea and my research is dealing with the connection between technologies, emerging technologies and citizenship and global governance and democracy. My interest in global tech for peace is using self‑sovereign identities and digital identities in order to promote peace.

The case study that I have in mind is one of the most salient conflicts we have in the world, the Israeli/Arab conflict. As you might know, there are 2 million people in need on the Palestinian side, in Gaza. And there has been a blockade for more than ten years, which creates a lot of human rights issues and humanitarian problems. This means that every year the international community, Europe, the United States, the Gulf countries and other partners, provides a few hundred million U.S. dollars to the Palestinian side; when there are conflicts, even more. The last conflict, the Israeli campaign and war in Gaza in 2019, ended with the Qatari side alone bringing half a billion U.S. dollars to the Palestinians in Gaza. There are some problems with that.

The way it goes, I don't know if you saw it on TV: at least on the Qatari side, people come with cash, a suitcase like from 200 years ago, and then distribute the money to those who are (?) by the Hamas government. The idea here is to use technologies in order to cut out all the buffers: Hamas, the Israeli side, all the people in between. And to use a virtual wallet in order to think how the money can get directly to the people who really need it.

Now, I know that there is strong interest on both the European side and from the American government; President Biden actually had a call to think about how technology can facilitate the transfer of financial aid directly to the people in need in Gaza. What I'm thinking about, connecting this to global tech for peace, is how we can really implement it.

Of course, there are lots of challenges: technological, political, economic. For instance, to give an example on the technological level, there are lots of questions about how to register a user, how to secure the virtual wallet, how to guarantee anonymity so that the governance side that runs the system doesn't know exactly what people buy, and so on.

There are political issues as well. Views. Kind of how to define political goals and so on.

Then there are legal issues of privacy, trust, and money transactions, and lots of economic issues. The idea, in brief (and I don't want to take more than one more minute, Michele), is to use blockchain technologies to create digital wallets, and then to use smart contracts to transfer the money directly to those people in need. Say you have 1 million people who have a virtual wallet, and let's assume we overcome all the technological barriers. The idea is that the money can be used for certain purposes defined by the smart contract, and/or there are certain purposes for which you cannot use the money, as defined by the smart contract. It's fully transparent, there will be some options to control the system, and mostly, it means more money, much more money, perhaps even 40% more, going to the people who need it.
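The purpose-restricted wallet mechanism described above can be pictured with a short Python sketch. This is a hypothetical model for illustration only, not a real deployment: the class name, purpose lists, and beneficiary ID are all invented, and in a real system these rules would live in an on-chain smart contract rather than application code.

```python
# Hypothetical sketch of a purpose-restricted aid wallet. All names
# (AidWallet, the purpose sets) are invented for illustration; on a
# real blockchain this logic would sit inside a smart contract.

ALLOWED_PURPOSES = {"food", "medicine", "shelter"}  # assumed whitelist
BLOCKED_PURPOSES = {"weapons"}                      # assumed blacklist

class AidWallet:
    def __init__(self, owner_id, balance=0):
        self.owner_id = owner_id   # registered beneficiary identity
        self.balance = balance
        self.ledger = []           # fully transparent transaction log

    def fund(self, amount):
        """A donor transfers aid directly, bypassing intermediaries."""
        self.balance += amount
        self.ledger.append(("fund", amount))

    def spend(self, amount, purpose):
        """Spending is valid only for purposes the contract permits."""
        if purpose in BLOCKED_PURPOSES or purpose not in ALLOWED_PURPOSES:
            raise ValueError(f"purpose '{purpose}' not permitted")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.ledger.append(("spend", amount, purpose))

wallet = AidWallet("beneficiary-001")
wallet.fund(100)
wallet.spend(40, "food")  # a permitted purchase; balance is now 60
```

Because every entry lands in a shared ledger, donors could in principle audit flows end to end, which is where a claim like "perhaps even 40% more" reaching people in need would have to be verified in practice.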

So the idea here is to use global tech for peace to try to facilitate peace in one of the most salient conflicts in the Middle East. Thank you. Michele, back to you.

>> MICHELE GIOVANARDI: That was a first example. We'd like to present a few more. A couple more then maybe we'll open the floor.

So another track that we identified and want to research on here at the Global PeaceTech Hub is on predictive analytics for peace. How to use data for good and to analyze conflict and peace and to drive a positive impact.

And on that, I would like to give the floor to one of our core partners, which is the New York University and to Stefaan Verhulst. If you have slides, you can share them. I hope this works.

>> STEFAAN VERHULST: Thanks. Can you see my slides? Thanks for having me. Delighted to see everyone here. Indeed, I wish we could all be in a room together, but let's hope that will happen soon. Then we can have more of a conversation where we actually also see the nonverbal aspects of how we engage with people, which I do miss on a regular basis through Zoom, especially when giving presentations where I only see myself, which, as you all know, is not the most enjoyable experience. Anyhow.

For those who don't know, GovLab, based in New York, our mission is to transform the way we make decisions using digital technologies and other tools and methods as well.

I think one of the big lessons learned from the last few years is that if there is one asset, and some might call it toxic waste, emerging from digital technology, it's, obviously, data. And I think the key question is then, of course, as a society, how do we start using data to actually improve the way we make decisions, including decisions that impact peace and conflict to a large extent.

That's one of the areas we are interested in: really trying to understand what the role of data is in peace, and how we use this asset that digital technology is developing in a more responsible and constructive way, so that we can actually use data as a tool for peace as well.

Now, there are, obviously, a whole range of possible areas where data can start answering questions that, if answered, would be transformative in how we go about peace. By the way, it would quite often provide a baseline of truth, which is a key concept here: conflict often results when we don't have a shared vision of what it is that we are actually talking about.

But the problem with data, which we have seen in a variety of cases, is that too often, when we start using data for improving decisions, we start from the data as opposed to starting from the questions.

And when we look into, and work with, a whole range of actors on data initiatives, it reminds me quite often of that cartoon where someone is looking for their keys and the only place they look is where the light is. It's the same with data: quite often we're looking for answers, and the only place we look is in the data we currently have available.

So what we try to do is turn the paradigm of using data around and actually start from the questions, and really try to become smarter about which questions matter, for which we can then start using data to transform the way we answer, and subsequently act upon the insight that has been generated. That was the impetus for The 100 Questions. In the past, I made numerous presentations where I said it's great to have 100 data sets as a result of an emerging new technological ecosystem, but what I would really like to have are The 100 Questions. Then we can actually look into how we use the data, if available, or how we make the data available, to start answering them.

And that's also the initiative that we would apply to the peace and conflict environment. Very quickly, Michele, it basically has four key components. First, it tries to establish a new science of questioning that could complement data science; I always say that in addition to data science, we need question science to actually make sense of the data in a meaningful way. And the first innovation is that it's a participatory way of formulating questions, because it matters to do this in a collective way, as opposed to having one individual, quite often within a university, come up with the question that he or she believes will be transformative. I think we need to do this more collectively and actually tap into the collective intelligence of society in order to identify the questions that matter.

We need to do this in a manner that is based upon a topic mapping, which is basically trying not to fall into the trap of missing the forest for the trees: we actually want a view of the forest, and then we can hone in on which tree we want to prioritize, for instance, with regard to nurturing and answering.

And then we need, indeed, a way to prioritize, because, obviously, we cannot answer all the questions at once. That, in itself, is quite often a political exercise that we also need to make more open, and we need to engage the collective in order to have a sense of which question matters.

And then, obviously, we need to not only formulate the question, but we need to then use the question as a purpose for the creation of data collaboratives.

We've done this now for a variety of fields. For instance, with Andrea, of course, we have an initiative around food system sustainability, where we also developed a topic mapping: we tried to understand what the issues associated with food sustainability are and how they interrelate, so you actually have a topic map that can be translated into a system map, and an actor map, so you know whom to engage.

We have also developed a taxonomy of questions, because, of course, there is not one type of question. We are interested in questions that provide for situational awareness, questions that provide for cause and effect, questions that provide for prediction (what Michele was referring to), and also impact assessment. As it relates to peace, we can have important questions in all four of those areas that can improve the way we go about peace as well.

And then, of course, we prioritize, and we typically do it around four kinds of criteria, such as what the impact is. Quite often, the impact in this context could be the impact on people's lives, even on saving lives. We also check that the question has not already been answered a hundred times, and that answering it is actually feasible.
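The prioritisation step just described can be illustrated with a minimal sketch, assuming a simple weighted-average scoring over four criteria of the kind mentioned (impact, novelty, feasibility, quality). The criteria weights, candidate questions, and scores below are invented for the example; they are not actual outputs of The 100 Questions initiative.

```python
# Illustrative sketch of prioritising candidate questions by scoring
# them against four criteria. All questions, scores, and weights here
# are hypothetical, invented purely for illustration.

CRITERIA = ("impact", "novelty", "feasibility", "quality")

def priority(scores, weights=None):
    """Weighted average of 0-10 scores across the four criteria."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total

candidates = {
    "Which regions face the highest near-term displacement risk?":
        {"impact": 9, "novelty": 6, "feasibility": 5, "quality": 8},
    "How does aid delivery speed affect local conflict incidence?":
        {"impact": 7, "novelty": 8, "feasibility": 6, "quality": 6},
}

# Rank questions so the collective can decide where data efforts go first.
ranked = sorted(candidates, key=lambda q: priority(candidates[q]),
                reverse=True)
```

In practice, as the talk stresses, the weighting itself is a political choice, which is exactly why the project argues for making it an open, participatory exercise rather than hard-coding it.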

What we hope to do later is actually tap into your wisdom already and start collecting ‑‑ I'll stop sharing here ‑‑ what other kinds of topics are associated with peace for which questions and data would matter.

>> MICHELE GIOVANARDI: Thanks, Stefaan. That was a nice overview on how to use data for peace. We'll do this work together for sure over the coming months and year. We're excited about that.

I see other people on the screen. I see Evelyne, who I think is the only one attending in person, in Poland. Evelyne is part of the University of Lucerne; of course, she can introduce herself. She will help this hub with the ethical and human rights dilemmas that arise when you have to deal with the governance of tech.

Of course, all these things we're talking about, the applications of tech for good, tech for peace, entail political, economic, and ethical dilemmas. And, of course, governance can help, you know, tip the use of this technology from a negative to a positive use. It's very important to understand these kinds of dilemmas and to find governance solutions to them.

So, Evelyne will focus on that a little bit. If you have slides, of course, I think you can share as well.

>> EVELYNE TAUCHNITZ: Oh. There's a problem with the echo.

>> MICHELE GIOVANARDI: That's on peace technologies ‑‑ we always have ‑‑

>> ANDREA RENDA: You need to mute the IGF 5 account. Or mute yourself. But it's less ideal. We'd hear from the overall room mic. But I don't know if we can do that.

>> EVELYNE TAUCHNITZ: Again, if possible. Yes, I think that's good. Just getting some technical support.

>> MICHELE GIOVANARDI: Okay. While you have technical support, maybe we can go around the table a little bit. Again.

>> ANDREA RENDA: We have a question already. There's a hand raised.

>> MICHELE GIOVANARDI: I don't see it. Yeah. Go ahead.

>> Good evening from Bangladesh.


>> Hello. Good evening from Bangladesh. Am I audible?

>> MICHELE GIOVANARDI: Yes, we can hear you.

>> Different geographic locations in Bangladesh have problems related to natural disasters, and this kind of data is available. Is there any collaboration or scope for research opportunities with other universities under the (?) So is it possible to make a collaboration with your team and others?

>> MICHELE GIOVANARDI: I will let Stefaan answer that.

>> STEFAAN VERHULST: Yes. Please. The idea is to build a collective of people working at the intersection of peace and data. And that includes disaster mitigation, which, if not done well, could either generate conflict or be made worse at a time of conflict as well.

And so we will definitely reach out to have whoever is interested from Bangladesh be part of our cohort of what we call bilinguals: people who are domain experts but also data experts and can help us formulate questions that matter, which subsequently, of course, we will seek to answer through increased availability of data, but also through the application of advanced data analytics.

So I can ‑‑ I don't know, Michele, whether the details have been shared. But feel free to reach out to me directly or through Michele or through Andrea, and we will make sure that you're part of our cohort.

>> ANDREA RENDA: Thank you. I'll add one thing. First of all, greetings to all of you. I see many people sitting at the table. Thanks for asking the question. I think what we can do, since you are connected, is share our contact details directly in the chat of this call, so the other participants have them too. And we can be in touch, because there are many things that can be done with data in terms of prediction, in terms of putting together and integrating data, not only for preparedness but also for response in case of environmental disasters. So understanding a little better which angle you want to take would be very interesting for us, to start a conversation on how to integrate your specific perspective into our work on data. And certainly Stefaan is the guru there. In the process of The 100 Questions, I'm sure there will be an opportunity to identify and answer some of the key questions that you are facing down there.

>> MICHELE GIOVANARDI: Very good. Thank you for your question. Let's see if Evelyne solved the technical problem. We also see you from a different angle. We know it's ‑‑

>> EVELYNE TAUCHNITZ: The hybrid meeting that really works ‑‑ that's great to know. Thank you, Michele, for introducing us and for presenting this great initiative. I'm representing the University of Lucerne, Institute of Social Ethics, and the research track on governance, human rights, and ethics. I prepared a couple of slides. I hope I can share my screen. Let's see if that works. Yes, I think so. No? You can all see that? Okay.

So there you also see my email. For those who want to reach out. How do I go on to a slide. Here. Okay.

So really, regarding the research agenda that we are adopting: in fact, I'm already conducting my own research in this area. I'm a senior researcher and also writing a second book on this topic.

So the aim is to develop global governance strategies to promote peace from an ethical and human rights point of view. This includes the critical evaluation of trends, opportunities, and risks of the digital transformation, to first better understand what situation we are in and what the impacts are (which relates to The 100 Questions I was thinking about), in order to understand where we are at this point in history. What does the digital transformation do to us? What opportunities does it give us? What new risks are arising?

And based on that assessment, we then discuss possibilities for policymakers to direct this change toward a peaceful future.

So the first question is really: where are we heading? And the second is: how can we actively guide this development in a direction that we choose?

And if we're talking about the direction, we need to know where we want to go. That's really where ethics comes in. Ethics is always about the 'what' questions: what kind of society, for example, do we want to live in? What should we be doing actively to reach whatever vision of peace we're adopting? So that is where ethics comes into play.

Let me first talk about human rights, then, and technology. Why are human rights important? Human rights are important because they are based on the underlying value of human dignity, and they provide a baseline and minimum standard. They protect vulnerable groups from abuse, which is particularly important in terms of online hate speech and harms and so on.

They also empower people to speak up, which is particularly important for the freedom of speech and expression. And human rights present a universal standard that holds governments accountable. Human rights are, of course, valid online and offline and need to be respected by governments, the private sector, and civil society: everybody, basically. But it's really the governments who are accountable there, to make the laws that, for example, tech companies then need to respect.

So that's really key if we talk about governance.

And what advantages does it offer to focus on human rights? They are legal norms, meaning they are not only a moral code of conduct but legally binding, and universally applicable under all circumstances, meaning online as well. And they're recognized at a global level, meaning we don't have to start from scratch: basically, all states have already accepted human rights as a universal standard. So there is something we can already build on.

And, briefly, let's talk about ethics, and why ethics is important.

Ethics is the 'what' question. What are our goals, and why? Also: what does peace stand for? What kind of peace do we mean? Peace is a broad concept. What vision of peace should we adopt? What kind of society do we want to live in? What values do we prioritize?

For example, freedom and security are often in conflict with each other. Human dignity, and justice as well, as mentioned.

Also, importantly, ethics ideally informs the social debates within society before they feed into politics. Politics usually picks up what is discussed in society, ideally at least, if it's a representative democracy; it should pick up the concerns of society.

So then, politics is more about the how. How can we achieve our goals? How can we find an agreement? How can we implement those decisions?

One output of politics could be legally binding agreements, and also the question of how we can make sure that everybody follows these rules. So that's really how these different areas link together, and how we're going to focus on this area at the University of Lucerne: on this interplay between ethics, politics, and law.


>> EVELYNE TAUCHNITZ: I'll present more in the breakout rooms. As far as the breakout rooms are concerned: what are the primary concerns of the people who choose that breakout room? Are human rights sufficient as a minimum standard, or not, and do we need something else? What is the need for additional ethical guidelines? That's it so far from my part, from Katowice, by the way.

>> MICHELE GIOVANARDI: Thank you for the interesting overview on the key questions there. Interesting work to do in that area as well.

And I think Kalypso raised her hand. I'd like to ‑‑ I see more participants. I see Innar, Liav, and Mark Nelson. We're here to connect, network, build something together and be in touch. So Kalypso, thank you.

>> KALYPSO NICOLAIDIS: Yes, indeed. I'm Kalypso Nicolaidis. One of the STG advisers and founder of this wonderful programme. Very much looking forward to hearing old friends on the call including Mark and others.

I just want to say where we are at this very second, to add to what Andrea and Michele said at the very beginning. We can already see from these presentations three kinds of connecting ambitions that we have. One is to connect individuals, researchers, with a problem, a challenge. We have Stefaan's questioning; that's what I tell my students: all I can teach you is to ask the right question. I don't have any single answer.

So Liav has a question: how do we bring funds to those who need them? In his case, in Gaza. That's a question to add to Stefaan's list. We can work on that.

The second thing is clearly listening to both Stefaan and Liav and adding Evelyne.

Thirdly, perhaps, to stress, having listened to the question, and just to add to what Michele said at the beginning: many on this call have been involved for years in PeaceTech. PeaceTech is a very important and growing industry, I would say, as well as an intellectual challenge and conceptual space around the world, as it were. And we have this 'global' label at the beginning: Global PeaceTech.

It will be important to investigate together what we mean by 'global'. Because global is, of course, about connecting; that is the hub part of the story. But it's also the idea that in this area the very micro (what happens to the individual: the individual who doesn't have an ID and will get one digitally, the individual involved in a micro conflict) connects to the global directly.

And, yes, there are all these intermediary stages, geographically, legally, et cetera. But we live in an era where the very micro and the very macro need to be connected, and are connected, by our technologies. So it's this micro/macro link that is also very important and embodied in this label of 'global'. That's all I wanted to add at this stage, Michele.

>> MICHELE GIOVANARDI: Thank you so much. That's so important and relevant. Maybe I'll add something more after this first round about the idea of the Global PeaceTech.

And if you want to share some thoughts, Mark Nelson, as well. I know you work at the Peace Innovation Institute, so I think you certainly have something to share on peace and technology: how we can link the two, how the two are linked, and how we can use technology in ways that promote peace.

>> MARK NELSON: Very nice to meet all of you here and to see old friends again. My name is Mark Nelson. Thank you, Michele. I founded the Peace Innovation Lab at Stanford back in 2008, and I'm currently on sabbatical at our institute in The Hague this year. So I'm nearby many of you, and hopefully looking forward to meeting in person if COVID lifts a little.

Our particular interest is in peace as a behavior. So we're very focused on looking at the behavioral data that sensors can pick up of humans doing positive, pro‑social behaviors toward each other. This also means that of all the different social behaviors that humans engage in, we're really focused on positive peace: how to increase the good behavior, rather than how to reduce the bad behavior. It turns out that DNA gives us a little bit of an advantage here.

And that means that we, unlike many of our colleagues around the world, are looking, as you were just alluding to, very much at the micro level of analysis: at sequences of tiny little behaviors that add up in the aggregate to the kinds of effects we care about between groups.

So I would say that in addition to the need for good data scientists, there's the need to be able to evaluate high‑frequency, low‑amplitude signals and make sense of them in such a way that we can then design interventions that ingest that kind of data and make very fast, fine‑tuning decisions in response.
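One minimal way to picture "high-frequency, low-amplitude" sense-making is a moving average over counts of small pro-social events, so that tiny signals accumulate into a visible trend an intervention loop could react to. This is a hedged sketch only: the event series, window size, and the "rising" trigger below are invented for illustration, and a real sensing system would be far more sophisticated.

```python
# Minimal sketch: smooth noisy per-interval counts of pro-social
# "micro-behaviours" so small positives add up to a visible trend.
# The counts and window size are hypothetical, chosen for illustration.

def moving_average(series, window=5):
    """Trailing moving average over a count series."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# e.g. cooperative interactions per minute between two groups (invented data)
counts = [1, 0, 2, 1, 3, 4, 3, 5, 4, 6]
trend = moving_average(counts)

# A crude trigger an intervention loop might watch: is positive peace rising?
rising = trend[-1] > trend[0]
```

The design point is that no single event is meaningful on its own; it is the aggregation step that turns low-amplitude signals into something a fast fine-tuning loop can act on.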

So we're paying attention, as a result, to the emerging field of autonomous peace technology. You see trends in this direction with smart contracts and so forth, as Liav was alluding to, which I'd like to know more about.

The other thing, then, is that this gives us a potential tool to, some people would say, repair capitalism; some would say reinvigorate capitalism; some would say maybe start a whole revolution in capitalism. Maybe for the first time ever, the combination of sensors that can detect these kinds of tiny behaviors in real time between group members, and actuators that can respond to them in real time, means it's possible to create new forms of capital that were simply impossible before, because the computation required was much too high. These new forms of capital allow us to get fundamentally to the relational infrastructure on which all of civilization is built, whereas currently most of our capital is built against very non‑human, inanimate things like bricks and mortar: tangible but not living infrastructure.

But the thing that enabled us to build all those things, of course, was our ability to coordinate our behavior with each other. So you can think of this as an emerging opportunity for capitalism to go down one level in the stack of value creation, to what truly generates our value with each other: our ability to coordinate our behavior, specifically for the purpose of mutually beneficial value creation.

So those are the kinds of things I'm interested in and eager to explore further.

>> MICHELE GIOVANARDI: Thank you. A very interesting conversation. I hope we can talk more about this.

So, the last person I see on the screen is Innar Liiv. If I'm not mistaken, you're a computational social scientist and an expert in government. So what's your take on this application of technology for peace? I don't want to put you on the spot. But if you want to share some thoughts, whether you find it interesting, a good idea or a bad idea, I leave it to you.

>> INNAR LIIV: I must say, number one, I agree with most of what Mark said. Since we had a chat with Mark two days ago, I'm not too sure that I'm able to ‑‑ but perhaps I would recommend one book I just happened to read, maybe to take a larger perspective on this. It's the book “Positive Computing: Technology for Wellbeing and Human Potential.” The goal is very good.

As Michele said, I'm a tech person; my interest is data. If I frame my current focus on PeaceTech very narrowly, then I think the core question for me is: how can people invest in peace? Basically, how can peace investments be profitable at larger scales? Currently, we are able to buy stocks that benefit from, you know, military activities, but there's no good way to invest in peace. This leads us to all those topics that Liav, for example, was mentioning: blockchain, which is often overused or overhyped.

I actually have this sense that in this case, you know, in developing a financial instrument to support either peace organisations or specific peace activities, blockchain could help and could be the solution.

So I would sum it up this way: if anybody's interested in having a chat, short or longer, about how we could invest in peace, I'm really happy to have all those chats. I know colleagues have also been involved with several initiatives like business and peace and so on; so basically, the idea that peace is profitable. Thanks.

>> MICHELE GIOVANARDI: Now that we've finished this first round, what I want to say and give back to you is, first of all, that this is very interesting. How do you invest in peace? How do you invest in PeaceTech, that is, in technology that promotes peace?

So, first of all, we need to study what it means to use this technology, because it always comes with dilemmas: economic dilemmas, political dilemmas, ethical dilemmas.

What we want to do here is to leverage the work that is out there. We had a very good introduction here today: digital IDs, data, all of these applications. There are more institutes and centers working on and studying this topic and this area. But what we want to do here is to find the nexus, if you want, to bring them together, see what the connections are, and work out how we can make sense of all of this at the governance level, with all the ethical and political implications that this entails.

And so before leaving the floor to Andrea, and to Liav again, of course, opening the discussion again, I'd like to bring up a couple of slides.

So what we want to do here with this project is, first, to connect, and you've seen it today: to connect the different PeaceTech initiatives that are out there but sometimes don't talk to each other. To really create a hub that brings different initiatives together into a constructive dialogue that is relevant to policymaking as well, because we're a School of Governance and that's what we care about.

The second thing, of course, is to research: to do a mapping exercise, produce reports, evaluate what is out there, see why it's relevant and in what areas we can invest the most, and create an innovative framework to study PeaceTech at the global level.

The third thing we want to do, of course, is accelerate: to accelerate policy innovation and try to bring new policy ideas to the political arena. To do that, we'll have specific events and hackathons, first to identify the problems, with the help also of New York University, so as to identify the key questions. And once we have those, we'll try to answer them with the tech and the data we have, taking into account all the ethical, economic, and political dilemmas again.

So it's very ambitious work we want to do, and we're just starting this journey. This is the very beginning, and we'll see where we go; we can tell you in a year, I guess. I leave it to Andrea.

>> ANDREA RENDA: Thanks. I wanted to echo a little bit what was said about the responsibility of investment. Currently, financial markets are struggling with the proliferation of signals in what we call ESG investment: environmental, social, and governance. And those indicators appear to be losing credibility over time. In the past, in the analog age, if you wish, we had problems, for example, with private certification: things like conflicts of interest and other issues. The question is what kind of signal the market or spontaneous corporate initiatives can give to investors, and how credible that signal is. This is something that will need to be studied in depth, to see whether digital technology can actually solve those problems and give a credible signal for investors, who can then concentrate on responsible investment in the direction of peace, including supporting peace organisations.

I think we discovered many of the other potential uses of digital technologies here that are extremely important. The role of decentralized architectures, and the reintermediation possibilities that digital technologies uniquely present. The potential, Liav talked about this, to empower and connect communities directly, if you wish. The idea of fostering trust through sharing the same information set, which is, indeed, the idea behind blockchain.

The possibility to create empathy, or to avoid the exacerbation of sentiments, by proactively trying to bridge distances between communities and curbing disinformation attempts. I see Hubert switched on his video. Using digital technology and data to really help these processes by creating a granular understanding of what's going on and leveraging data for prediction and decision‑making.

I don't want to put you on the spot, Hubert. I was reading, for example, that Facebook has been involved, initially not particularly actively, in the Myanmar issue. Right? With the Rohingya community. That's been one of the typical cases in which a large intermediary is faced with the dilemma: should we play the role of neutral intermediaries, as we have been doing since the beginning of the Internet? Or do we have a proactive responsibility to filter or moderate content, to avoid sentiments escalating and thereby leading to unwanted impacts? So, whenever you want to say something about this, I'd be interested.

Finally, Evelyne, I'd be curious to know whether there's a movement ongoing to build a declaration of human rights for the online sphere, or whether you think the existing human rights framework already gives us everything we need in that respect. So just a few sparse comments on this. Thanks a lot to all of you.

>> MICHELE GIOVANARDI: Liav was next maybe.

>> LIAV ORGAD: I want to share a thought I had while hearing all the participants speak. I've kind of taken for granted the idea of what peace is about, mostly because what I had in mind, what I am interested in doing nowadays, is connecting technologies to peace negotiations and peace initiatives in places in the world where there is a real battle. So it's connected to immediate, urgent conflicts, sometimes armed conflict, in which there are people in need, millions of them, not necessarily on the Palestinian side, but all around the world. And there is a big amount of money that is transferred to those people. You just want to make sure that they get it, and that they get as much as they can out of it and use it for good purposes.

But hearing other people, I think one of the fascinating things is how broad the terminology of peace can be. And I find lots of synergies between those urgent wars happening here and now, and peace in a more holistic or broader sense. For me, the session has been fascinating, especially concerning how exciting it can be to think of peace beyond the traditional approach. Thank you.

>> MICHELE GIOVANARDI: The last two questions from Liav opened ‑‑ if we want to go on for another two or three hours, we can. Thank you.

>> LIAV ORGAD: Congratulations. You have a project that you're ‑‑

>> MICHELE GIOVANARDI: It's going to be interesting. It's going to be interesting, yeah. I also had Hubert Etienne, who wanted, without going too far, to answer some of Andrea's comments. So let's try. And then we have the room, which also has a hand raised, and also Evelyne. Let's see. We're running out of time. I don't know if the IGF will kick us out or how it works, but the official time for our networking session is almost over; we have five minutes. Of course, we can talk more if we want to. I'll give it to Hubert, then the room, then Evelyne.

>> HUBERT ETIENNE: I won't take too much time or comment too much on the Myanmar situation. From a larger perspective, it's important, indeed, as you said, to have research focusing on these issues, and it will not come from Facebook internally; it should come from joint research projects between academia and these big tech companies. The one has the information and the impact, and the others have the time and effort to conduct real research on this, and, first of all, the authority to do so.

I think this is quite obvious. Then how we can partner in this way is another question. I'm trying to promote that as much as possible. If you have ideas, I'm always happy to consider them and help make projects happen with Meta.

I'm not an expert on this one, but on all the questions around content, I'd be happy to see more collaboration in this direction. That's it for me. Thank you.

>> MICHELE GIOVANARDI: Thank you so much. Hubert is working at Meta; we haven't said that at the beginning. Say one word about what you're doing.

>> HUBERT ETIENNE: I don't know if it's relevant. I'm a philosopher at Meta AI, taking a philosophical approach to understanding content moderation and practical decisions in this direction.

>> MICHELE GIOVANARDI: We have a question from the room: how can data and technology be brought closer to the people to ensure good governance? I don't know if that's the question or there is another question from the room. Let's assume that was the question.

>> ANDREA RENDA: It triggers a long answer. There's literature we can exchange on how to orient the deployment of digital technologies toward sustainable development. There's also mapping going on between the available digital technologies, with a focus on AI and the Internet of Things, and the Sustainable Development Goals. I can send some links in the chat.

Finally, in the EU approach to trustworthy artificial intelligence: when I was part of the High‑Level Expert Group on AI, wellbeing was one of the requirements of trustworthy artificial intelligence. This has not yet been translated into the upcoming EU regulatory framework on artificial intelligence, but, yeah, we're working on it, let's say. Giving incentives to make sure that responsible AI development is development oriented toward sustainable development is a key front in the cooperation on artificial intelligence and the Internet of Things.

>> MICHELE GIOVANARDI: Okay. I will share ‑‑ I think that's the best thing, maybe. I will put in the chat a link to a shared document. Maybe we can keep track of all these things there: names, questions, literature, references. Evelyne?

>> EVELYNE TAUCHNITZ: Hello. I want to pick up on two comments that were made. One is that peace is a really broad concept, and it's important not to forget that. What kind of peace do we want to achieve? That also needs a bit of discussion within The 100 Questions: what we understand by peace would be really important to ask as well, to see what people think there. Peace doesn't mean the same thing to everybody. Do we understand just negative peace, or also a peace that includes respect for human rights and freedoms?

Secondly, it's important to build on already existing efforts, especially with regard to human rights. There are lots of initiatives going on in that respect.

(Scheduled captioning ending)