IGF 2021 – Day 1 – WS #279 Fighting disinformation as a cybersecurity challenge

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world. We all need it to be open and safe. We all want to trust.

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> We are all united.

>> RAYNA STAMBOLIYSKA: Hello, everyone. I hope you can hear us. Yes. You can see us also. I'm seeing signs from nice people. And a yes in the chat. Thank you for being with us today. Let me try and solve a few technical quirks I'm still seeing. People are connecting. And, Lucien, are you here with us?

>> LUCIEN CASTEX: Yep. I'm connected. Hey.

>> RAYNA STAMBOLIYSKA: Hey. So you are still in Katowice today. And we are all here. Okay. Shall we begin then or wait for two, three, more minutes? I'm seeing people are still entering the room. So let's wait for two, three, more minutes. Lucien, you give me the go when you feel it's fine.

>> LUCIEN CASTEX: Yeah. The room still has people coming in, so let's say two minutes.

>> RAYNA STAMBOLIYSKA: Okay.

>> LUCIEN CASTEX: I guess it's two minutes. Rayna, you can go ahead and start.

>> RAYNA STAMBOLIYSKA: Okay. Thank you. So hello, everyone. And welcome to this workshop. It focuses on a point of discussion that I would like us to have, which is where disinformation and cybersecurity could meet. And why they should meet.

So a few housekeeping rules before we begin. We are in a hybrid session. So it comes with its own challenges, with people, you know, being in person in Poland and others being online from elsewhere.

So Lucien is on‑site there doing stuff with his phone. He will be helping me collect and address questions from the people who are on‑site. And we'll be having other questions from participants online, which I invite you to submit through the discussion window that we have for the Zoom tool.

So this discussion will last for about an hour and a half. And it was intentionally made that long because we want everyone to be able to speak their minds and share insights with us. Rather than, you know, racing against the clock.

So this will be a rather informal discussion. I've asked people not to share slides to, you know, keep this informal.

So we'll start off with a short introduction by everyone. And I will also ask Lucien to remove his moderator hat and to put on his member of a commission at the French government hat. And share with us a few insights from, let's say, more government perspective because there have been developments recently on disinformation specifically in France.

And then we'll proceed to hearing from other ‑‑ our three other participants who are hailing in alphabetical order from Vienna. Michael is here. Mohamed I believe is in London for the time being. Viktoras is I believe in Lithuania.

It's up to you to tell us more about the challenges and, perhaps, the solutions you are seeing from the French government, in terms of how it is helping us approach, address, and make sense of informational threats at large.

>> LUCIEN CASTEX: Thank you, Rayna, for giving me the floor. First, everyone in the room, you can sit around the table, obviously. If there are free seats, you're, obviously, welcome to sit around.

Indeed, thank you, Rayna. I'm a member of an expert committee with the French regulator on disinformation. And it was quite an interesting experience, indeed, because regulating disinformation has a lot of implications for speech, but also for privacy.

So the ongoing dynamic in Europe in regulating disinformation is focusing basically on the balance between freedom of expression and cybersecurity, as well as the resilience of society. Which is really the topic of the workshop today.

In France, indeed, we have a law to tackle information manipulation online and better protect democracy against hybrid threats: the law on the fight against information disorder. It's been enacted on 22 December 2018. So it's been already two years. Three years, even. And it's quite an interesting point. Because basically, the law creates a new range of duties for online platforms. Including an obligation to cooperate with the regulator. And it's been put into practice. Basically, the idea of the law is to target the rapid spread of fake news online. With particular attention to election campaigns, as you may have guessed. Either just before or during elections.

The law also creates a legal injunction, allowing an interim judge to qualify content as fake news and to order its removal, with reference to the 1881 law on the freedom of the press, following the criteria of the fake news being manifest, disseminated deliberately on a massive scale, and obviously leading to a disturbance of the peace.

Another interesting point, very quickly, is that the law also promotes transparency obligations for digital platforms. And a duty of cooperation for these platforms. And compliance with that duty is entrusted to the French council. And the French council developed an innovative and quite collaborative approach to fulfill its new duties. Including putting together a project team, which is now becoming a new directorate of the regulator. And putting together an expert committee, which I'm a member of, composed of 18 experts from different backgrounds, to try bringing expertise to the council when fighting information disorder.

Also, to complete its duty, the regulator also conducts yearly an extensive (?) to cover reporting mechanisms, transparency of algorithms and, for example, information directly provided to end users of digital platforms. It's quite a collaborative way to do so. And also with, obviously, the trend of legislation both in France and in the European Union, the regulator is gaining more powers as concerns content moderation.

Thank you. Back to you, Rayna.

>> RAYNA STAMBOLIYSKA: Yeah, thanks for setting the scene. So why I asked Lucien to start off is, well, he will then be taking over as moderator to, you know, the in‑person meeting. But also because framing the question of information disorder, as he mentioned it, solely through the angle of what can and cannot be said, is the way we have been talking about, well, handling the protection of information today.

And the reason, the rationale behind the proposal of this workshop, was to kind of extend and go beyond that small, like, limiting definition. To say, look, tomorrow, we'll be having, you know, autonomous systems making decisions on their own. But those autonomous systems, they don't magically train themselves. They need training sets. And to compose those training sets, we need to ensure integrity and availability of information that is representative and adequate.

And so we are not here talking about speech as reflected through, you know, social media and so on. We are talking more and more about information, structured or otherwise. And about information as a source for training tomorrow's, if you like, autonomous decision‑makers.

So this is where we kind of come onto a territory that is a different one, which is the one of cybersecurity. And which is basically about saying we need to protect assets. That's what cybersecurity is about. And in those assets, so far, at least in the West, let's put it that way, we've been focusing pretty much on protecting information systems. But with the focus on the system, rather than on the information. And we've been focusing quite a lot on protecting infrastructures. And at least this technical view, this technological view, is starting to be, perhaps, a little, you know, short. Given the challenges that we have today.

And Lucien already mentioned one such challenge that we have today which is hybrid threats. And so this is where, you know, we are going into something that has a much broader definition than just regulating speech on social media. Or just protecting vital infrastructure.

And so how are we going to kind of join those things and make that (?) if you like, between those two very different challenge topics. Because the way policymakers, researchers, and entrepreneurs from different fields of activity have seen the whole discussion start coming around is about how do we manage or, you know, use digital technologies in the context of civil and military conflicts.

So without further ado, I would like to introduce us ‑‑ to introduce you, sorry ‑‑ to our guests today. So by alphabetical order of the first name, I am starting with Michael who's hailing from Vienna. So each of our guests will have the opportunity to introduce themselves and to, like, within five to ten, like, really max, ten minutes, tell us more about where they come from intellectually. Why they're versed in or why they're interested in the multistakeholder discussion that's going on around tackling hybrid threats and making basically cyberspace safer.

So our second guest today will be Mohamed El Dahshan hailing from London for the time being. You're an economist by training.

With us as well hailing from Lithuania is Viktoras Dauksas. Correct me if I'm mispronouncing your name. Who is the Director of debunkeu.org. He focuses with his NGO on challenges in the Baltics and some other countries, including Georgia, Montenegro, and Poland, if I'm correct. And, yeah, so, Michael, the floor is yours for your introductory remarks.

>> MICHAEL ZINKANELL: Wonderful. Thank you very much. I hope you can hear me loud and clear. Thank you very much for inviting me to today's workshop. I'm very excited to be here and to engage also in a discussion with all the participants. I see that the room is already quite full. So I'm happy to see that there are also people who have been able to travel to Poland in person to attend this session.

As Rayna has introduced me already, my name is Michael Zinkanell. I am the Deputy Director of an Austrian‑based think tank, the Austrian Institute for European and Security Policy. We focus on researching various security challenges in the setting of the European Union. We work closely with Austrian ministries, in particular, advising the Ministry of Defense and Ministry of Foreign Affairs. Also closely related and collaborating with various European stakeholders.

My role, my research role, focuses on investigating hybrid threats. In particular, disinformation campaigns and cybersecurity. However, I must say that I'm not coming from a technical background. My background is in political science and peace and conflict studies. So I analyze the security implications that cybersecurity, cyber-attacks, and disinformation campaigns have for the European Union. The Austrian government, in particular.

And in analyzing these threats for the last 3, 3 1/2 years, I'd just like to briefly share a couple of thoughts which are on the one side probably going a little bit into the motives, the backgrounds of disinformation. Also touching upon probably some more thoughts that will certainly paint a picture within the realm of a new security environment that we still have to get accustomed to.

And so when looking at disinformation, we have to first acknowledge that it's nothing completely new. Disinformation has been there for the last centuries. For the last millennia. I very much like to quote at this point a passage from "The Art of War," where in this famous book Sun Tzu wrote that disinformation, misinformation, the ability to use methods of deception, is very vital in any sort of war environment. In any sort of fighting. So the quote goes, "When we're able to attack, we must seem unable. When using our forces, we must appear inactive. When far away, we must make the enemy believe we're near." I believe this can also be applied to today's world. However, there have been some changes over the past decades, especially with relation to the last couple of years. Because in the past, spreading misinformation, of course, was very different than today.

So the underlying factors, three underlying factors that I'd like to briefly outline, are the following. First, the speed and accessibility of spreading information online, both harmful and informative, truthful information, has completely changed. It's now possible, due to technological advances, to reach a huge number of people by relatively simple, relatively cheap means. And it has also led to a change in roles. In the traditional roles of who's creating and who's receiving information. There has been a shift: today, especially online, on social media, everyone becomes a consumer and at the same time a publisher or creator of information or disinformation.

Second, there is an increasing and undeniable dependency on online and digital technology. Simply the fact that we are all joining this workshop together from various cities, I believe, is proof that the COVID pandemic has further accelerated this trend. We constantly depend, in the private, the personal, and the work environment related to the pandemic, on technology and especially digital technology. Online technology.

And third, we are at the moment in a stage where geopolitical tension is rising. Where new forms of threats are emerging. And unconventional areas of warfare are on the rise. Without going further into detail, the current geopolitical environment between the U.S. and China, but also between European actors and Russian actors, is increasingly tense. If not to say hostile.

This leads us to a new digital reality. Due to these underlying factors, there's a growing interconnectedness between the digital and the physical, analog, real world. And these interconnected ‑‑ this interconnectedness has implications. Today, we also see that algorithms prey on these developments. They learn biases, behaviors, from us that are constantly being produced and reproduced. So to sum it up, we're living in an era where hybrid threats and disinformation become a new type of accelerated warfare.

And this arena does not target tanks and does not involve the typical traditional means of warfare. Air, sea, and the maritime dimension. It's more going into the hearts and minds, into the thoughts and the emotions and the beliefs of people.

So without becoming too philosophical, basically, the concept of truth is being attacked. Where people have become discouraged. We're at a point where the truth doesn't really matter anymore. It's becoming more blurred. I usually try to refer to that term which is called gaslighting, which means that a victim is being targeted through deception. Ultimately, aiming toward delegitimizing personal beliefs. Personal views. Core values. To sow seeds of insecurity, of doubt, of distrust. And this works very well at the time of a global pandemic. Of crises. In an era where uncertainty is something that we're constantly faced with. And, therefore, flooding disinformation in that specific time has become, as we have seen, as research has shown as well over the last two years, very aggressive.

So where does the link to cybersecurity come into play? I see it in various fields. First of all, as I mentioned before, this interconnectedness between the digital and the real world, but also in terms of real tangible attacks that combine aspects of cybersecurity with disinformation, disinformation campaigns.

For instance, hack and leak campaigns. Also incidents where Twitter accounts were very (?) Social media profiles have been hacked to create so‑called zombie profiles that are then being used and exploited to spread disinformation. Because if you're hacking not just any account but a (?) account, the spread of disinformation, the willingness to believe that person, who is probably relatively influential, is increasing. There has been substantial research that shows that various accounts have been targeted. Spreading disinformation.

For instance, the Hong Kong protests, the outbreak of COVID‑19. Just to give a small example here.

Looking to the future, and Rayna has already mentioned it briefly, we will be faced by algorithms, by machine learning, by artificial intelligence software, that are further accelerating the trends that we're currently seeing.

Just to give you an example here as well, there have already been incidents where AI software was used to imitate the voices of CEOs to demand transactions. But also, as I have recently discovered, there have been incidents where governmental officials held Zoom conferences with other governmental officials from other countries, only figuring out later that the other party was actually not who they were supposed to represent. The ability to apply artificial intelligence software to recreate not only the voice but also the appearance of another person, in an environment of talks from one government to another, can be very tricky. It can then also be used to spread harmful information. And harmful is probably the buzzword here, because in my eyes, the intention behind spreading disinformation and the intention behind targeted cyber-attacks is very similar. And we can see there are similarities here. The intention to deliberately destabilize societies. To erode trust. To undermine political trust. And to sow these seeds of uncertainty, in general.

Therefore, I see the intention, the motive, behind these information attacks and cyber-attacks as a common denominator that we have to keep in mind when analyzing current threats. But also when trying to figure out new means, especially whole‑of‑government, whole‑of‑society approaches, to overcoming future challenges. I believe it is absolutely necessary to include not only the traditional security environment and community of Ministries of Defense and Ministries of Foreign Affairs in tackling these issues, but also scholars, academics, Civil Society Groups. Companies. The private sector. Also experts in the field like ourselves. Into creating a safer environment.

So I'll just stop here because I believe I'll otherwise go on for too long. And I'm very happy to take questions and to engage in a discussion with all of you at a later stage. Thank you.

>> RAYNA STAMBOLIYSKA: Thanks, Michael. Yeah, we'll have the opportunity to continue exchanging views in short time.

And, yes, I agree that intention and the question of trust are quite paramount in that discussion.

And when you said, intention, it triggered something that has also been, you know, in my mind that's also been one of the sources of this ‑‑ of the motivation of this workshop today. And that's also why I invited Mohamed to join us.

Quite often, when, you know, when something happens, a lot of people ask who and how. And an increasing number of people start asking why. You know, that's the intention. And what we've realized, especially, you know, in the light of the recent revelations of the Facebook whistle‑blower, Frances Haugen, is that the design and the economic, if you like, or the financial gain of spreading information that may be harmful is paramount in tackling those challenges.

And I realized that being in the disinformation space, the cybersecurity space, we're still struggling to keep this economic parameter in mind and take it into account.

So I'd like to turn to Mohamed. You're a trained economist. So, and you understand things I don't. So tell us a little more about where you come from. And I know you focus ‑‑ because we've worked together in the past. And I know you focus on emerging markets, as in African markets, but not only. And you also focus on the frailties of governance. And on the way or ways, actually, those frailties impact prosperity and peace.

So can you, with your background and your vision as an economist, tell us a little more about what fuels, basically, harmful actions online?

>> MOHAMED EL DAHSHAN: Million‑dollar question.

>> RAYNA STAMBOLIYSKA: The easiest one for you, right?

>> MOHAMED EL DAHSHAN: First, thank you very much for having me. Thank you, all, for being here. It's a great pleasure. I'm very sorry I can't be in Poland. But I'm happy to join you from England where it's definitely colder than anywhere else might be.

I am from Egypt. So I'll try to bring in some examples from the Arab or African region.

And then I'll ‑‑ yeah, I'll bring in some of the economic aspects.

I come more to that issue from ‑‑ so on one hand from my own background, my own activist background, shall we say, from an issue of freedom of speech, of online freedom of speech. But then a lot of my work now, as you said, is development focused. And I look a lot at fragile states and post‑conflict countries. And with the mass scale of disinformation campaigns we've been having, it's been important and very concerning to look at the implications of this on what we could term fragile situations or fragile cases.

So if you allow me, I'll start with examples, I guess, from the region. And then maybe we'll bring in a couple of the economic factors involved.

So someone mentioned earlier misinformation in electoral campaigns. Right? And I think the biggest case that we've seen this year on the continent was in Uganda, which had the specificity of being quite diversified in terms of the messaging. So it went from the political messaging and accusations of corruption and whatnot, to the ad hominem attacks. For instance, there was quite a stream of misinformation in that election that was all about attempting to defame the opposition candidate. A lot of it focusing on suggesting that he was gay. Sort of taking advantage of an ambient homophobia, I suppose, amongst the part of the electorate.

Also, if you look at misinformation surrounding the war in Tigray in Ethiopia, because this is something we've watched with a lot of concern. A lot of it is Diaspora generated. And it has been amplified a lot by the Ethiopian government and, at various times, other governments as well. So you have this confluence of states involved in that conflict taking advantage of the tools they have to amplify misinformation that they fomented.

In Sudan, we've seen lots of spikes of disinformation. Russia has been accused of being behind quite a decent chunk of that. With the creation of a lot of pages spreading news ‑‑ hyping Russia, really, as a good friend of Sudan, and the Russian military base in Port Sudan. Probably one of my favorite posts, if I can say, has the Sudanese Prime Minister as a vampire trying to steal Russian aid from a crying child. Just the most dramatic and over‑the‑top stuff that we've been seeing in Sudan this year.

And one thing noticeable about that one, that one was a Facebook page that was removed. The page was called The Better Past, which is a bit of a theme we've seen in a number of countries. A number of post‑conflict countries. With a lot of the misinformation focusing on the nostalgia aspect of whatever we have currently is bad, you shouldn't trust the transitional authorities. You should go back to supporting the warlords, basically, and various actors in charge in the past.

Libya, I'll probably use that as a last example just because it's such a fascinating case. Libya is a country where ‑‑ there's a multitude of countries that have (?) all of this is definitely complemented by misinformation. Right? We know Russia, Egypt, Saudi Arabia, and Turkey, at least, have been implicated in misinformation campaigns over the last seven years.

And an important concern in that is that with all the fake organisations pretending to be speaking on behalf of others, sort of, the Libyan media consumer is aware that there's a lot of misinformation. Right? The problem is that this leads them, one, to lose track of the genuine voices. And the second thing, to mistrust generally. Sort of the Civil Society as a whole. And that is something that I think warrants a lot of concern.

So, yeah. So there's been quite a high level of ‑‑ and they've been learning a lot. Right? All of these actors have been learning a lot from their experience in Africa and the Middle East, specifically. That's what I'm focusing on. That may or may not replicate what you might have seen in other parts of the world. I'd love to learn more.

For instance, what I'm seeing is a model of franchising. Right? So we know that Russia's Internet Research Agency is paying local actors in the countries they operate in. Paying them more than what it generates for the campaigns out of Russia.

We've seen sleeper pages. Pages flagged as being founded by the same actors. But looking at the content, there doesn't seem to be much that would raise an eyebrow. So we're wondering whether that's a long‑term investment, in a way, in building that brand to potentially use it as a misinformation source as in the past.

The level of complexity and level of organisation that a lot of those state actors have been reaching is aggressive. We've probably heard a lot about the Riyadh troll farm, especially around the murder of Jamal Khashoggi. A lot of the information was ‑‑ people were paid, essentially. I don't want to say an electronic army. It is a bit more fancy.

Yeah, this is quite a level of sophistication. Another thing I found fascinating is that in 2020, we had ‑‑ we learned that Hezbollah, the Lebanese political party, had their own misinformation training centre, a training academy. Which was, yeah, which basically trains people in, well, disinformation, fake accounts, manipulation, et cetera. This has been directly connected to one of the organisations in Iraq which is likely to be behind the assassination of Al‑Hashimi, who was an independent researcher. Right? So a long, several months of disinformation about that person culminated in him getting murdered. Right? So the costs are very clear.

I don't want to take too much time. So just on the cost issue, well, I mean, there are two things here. From the side of the perpetrators, I suppose, a lot of what we're seeing in my part of the world is state led. And the truth is, for those countries, you know, like, their CPM isn't probably the most important factor in that case. Right? But, and that's a problem. That's a problem. Ideological misinformation is something that, especially when it's coming on the back of a state actor, doesn't have the same challenges of financing that a private one might have. Even the private ones are seemingly doing very well. Right?

And we have some data that is similar from the United States and from the United Kingdom. But the revenues that people managing misinformation websites and whatnot can generate are simply appealing. From a pure economic perspective. And the more egregious, the better.

There was a "Washington Post" estimate around the time of the previous American elections that getting your fake news shared from someone within the Trump campaign could translate into roughly $10,000 worth of ads. Right?

So the economic incentives are at that level. So it works both at a personal level and the pure, you know, how much clicks bring you. But also what are the long‑term ‑‑ what are the long‑term benefits that a country could obtain. A country like Russia, for instance, if we're talking about Russia to convince the local population to allow them to open or not object to opening a military base.

Now, a question there, and I'll probably end on this point. We can probably break down the numbers a little further, if you will. But the interesting thing that we're seeing is that a lot of that happens in countries, and it's done by foreign actors. Right? Especially in developing countries. And the question is, how much could those countries, so the African and Arab countries in that case, how much space do they have to try to control that? Or at least try to push it back. Or even what incentive do they have to go after that? Because, you know, like, we have the evidence that the Russian ‑‑ the IRA folks are paying people ‑‑ were paying people in Sudan, I believe, to manage their pages. So those are local perpetrators within Sudan. Why is the government not particularly going after them?

And it comes down to the fact that simply, they benefit from that, too. And the people designing the content make sure that, you know, some of the ‑‑ I want to rephrase that sentence. That in many of the countries, many African and Middle Eastern countries, a lot of the foreign propaganda or foreign misinformation also benefits some local actors sufficiently that they have no incentive to try to curtail that. Right?

So whether it's the same messaging or whether it's complementary messaging developed simply to make it slightly more friendly or more appealing to those political actors, that sort of hybridization of messaging allows it to endure.

I think I should probably stop here. That's been eight minutes or so. I thank you all for your time. I'm more than happy to come back in the discussion.

>> RAYNA STAMBOLIYSKA: Yeah, thank you for that. That was very rich, and I'll definitely get back to the whole franchising and cost‑effective, you know, implication of foreign actors in the Q&A session afterwards.

But I would like now to turn to Viktoras Dauksas. You said something about nostalgia, looking at my notes. The better past. You know, like, I come from eastern Europe as well. Viktoras comes from eastern Europe as well from the Baltics. I think we've seen our fair share of the better past, you know, online, in newspapers, and so on, and in political campaigns.

What is very interesting, also, in the Baltics is that we have seen in the recent months a huge, yeah, uptick, if you like, of events and tensions rising with the challenges posed by the Belarusian management, if you like, channeling of migrants through Lithuania to Poland.

So this is also something I think Viktoras will touch on a little bit. But more generally, can you tell us from your work, from where you are, because I know you have Civil Society actions, but also you interact with people from Ministry of Defense, NATO, and so on. Can you tell us from that perspective how is this challenge of poisoning content, poisoning interactions, of promoting harmful content online, seen and how it is acted upon, if at all, from where you work. From your area.

>> VIKTORAS DAUKSAS: Thanks, Rayna, for the introduction. Hello, everyone. I'm Viktoras Dauksas. Head of debunkeu.org. My background is applied physics. For the last 14 years, I worked with media and media technologies. So I do really understand how the back end of things works. How the infrastructure is built. How GDPR works. And how media in general operates. Like, having this background, I've worked with the biggest media companies in the Baltics here.

Context of Lithuania. So 20 years ago, the first economic community started to kind of do this work, to debunk these disinformation cases and present them. 14 years ago, the Lithuanian military started working on it, and the media joined. We kind of evolved as an organisation countering disinformation.

What's really important, I think, is that we need to speak about how do we actually analyze misinformation and how do we kind of agree on things, how they should be, and what should exist and what shouldn't.

And over the last few years, we have worked with a team of five Professors and our kind of large team of disinformation analysts to practically analyze and also to develop a methodology for disinformation analysis. We analyzed almost everything that exists in the English language that you can find in the academic world, books and other things, from the last 20 years. From that, we developed a methodology. So now we call it the three‑step analysis. Starting from the source analysis. Analyzing the source, the communication actor participating in disinformation campaigns. Then we assess every content piece separately. So trained analysts are reviewing those. And then we assess the circumstances or the context in which this is being spread and why this is actually happening.
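
To make the three-step analysis concrete, here is a minimal Python sketch of how a single case record might carry the three assessments, and how the intent criterion separates disinformation from misinformation. The structure, field names, and flags are illustrative assumptions for this transcript, not debunkeu.org's actual methodology or tooling.

# Illustrative sketch of the three-step analysis described above.
# All field names and flags are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DisinfoCase:
    url: str
    source_assessment: dict = field(default_factory=dict)   # step 1: who is the communication actor
    content_assessment: dict = field(default_factory=dict)  # step 2: what the content piece claims
    context_assessment: dict = field(default_factory=dict)  # step 3: where, how, and why it spreads

    def is_disinformation(self) -> bool:
        # Disinformation (as opposed to misinformation) hinges on intent:
        # false or misleading content spread deliberately to deceive.
        return bool(self.content_assessment.get("false_or_misleading")) and \
               bool(self.context_assessment.get("deliberate_intent"))

case = DisinfoCase(
    url="https://example.org/article",
    source_assessment={"known_disinfo_outlet": True},
    content_assessment={"false_or_misleading": True},
    context_assessment={"deliberate_intent": True, "campaign": "manufactured migration crisis"},
)
print(case.is_disinformation())  # True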

I think that's a really important thing. In today's discussions, kind of from our analysts' perspective and from the Professors and academic community, we should really agree to stop using fake news as a buzzword. It's not a term. It's academically kind of a nonsense thing. So it should be removed from all communications. And so I think we should just avoid it.

So two terms that are actually really useful are disinformation and misinformation. We have to be very clear what the difference between them is.

And the difference is just the intent. Is this just an honest mistake in spreading or amplifying some false content? Or is it actually kind of a systematic way to spread disinformation, to deceive the citizens and all those who are being targeted with that information campaign or operation?

So there are more terms, but they are less practical. These two are easy to remember. And very easy to use in large‑scale analysis.

Kind of ‑‑ I'll jump in also on the manufactured migration crisis. It's important to remember, too, to understand that with the current flow of information, analyzing this with just the naked eye and a computer and any number of analysts is ineffective. It's really hard to do. So you need some kind of process automation. And methodology helps to do that well. So methodology allows for more people collaborating together, synchronizing so they can talk to each other and reproduce the same results across different analyses with different people, which is a really big thing.

Then we kind of work a lot to develop the infrastructure in which we can automate the processes. We can support the analysts. That worked really well.

Now we monitor 3,000 disinformation outlets. 3,000 websites. And thousands and thousands of groups and pages that are spreading disinformation. And social media. Only the websites produce 2 million to 3 million content pieces every month in 32 languages. And we are kind of analyzing how they're kind of systemically deceiving citizens all around the world in their languages and running these disinformation campaigns.
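
As an illustration of the kind of process automation and analyst support mentioned above, the sketch below shows a trivial keyword pre-filter that routes scraped items to a human review queue. This is a toy under stated assumptions: the outlet keywords are made up, and real monitoring at the scale described (thousands of outlets, millions of items a month, 32 languages) would rely on trained models and proper language handling rather than a keyword list.

# Minimal, assumption-laden sketch of routing scraped content to analysts.
from collections import deque

# Hypothetical narrative keywords per language; a real system would use ML classifiers.
WATCHED_NARRATIVES = {
    "en": ["border crisis", "migrants"],
    "lt": ["pasienis", "migrantai"],
}

def flag_for_review(item: dict) -> bool:
    """Return True if an item should be queued for a trained analyst."""
    keywords = WATCHED_NARRATIVES.get(item["lang"], [])
    text = item["text"].lower()
    return any(keyword in text for keyword in keywords)

review_queue = deque()
scraped_items = [
    {"lang": "en", "text": "Migrants massed at the border overnight", "url": "https://example.org/1"},
]
for item in scraped_items:
    if flag_for_review(item):
        review_queue.append(item["url"])
print(list(review_queue))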

Speaking more generally, it's important to assume ‑‑ so there are the superspreaders, or superspreading events. So the big disinformation campaigns or the hybrid attacks that are kind of part of hybrid warfare. Targeting those could kind of solve a big part of the problems around the world connected to information operations in general.

So our recent work, so we are really heavily working on this manufactured migration crisis in the Baltic States. And it's really important to note that it's manufactured. It's not a real migration crisis, as how it's portrayed.

So last year's elections in Belarus, presidential elections: Lukashenko still kept his seat, even though illegitimately. Sanctions were imposed. For seven months, none of the EU leaders communicated with Belarus. Then on May 28th, Lukashenko announced he would start to flood Europe with drugs and migrants and they would not control the border. Then on July 1st, they announced there would be a visa‑free kind of regime for 73 countries from the Middle East and Africa.

And then we started to see a huge flow. In Lithuania, there were more than 4,000 migrants who crossed the Lithuanian border. A big part of them are economic migrants who were kind of deceived by the regime. Kind of sold on what's called the package deal. So they're selling ‑‑ visas included. Flights included. Taxis from the airport to Minsk, then to the hotel, and another taxi to the market to buy sleeping bags, batteries, and other equipment, to even attack the border wall.

Then they are also ‑‑ the service also included bringing them directly to the Lithuanian or Polish border. This is an organized migration crisis. A verified report provided the leaked documents that showed how the Belarus office, the cabinet, signed agreements to kind of contract an Iraqi person, an agreement to increase the number of flights from Baghdad to Minsk, and other agreements to kind of transport migrants coming to the airport, to the city. And then to the border.

So there's a lot of evidence already kind of proving this. And starting from the 8th of November, there was this huge attack on the Polish border. With more than 2,000 people. And following this, there was an 18‑times increase in the kind of disinformation spreaders that we are monitoring. So just a week later, this produced 18 times more content to kind of deceive people and spread misinformation about that.

We kind of report a lot of the cases that we find. And we do find, actually, a lot of cases. So we monitor this crisis in ten languages: in multiple countries from which migrants are coming, the mainstream media there; countries in the Baltics; countries connected to the Kremlin and Minsk. Every month, we spot around 1,000 cases of mis‑ and disinformation.

The latest kind of big thing that happened: we were reporting to Facebook all of these cases of disinformation, of really illegal content being published on Facebook. Finally, Facebook started to remove those accounts. I think that's the first time in the Baltics that that happened. And Facebook directly attributed 41 Facebook and Instagram accounts and five Facebook groups to the Belarusian KGB. That's a big deal. Evidence of now having not only the border patrol involved, the Belarus office involved, and other institutions, but the KGB as well.

Also we see huge involvement from Kremlin's media. There's big support coming from Moscow to support all these events happening in Belarus.

These campaigns are coordinated. We document them. You can read our work on our website. Just remember, this is a manufactured crisis and not a real one. Thank you.

>> RAYNA STAMBOLIYSKA: Yeah, thanks, Viktoras. A lot of concrete examples. It helps, indeed, shed light on this ongoing situation. I mean, the lifecycle ‑‑ sorry, the news cycle has passed a little, but the situation is still there.

Just a reminder to anyone who is willing to ask questions: if you are online, please do so in the chat function in Zoom. If you're on‑site, reach out to Lucien for transmitting the question. And I think the mics can be opened as well on‑site. Raise your hand or make yourself known if you have a comment or question.

I have a few questions of my own listening to you with those introductory remarks that are really rich in learnings. And in examples.

What I'm learning is that we're in a situation where we have gone beyond, you know, individual action. We are in a situation where each user, basically, can have a crowd‑activating action. We're much more talking about the sum of individual actions rather than a single one.

And what I heard a lot is the question of trust. And to me, this is, perhaps, one of the most important ones. Michael, you mentioned something about it. Mohamed also spoke about it. Viktoras also spoke about it. From different angles. You know, from who's speaking. How the message is transmitted. And so on and so forth. But what I'm thinking of in connection to cybersecurity is, for example, the questions of phishing. Of stealing identities online. Or luring, you know, individuals into giving their passwords, identifiers, and so on, against their will. And then usurping actually the addresses of legitimate users that the victims know and trust. So in that aspect, we're clearly having a coincidence, if you like, or a crossing of cybersecurity aspects and disinformation where trust basically makes the harm possible.

Because I can compromise a lot of people's accounts, but if I don't do anything with that, then we are still kind of in a safe place, if you like. Quote/unquote multiple times, right?

But, so, Michael, I would like you to kind of ‑‑ and the others, please jump in, chime in, when you feel like you have things to add. I'd like to ask you about this question of trust. How do we kind of try to handle this? Because very clearly, you know, and Viktoras was very explicit on that, fake news doesn't exist. It's not the proper word for qualifying anything. And you, like, hinted toward the fact that, you know, facts and interpretations are, increasingly, equally important. Regardless of, you know, how close they are to reality.

So how in that situation with the tools we have, or we still don't, how can we basically address the question of trust here? What do we need to do? Because, clearly, policy and speech, fake news and so on, that's not enough. That's not appropriate in the sense that it's not efficient. So what would be efficient for you from both a, well, a tool perspective, if you like, and the policy perspective.

>> MICHAEL ZINKANELL: I think that's a very relevant and definitely at the same time very difficult question to answer. Especially in a short time. And I hope that European stakeholders and the democratic governments around the world are asking themselves that same question as we speak.

Trust is something that needs time to be built and at the same time can be destroyed in seconds. So it's a very fragile concept. It goes also hand in hand with not only what do I believe in, but also what do I think is true? I believe trust and truth are something that we should think of as combined in that sense. And both aspects, which are, of course, related to our feelings, to our feelings of belonging, are targeted by disinformation. Systemically, as we have heard multiple times. Not randomly.

The example that was given by Mohamed of tackling a political opponent, spreading information that he's homosexual, taps into homophobia. We have seen similar accounts, as well, in Europe. In more traditional environments where religion plays a more important role than probably in central or Western European environments. Where the same tactics were deployed to spread disinformation that the COVID‑19 pandemic was also transmitted especially through people who are homosexual which is, of course, complete nonsense.

However, these feelings, these disinformation clusters, resonate with the people and are deliberately targeting special groups of people and their core values. And, therefore, you create this environment of fear. Therefore, you create an environment of distrust. To a point where you distrust everyone and everything. Where we then go into a field which also plays an important role, and I'm a little surprised we haven't mentioned it so far but it probably will come up in discussion: conspiracies. And conspiracy mythology, almost. I don't want to call them theories, because theories can be disproven. With these kinds of complete nonsense conspiracy tales and stories, that is not the case.

So looking at QAnon, it's a fascinating phenomenon to analyze how you get people to completely distrust everything that is under the umbrella of, let's say, logical sense. And go to a theory ‑‑ not theories, but stories and tales that are, yeah, as we all know, out of this world; they couldn't be better made in a science‑fiction movie.

I believe trust is a very important aspect to tackle. However, I don't have, now, the silver bullet. I don't believe there is one. To answer that question holistically. How can we build trust in a society that is constantly systemically targeted by disinformation, by spreading distrust. How can we build up that trust from within. It's very hard to answer, in my eyes.

>> RAYNA STAMBOLIYSKA: Oh, I was hoping you would provide us ‑‑ no, just kidding. Thank you. Thank you, Michael. This is ‑‑ I mean, it's super complicated, otherwise we'd have all the answers by now and we'd just go on with our days if there were a simple answer.

I have a few more questions. And there is one more in the chat. And I believe Lucien has someone in the room.

>> LUCIEN CASTEX: Yeah. Exactly, Rayna. We have some people interested in the room to interact. I have one to my right. I give you the floor. You press the button.

>> ERIC: Okay. Hi. My name is Eric. I'm from Poland. And I am an active Internet user. I'd like to ask about something which was already brought up by Mohamed. And I'm speaking about foreign influence. Because I believe both Rayna and Viktoras, hailing from eastern Europe, as I do, can agree that foreign influence is not something which is unique to those regions and developing countries. It is also present in, for example, the Eastern European post‑Soviet bloc.

I'd like to ask especially about the main problem of what is so hard about fighting this kind of disinformation. And I think it's because of the form. Fighting disinformation in press articles and politicians' statements and Facebook content is one thing. But the form which is most appealing to the young generation is memes, content in the form of videos on YouTube and other services. It is very hard to tackle this problem. Because every time, they're very often humorous in nature. So restricting this form of content is hard, because tackling it is very often seen as restricting freedom of speech. My question is, how can we actively fight against this soft form of disinformation? Not in press articles that can be fought off with fake news policies, which are applied by some countries. But this ‑‑ well, this newly very fluent, very unique form of spreading disinformation in memes, for example.

>> RAYNA STAMBOLIYSKA: Thank you. I'm seeing Viktoras is raising his hand. Mohamed, do you want to take this one? Because you were explicitly mentioned. If not, I'll pass to Viktoras.

>> MOHAMED EL DAHSHAN: I mean, first, Eric, thank you for your comment. I'll try to bring a small piece of the answer, if I may try.

So you're absolutely right that the experience of foreign actors is global. I think ‑‑ I mean, it was seen most globally when we were talking about interference in the American elections in 2016. Right? I think that's when it was really front‑page news.

So the foreign aspect is interesting. But I think your point is really interesting about sort of the format of the messaging. And that kind of ‑‑ like, that brings me back to another example. I think it was a Cold War example. I think it was mentioned by Brian Klaas, who's a political scientist here. I'm mentioning that reference because I may not be entirely correct on the details. But it was basically an instance of fake news where the Americans were running a radio station. And one of the ‑‑ which included a lot of fake information. But they were very keen on trying to make sure the information went as local as possible. To the extent that at one point in time, they would literally cross the border. Go into the neighboring village on the other side of the border. Just to get a copy of a phonebook. So they would be able to reference actual people who live in that area. That, to me, was fascinating. It's the hybridization of the messaging, which is including the incorrect information in the midst of something that is correct. It's that formatting, especially if you go to a localized level, that makes it all the more believable.

And, of course, that brings us into the discussion of how much hybrid targeting you can do with advertising and whatnot.

But, yes, the formatting of the messaging, going very local and merging the real with the untrue, is probably the most important combination that I can see. So that's my bit. Thank you.

>> RAYNA STAMBOLIYSKA: Viktoras, I know you had a comment as well. Go ahead.

>> VIKTORAS DAUKSAS: Yeah, you know, disinformation and information operations, it is a broad field. And there can be two perspectives. Either to complicate or to simplify.

And, I mean, it's impossible to solve all the problems at the same time. And we need to somehow prioritize, where do we place our actual efforts and resources? And where do we concentrate to kind of make the impact?

And I think that from one perspective, there are problems that are worth solving and there are others that are less worth solving.

So taking an example of the combination of cyber-attacks and disinformation. So the Ghostwriter campaign, which has been running for the last five years, had previously been attributed in a way to the Kremlin, but now it's attributed to an office in Minsk from which these operations have been run.

This is a very clear example that, clearly, is illegal. So it's not only illegitimate, it's also illegal. Because they hired a team who started to hack websites, media websites, in the Baltic States. They've been able to kind of hack a nuclear power plant organisation's website to post some fake content there. But what is really sophisticated, when we see the campaigns, is that they are very well integrated.

So they use hacking as a method. They use social engineering as a method. They use email spoofing, when emails are sent in another person's name, looking as if they come from that person's email address. And they combine those. So they map out the actors working in the Baltics. They implant the fake article in the media outlet. They then use email spoofing to add a link to that article in the spoofed email, send this out to institutions and media, and kind of attempt to create some kind of crisis.

So this happened multiple times in the Baltic States. And also in Poland as well. So this is a very clear influence operation with also possible attribution. This is a clearly illegal act. This kind of act should be forbidden. And it's just a political will question and also how to frame it.
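
As one small, concrete counter-measure to the email spoofing Viktoras describes, a receiving institution can check the Authentication-Results header that its own mail server adds for SPF, DKIM, or DMARC failures before trusting a message. The sketch below is purely illustrative, with a made-up message and domains; it is one basic check, not a complete anti-spoofing defence.

# Illustrative sketch: flag an inbound message whose sender authentication failed.
# The raw message and all domains are invented for the example.
from email import message_from_string

raw_message = """From: press@ministry.example
To: newsroom@media.example
Subject: Urgent statement for publication
Authentication-Results: mx.media.example; spf=fail smtp.mailfrom=ministry.example; dkim=fail; dmarc=fail

Please publish the attached article immediately.
"""

msg = message_from_string(raw_message)
# Collect all Authentication-Results headers added by the receiving server.
auth_results = " ".join(msg.get_all("Authentication-Results", [])).lower()
suspicious = any(f"{check}=fail" in auth_results for check in ("spf", "dkim", "dmarc"))
print("Treat as possible spoofed sender:", suspicious)  # True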

Other examples are manipulated social media accounts, or manipulated social media algorithms, or manipulated comments under the biggest international media outlets around the world. So manipulating public opinion in social media or in those comments is clearly, again, illegitimate and also illegal. So they are sending troll armies to kind of polarize the communities. Creating fake accounts. Managing them with the troll factories, or managing accounts in a more automated or semi‑automated way.

So this should actually be illegal. We don't want to create, to live in, some kind of virtually created world which looks like a kingdom of mirrors, where we can't understand what is actually happening.

So a basic question should be: there are things that are illegal when done offline, and still, online, a lot of those things are actually legal. So the question is, why do we treat digital space so differently from the offline world?

So even connecting those two things, we could start to solve these problems.

Even, you know, if someone were speaking outside to 2,000 people, we would never consider this to be a kind of private conversation. But if we take social media and there's a group of 10,000 or 100,000 people, the admins can change the settings and make the group private. And then organisations like ours are not allowed to kind of analyze the content.

So how can a group of hundreds of thousands of people online be a private group, when there are so many conflicts there?

So just kind of drawing these similarities between offline and online would actually help. And just targeting and kind of resolving the problems of manipulated comments, manipulated social media accounts. This is easy to do: create regulation to pressure big tech, so the fake accounts would stop existing. Thank you.

>> RAYNA STAMBOLIYSKA: Thanks, Viktoras. So I'm seeing questions coming in in addition to mine.

There's a question in the chat so I'm reading it. Are there any disinformation campaigns the speakers can recall or currently see unfold whose topics have been so politically charged that disinformation researchers decided to stay silent and self‑censored themselves for their own protection? Meaning does the field have preference for calling out the easy enough disinformation campaigns while shying away from tackling the hard ones? Any testimonials about this one?

I don't recall something like this. Perhaps things that have gotten less coverage in mainstream media than they should. But go ahead.

>> VIKTORAS DAUKSAS: Yeah. I can give a quick comment. I think there are a lot of cases that are not analyzed. And the basic thing is that we need to understand how many resources there are for this type of analysis.

Just to compare: so RT, Russia Today, they are kind of also big spreaders, owned directly by the Kremlin, spreading disinformation cases and having a yearly budget of more than 1 billion euros. And then in the European Union, the kind of yearly budget to counter disinformation, academically and as fact checkers, is 43 million.

So this is just an example of comparison to understand, like, how many disinformation cases and campaigns are not analyzed. Because the kind of resources are not dedicated. So I think we can find the answers there.

>> RAYNA STAMBOLIYSKA: Thank you. Michael, Mohamed? Any comment to this one? Specifically, no?

>> MICHAEL ZINKANELL: I agree. I would say that shying away is probably not the right term. I would definitely agree with Viktoras that it's simply that the resources are limited. I also am not aware of any disinformation campaigns where researchers have shied away due to their own protection, or self‑censored themselves. No.

>> RAYNA STAMBOLIYSKA: Yes. Same. It's more a question of means. Or you also have the balance to strike. Like, do you talk about this? It's at such a low, you know, level of noise, if you like, that if you talk about it, you basically push it to the fore and give it much more visibility than it would have had if nobody talked about it. So it's always very complex, juggling around with what do I do with it. Provided, you know, people operate with unlimited funds, which is definitely not the case.

There is another question on Zoom. Lucien, do you have a question from the room? I'm not forgetting you. Okay. Go ahead.

>> LUCIEN CASTEX: Yes, Rayna. I have one question on my left. Please, take the floor.

>> I'm a Deputy Attorney from the Government of Nepal. I want to ask ‑‑ is it necessary to address disinformation as a serious cyber threat related to cybersecurity?

>> RAYNA STAMBOLIYSKA: Can you repeat the last part?

>> Is it necessary to address disinformation as a serious cyber threat relating to cybersecurity?

>> RAYNA STAMBOLIYSKA: Thank you. Guys? Michael, Mohamed, who wants to take this one?

>> MICHAEL ZINKANELL: Just briefly, yes, I believe we should, simply because of the arguments that we have brought forth. That there are certain correlations in the motives and the means and the tactics of cybersecurity, cyber-attacks, and disinformation campaigns. I see some clear correlation in all these fields, tactics, motives, means. And, therefore, I believe we should have a more coherent and broader approach to tackling disinformation as a new area of cyber threats.

>> RAYNA STAMBOLIYSKA: Yeah, thank you. I have a question on Zoom. It says: I'm a Brazilian Internet Youth Ambassador. It's associated with freedom of expression. For instance, the Cyber Security Treaty that will start to be discussed in 2022 has received opposition from human and digital rights organisations. In this sense, how can we create a policy framework that at the same time provides security and reduces the risk of undermining fundamental rights?

>> VIKTORAS DAUKSAS: I kind of have a comment on this. I think we speak about disinformation on a very large scale. And the only way to create a smart policy is to have the data sets of previous disinformation cases. And kind of work out and challenge the suggested policies with those cases.

So this way, actually, an effective law could be presented.

On the other side, the Civil Society activists and others should present the other side's cases. Taking this kind of data‑driven approach to policy development, you can actually achieve a really effective result.

So I think that is the future way to go. And kind of the question is how to create the data sets. For example, we work a lot on that, to kind of develop these data sets. Have these examples. Then have very kind of targeted and deep discussions about those cases.

So I think we need to go from this political level and speaking broadly and kind of vaguely into having the specifics of cases and then think how the law should be actually implemented so it would be actually effective.

>> RAYNA STAMBOLIYSKA: Yeah. Thank you. That's an interesting question because it brings us to many other realms of activity, if you like. I mean, we've been, you know, as a community, generally, we've been advocating for the opening of data, public data, and, more recently, for the opening up of algorithms. And there are developments, especially in Europe, in the EU, but not only, on greater transparency of political advertisements. Especially if they're targeted at underage people or, more broadly, at people on social networks.

So, of course, transparency and accountability are, let's say, a precondition for stability. Be it in cyberspace or elsewhere. Which cannot be divided, because our lives are connected lives. Cyber stability is, you know, a whole‑of‑society challenge, since it is stability for all of us.

And I have a question, because there are two questions/comments in the chat that basically ask how do we handle, how do we mitigate, the danger of informational disorders, and it looks like both comments point toward education. And I'm adding in a question of my own, this time toward Mohamed. The specific one is: what is the business model of truth? Like, you know, we've been discussing how information disorders are harmful; that, of course, intent matters when deciding what to do with them; and that, of course, we cannot just push enter or whatever button and, poof, challenge solved.

Because, you know, in so many countries, foreign actors, foreign operators of harmful information, can have a license to operate because they are bringing in, you know, financial benefits to people who are impoverished or otherwise disadvantaged. And we are arguing that we need to push forward policies based on truth, on transparency, and on accountability.

So I'd like to ask, Mohamed, go ahead, because you've been a little bit silent in the past minutes. But, everyone, chime in. What is the business model of truth and transparency that can, you know, kind of overcome what we have now?

>> MOHAMED EL DAHSHAN: Well, allow me to say two things. I'll come to that. One thing before that, since a lot of the discussion was about misinformation and phishing, and then there were a lot of comments about digital literacy as a tool to fight that.

I'm sitting there and thinking of that on a micro level. Right. On the personal level. Honestly, the biggest source of misinformation in my life is my own stepson. I don't know about you, but WhatsApp family groups are a cesspool of COVID and politics. Everyone is a professional; everyone's an epidemiologist now. That aside, a lot of the people who are responsible for this are also the main targets of phishing and whatnot. Because these are the kind of people, those somewhat gullible people, that negative actors will prey on and get to click on the wrong links.

I don't know what the answer is. I mean, public education is very important, yes. But, again, I don't really imagine my own stepson spending 20 seconds learning something new. It's hard to think of that on a micro level.

More seriously, the tools we're creating to combat phishing, like two-factor authentication, are a nightmare for those exact same people. Right? Trying to get people to use their key, or enter a code from a text message every time they need to log into their Facebook account, is complicated for a lot of those users. Right?

I don't know what the solution is. I know that the tools we're implementing to combat phishing are far below what we need and can't be replicated as they are.

On the business model of truth, I mean, that's a really good question. The first thing I can think of is that we know the amount of energy expended to fight bullshit is a multiple of the amount spent to create that same bullshit. And that goes a little bit into the fact that spending time to debunk information is not necessarily something that's very lucrative.

However, what we could look at is not what is the benefit of truth, but what is the cost of lies. Right? What is the downside of that business model?

And it's something that we're not very clear on, because we haven't gotten to the point where we've systematized it, and that applies to the private sector and to governments. Right? So we know, for instance, in the private sector, there's some analysis; there are some companies trying to break down the different costs of lies. And it goes from the cost of verification and triage, to the cost of dealing with crises that arise from false positives and false negatives. I don't have time to go into detail there. Or how much it costs businesses that things are not as streamlined as they should be because of all the new information they need to deal with.

And something that I also find very fascinating, more generally speaking, is: what is the economic cost of manipulation, right? Because we know company shares have lost value because of fake information about their financial health. We know that about ten years ago oil prices went up by a full dollar because of a rumor: someone faked the identity of a Russian minister and said Bashar Al-Assad had been killed. That sent oil prices up. That means a whole bunch of countries around the world ended up paying that much more money, just because of that.

So we need to get better at understanding that. I don't know what the business of truth is. I think that we're kind of screwed on this particular approach, because it's very hard to generate, I guess, financial value from sharing correct information, which is what we assume to be the default.

However, if we flip that around and start thinking about what the cost of lies is, what money is being spent to mitigate it, and how we can take that money and actually reinvest it into proper regulation and whatnot, I think that might be the approach I would favor. To put it this way. Thank you very much.

>> RAYNA STAMBOLIYSKA: Thank you. You do have a talent for getting yourself out of my questions. (Laughing) No, I'm kidding.

>> MOHAMED EL DAHSHAN: I did my best.

>> RAYNA STAMBOLIYSKA: Yeah. I know. Thank you.

I'm looking ‑‑ we have three minutes before we are cut off. I know there are a lot of questions. Please, send them over and we'll try to kind of respond off‑session to the best of our abilities.

Guys, 30 seconds each for conclusion. A tweet, if you like. Go ahead, Michael.

>> MICHAEL ZINKANELL: Thank you very much. Challenging, at last. I'm happy that people have been asking questions about solutions because that's also something I wanted to wrap up with.

So, just briefly digging into that topic, I believe we have to divide this into long- and short-term solutions. Short-term solutions would certainly include taking down sites; I believe it's effective in not letting disinformation continue to spread online. But that can only be a short-term solution. Also, continuing our analysis.

On the long-term side, I believe that, as has been mentioned, media literacy, including it in schooling, in primary education, and in formal education, is certainly absolutely necessary. But also evaluating and reevaluating our policy strategies from a European and from a national point of view, simply because this is such a fluid environment that we constantly have to change our perception of what is at stake, what is possible in this field of new emerging technologies and new emerging security threats, and how to readapt our own policies and strategies to mitigate those challenges. Thank you.

>> RAYNA STAMBOLIYSKA: Thank you, Michael. Mohamed? A tweet.

>> MOHAMED EL DAHSHAN: A tweet. I mean, first, thank you for having me. I'm very grateful that I got to share those viewpoints.

Just the tweet would be that, for developing countries dealing with misinformation, the challenge is very different, and, potentially, the solutions are as well. And, therefore, when we're looking at this as a global question, it's important to remember that these regional and geographical distinctions matter, and we need to keep that in mind when we're developing solutions. Thank you.

>> RAYNA STAMBOLIYSKA: Thank you. Viktoras.

>> VIKTORAS DAUKSAS: Thank you for having me here. So I'll say analysis is the first step, because with analysis and a data-driven approach, we can actually have an impact. Methodology is the second thing, because without a common methodology and a common understanding, how can we work together and collaborate? It's just not possible.

And the long-term approach is media literacy and education. Just to give an example: last year, we did a bad news game. Just a game, 15 minutes, that helps, you know, people become more resilient to disinformation. And the game was played by 118,000 people. So these little actions, working together and collaborating, can actually bring results. Now we have started a course on civic resilience, so we can check that. It's just a 90-minute online course, and 1,000 students are already testing it in English.

So thank you, all. Be safe. Have a good evening.

>> RAYNA STAMBOLIYSKA: Thank you. Thank you so much for this. I'll be adding all the links that people have been sending or referring to into the final report. And time's up. Lucien, over to you for a final word. And I think we are off now.

>> LUCIEN CASTEX: Clearly. Thank you, Rayna. Time is up, anyway. Thank you, everyone, for joining us, the participants and the panelists. It was quite a great hybrid session. Thank you, all.