The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> MODERATOR: Good afternoon. Welcome everybody who watches us here in Katowice and those who watch us online.
Well, my name is Malgorzata Bonikowska. I represent the Center for International Relations from Warsaw, Poland.
And one of the topics of this panel is relevant to our work. We fight disinformation; we have been dealing with this subject for a while, and it will be one of the topics covered today. Actually, the topic is larger: we will be focusing on new technologies generally, especially the newest technologies like AI, big data, and all of the solutions based on these phenomena.
And we would like to think about how we can use them smartly: how, on one side, we can use them well and benefit from having them, and at the same time not be harmed by using them.
And the guests on this panel are, I hope, people who will help us to really understand where the balance between the two lies. Because one thing is certain: we will implement and use new technologies. The question is where the line lies, so that we are not taken over or dominated by new technologies, but dominate them instead.
So let me introduce our speakers today. Let me start with our guest from Lithuania, Dalia Bankauskaite. Hello, Dalia. Nice to have you here in Katowice, in Poland. Dalia has been dealing with the phenomenon of disinformation for some years. She does it in practice, but she also teaches how to fight disinformation at Vilnius University. She is also involved with the Swedish Defense University, and she works with several think tanks: for us in Warsaw, but also for the Center for European Policy Analysis in Washington.
Of course, our main topic is propaganda and the information war we seem to live in already. Thank you, Dalia, for being with us.
And then let me introduce starting from my right side is Dobromir Clas. (Speaking in non-English language.)
>> DOBROMIR CLAS: (Speaking in non-English language).
>> MODERATOR: Hello. Dobromir is our guest from Poland. He comes from the business sector. He represents the technology company Edge, which has tools and algorithms that can be used very smartly to fight disinformation, and not only that: to track all the narratives that can lead us, or mislead us.
And then we have with us Jaroslaw Pacewicz. (Speaking in non-English language).
>> JAROSLAW PACEWICZ: (Speaking in non-English language) Very good day.
>> MODERATOR: Very good day. Jaroslaw represents Siemens. I don't need to introduce Siemens to you; it is one of the oldest technological companies in industry. And Jaroslaw is responsible for, let me quote, digitalization and product and solution security. That is his domain.
So I hope you will help us to understand what huge companies like Siemens can do to, on one side, give clients, the public sector clients, what they want, and on the other side protect them as well.
And finally Edward Strasser. Hello, Edward. Edward came to us from Vienna despite the tough rigid measures taken in Vienna against traveling. So we are happy that you made it. He escaped, yes, to Poland, yes. Edward is a CEO and co-founder of the Innovation in Politics Institute in Vienna and represents the political angle. He is personally interested in the political scene but also as an institute they try to develop so-called dem-tech solutions which is actually technology implemented in service of democracy.
And the topic of the panel is exactly about that. It is not only about new solutions like AI or big tech in the public sector; it is also about, you know, if this is a challenge, if this is really a challenge, how to respond to it. Especially during a time like the pandemic. That is what we want to discuss first of all, because that is the time we are still in, and it makes us reflect on many, many things, also on new technologies.
So let me start with Dalia, maybe, because that is the topic of the panel. We used this phrase, this notion: disinformation is one of the challenges we think is really among the most important.
If so, Dalia, can you help us understand what disinformation really is?
>> DALIA BANKAUSKAITE: It is an old thing, as old as information itself. First of all, what is information? Information is everything. And it is an extremely, extremely useful thing, but at the same time an extremely dangerous, very sharp thing if it is mishandled or misused.
So when we talk about misinformation, it is just mistakes. You make a mistake because you didn't know, you were in a hurry, and you correct that mistake.
>> MODERATOR: So misinformation is when we forward something we found on the internet, but we don't know if it's --
>> DALIA BANKAUSKAITE: Or we say something. I might make misinformation because I might make a mistake not being aware.
When we talk about disinformation, it is already a very intentional misuse of information, with the purpose of getting a benefit: to channel you, to manipulate you into certain behavior, to get certain profits.
>> MODERATOR: And from your perspective, because you have been very much into this subject for several years, do you think we did the right thing to stress that this is one of the most important challenges today?
>> DALIA BANKAUSKAITE: It is very topical, and it will be extremely topical, especially when we are developing so fast and moving into the digital world, and everything is digitalized.
So this penetration of information, and the means of persuasion, and the speed, and the demand to react immediately, have become everyday life. That is our environment.
So disinformation will remain for the rest of our lives, with technology making it even more powerful.
>> MODERATOR: So actually, you know, our assumption is that disinformation is a phenomenon that has existed maybe since the beginning of human civilization.
>> DALIA BANKAUSKAITE: Since the Bible.
>> MODERATOR: Since the Bible, maybe. But the problem we face is that we never had such sophisticated technologies, and we never had the internet, a global network, able to spread these pieces of information, these pieces of disinformation, so fast and everywhere.
I would like to ask Jaroslaw now if you can from the technological point of view also help us to understand how we should see the speed of technology itself, you know.
And let's take Siemens. Siemens is one of the oldest companies. It's, if I am not wrong, 150 years old?
>> JAROSLAW PACEWICZ: Almost.
>> MODERATOR: Almost, so it is an old company which itself had to go through a lot of changes internally.
But you observed this phenomenon of this speed of technology that pushes also you to change. And you have to also think about, you know, how to protect yourself and how to protect your clients.
How would you describe this phenomenon of the technology growing?
>> JAROSLAW PACEWICZ: Well, I'm not too old, I mean not 150 years.
>> MODERATOR: You personally not, but technology is able to make us live over 150 years?
>> JAROSLAW PACEWICZ: Indeed, indeed. A couple of days ago I read an expression that I really liked: new technologies mean new perspectives.
So day by day we are faced with completely new challenges. And that is why, for example from the cybersecurity point of view, what Siemens is doing is trying to secure critical infrastructure. I mean the water supply, energy supply, those kinds of things.
On one hand internally, but also in supporting our customers, we always want to pass on the message that whatever we do cannot be a one-time task. We cannot say, hey, let's try today to protect our computer infrastructure and that's it, let's forget it. No. This is a continuous, always continuous process.
So there is no, let's say, vacation from this kind of work. We always have to think, analyze, verify, and then implement measures to protect our environment. And then optimize or, I don't know, modify, review. And the cycle goes around again.
>> MODERATOR: So if I understand you correctly, it's a vision of technology going absolutely everywhere. Even into those domains which are by nature quite old.
Like energy production; for many years we have had to think about how we produce energy.
>> JAROSLAW PACEWICZ: I think the big challenge today is that even 20 years ago, every single industry and the public services were growing somehow by themselves, with very rare links between them.
>> MODERATOR: So sectors. Sector by sector. Now everything is interconnected.
>> JAROSLAW PACEWICZ: Indeed. Right now, you're fully right: everything is connected; even the shop floors in the factories are connected to the internet. Which means there are huge new risks for critical infrastructure. For, I don't know, healthcare, whatever.
>> MODERATOR: So we have two dangers already mentioned. One is disinformation as a phenomenon, because there are more and more pieces of information, and among them could be many more pieces of disinformation, which are more and more difficult to recognize. And the second is what you mentioned, Jaroslaw: so many sectors are interconnected, because everything is connected to the internet. Human beings as well.
So I want to pass to Edward now. Edward, can you comment on what these two phenomena mean for our democratic systems? Because this is also one of the challenges we in the West face. It is not only the technology in isolation; it is technology with people, and people make democracy, because you cannot imagine democratic countries without people who vote, who have a point of view, who discuss. So how do you see this development? Where are we right now as far as the connection of these two is concerned?
>> EDWARD STRASSER: Before I go into this let me ask one question. This text over here, is this written by an AI or is there an actual human interface doing this?
>> MODERATOR: That's perfect, because what we are seeing is actually a written form of our discussion. So we can see that this technology, which I think started to be researched 20 or 30 years ago, has now become so good that whatever we say becomes visible in the form of text.
>> EDWARD STRASSER: So there's no person somewhere sitting and typing this in?
>> MODERATOR: It's algorithms.
>> EDWARD STRASSER: It's an AI. So I can blame the AI for what I'm going to say, okay, thanks.
>> MODERATOR: And there could be misunderstandings here, because it's maybe not as perfect as we would want. So mind what you are saying. Mind what you are saying.
>> EDWARD STRASSER: So it's a good thing.
Just to give you an example what is happening, especially during the last two years in the pandemic. Before that, it was strange for many people in this -- in the political world, for instance, to use technologies to bring citizens into the decision-making processes.
I mean, a couple of years ago Axelle Lemaire in France introduced an initiative to co-write a bill with all the citizens of France, and 23,000 citizens wrote a bill together on a kind of online Wikipedia-style page. That was five years ago.
And today after the pandemic when people had to stay at home and politicians tried to bring more citizens into the democratic processes, these democracy technologies have become kind of mainstream. Cities are doing participatory budgeting online. Parties are doing elections and ballots online.
And institutions and regions are doing some kinds of deliberation online. And they all use different technologies. That is a good thing, because democracy is getting stronger with it. On the other hand, today they are using technologies nobody knows, from companies nobody has ever heard of. Technologies that in some cases have not even been tested yet.
And this is also a bit of a threat, because there are no industry standards, and quality standards have yet to be developed. So I would say the positive thing is that we have a real, real rise in using technologies for strengthening democracy, especially since the beginning of the pandemic.
And on the other hand, well, many in the political sphere do not yet know what they are actually using there.
>> MODERATOR: And maybe one more challenge: if the technology is used by political parties, by politicians, especially in a democratic system, there is an extra danger, because it could mean manipulation. It could be that they don't manipulate, but somebody else does, because it can go through companies whom we don't know, or don't trust. So there is a whole list of questions concerning standards, concerning certification, concerning knowing better with whom we work.
Because you know very well that algorithms, or even applications, can lead to complete control. You can be controlled by methods you don't even expect, because you are not fully aware of them.
And one more thing: most of our political leaders are people over 40, over 50, or even older. Take the two presidents of the United States: Joe Biden is almost 80 years old, and Donald Trump is, I think, 77. Such people do not necessarily, and not always, really understand all of the dangers. Yes, please.
>> EDWARD STRASSER: We did a series of interviews with CEOs of dem-tech companies that provide these kinds of technologies and asked them what their most pressing concern was. And they said: our problem is that our clients do not understand what we are doing.
Because the political sphere has no clue what these technologies are actually about.
>> MODERATOR: Yes. Everybody remembers when Mark Zuckerberg was invited to Congress and explained the nature of Facebook to the Congressmen and Congresswomen.
There were so many misunderstandings, and it was visible that some of the politicians were not really fully aware.
So, Dobromir, the question to you now. Because you yourself came from, let's say, the social sciences into an IT company. And you offer solutions, you offer algorithms.
What could help us to fight against these phenomena? For example, against disinformation. Can you elaborate on what exactly you are offering?
>> DOBROMIR CLAS: Yeah, sure. I will gladly share some experiences.
You know, we have spent over two years working for the U.S. State Department, tracking disinformation in the region. And we also had some experience working in the United States for the LAPD during the presidential elections.
So basically, and maybe this will sound a bit disappointing, but speaking from that experience: there is no magical AI that could solve every issue of disinformation.
I mean, there are some tools, there are some algorithms, and I will shortly tell you how they work and what they can do.
>> MODERATOR: But at least there are already some tools based on AI that can fight, just as there are other tools that disinform, right?
>> DOBROMIR CLAS: Look, you should always look at this as a hybrid process where there is some kind of AI and there is also a human in the loop.
When you talk about social media, you can of course have algorithms that do early detection of narratives. That's very useful, and this is what we do. You have between 24 and 48 hours before a malign narrative spreads to the mainstream. And actually --
>> MODERATOR: So fast.
>> DOBROMIR CLAS: It is really, really fast. So you need to detect narratives at an early stage, before the news gets written, in order to hijack the narrative. This is one thing you can do based on algorithms: a segmentation of the different topics that people are talking about. And this is actually accessible.
But the interpretation of the narrative itself and whether it is dangerous or not, this is not something that AI will actually help you to find out.
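The early detection Dobromir describes, segmenting chatter into topics and spotting ones that suddenly accelerate, can be illustrated with a deliberately simple sketch. This is a toy heuristic, not the panelist's actual system: the stopword list, thresholds, and function names are all invented for the example, and a real pipeline would use proper topic modelling and, as he notes, a human in the loop for interpretation.

```python
from collections import Counter

# Toy sketch of early narrative detection: count which keywords posts
# mention in two consecutive time windows, then flag keywords whose
# volume jumped sharply. Thresholds are illustrative only.

STOPWORDS = {"the", "a", "is", "are", "to", "of", "and", "in"}

def keywords(text):
    """Lowercased content words of a post (stopwords removed)."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

def topic_counts(posts):
    """Number of posts mentioning each keyword (one count per post)."""
    counts = Counter()
    for post in posts:
        counts.update(set(keywords(post)))
    return counts

def emerging_topics(earlier_posts, later_posts, growth_factor=3, min_later=3):
    """Keywords whose post volume grew by growth_factor between windows
    and reached at least min_later posts in the later window."""
    before = topic_counts(earlier_posts)
    after = topic_counts(later_posts)
    flagged = []
    for topic, n_after in after.items():
        n_before = before.get(topic, 0)
        if n_after >= min_later and n_after >= growth_factor * max(n_before, 1):
            flagged.append(topic)
    return sorted(flagged)
```

A flagged keyword is only a candidate narrative; as the discussion goes on to say, deciding whether it is actually dangerous remains a human judgment.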
The second element of such an AI system, we can say, is actually finding what is false: you can find elements that are not natural in the traffic and are obviously connected with fake accounts or amplified traffic. And this is actually the essence of disinformation now.
Of course, we can tell a lot about narratives, about fake news and so on. But artificially building up the reach of fake news, or amplifying some elements of the narratives, this is something that really builds tensions, and it is the kind of spreading that is related to bad intentions.
>> MODERATOR: If I just may interrupt just to stress that when we say disinformation, it can be now in any domain, in any sector as far as the topic is concerned.
For example, the pandemic itself increased the number of disinformation narratives on the internet and in social media, such as anti-vaccine narratives and many conspiracy theories. So really, in every single domain where people are active, disinformation can be born.
And can you explain to us precisely how, using such tools based on AI, which many companies like yours are building, we can be one step ahead? Because some years ago there was no such possibility; we were always a little bit late, chasing those who started the disinformation.
Now we can really compete, right? We can really fight, using AI but also human beings. So can you demonstrate, or give us an example of how you do it?
>> DOBROMIR CLAS: Yes, of course. So the first thing is actually catching the narrative really early and finding out whether it is built up organically or is somehow created by boosted traffic or, you know --
>> MODERATOR: Inspire bots.
>> DOBROMIR CLAS: Inspire bots and so on. On the internet, on average, if we tackle some current social topic, whether it's LGBT or the pandemic and vaccines, we see that about 30-40% of the traffic involved is artificial. So you can see fake accounts, and you can check Facebook's statistics. You know, quarterly, Facebook reports over 1.5 billion fake accounts.
>> MODERATOR: Quarterly, 1.5 billion fake accounts.
>> DOBROMIR CLAS: Billion accounts. Whereas the official number of regular users is around two billion. So imagine the proportions.
So there are mechanisms that create those fake accounts and boost certain things. And right now we can really see that these things are happening, and before they reach the level of mass media, we can actually hijack them.
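One signal of the boosted traffic described here can be sketched very roughly: many distinct accounts posting the exact same text within a narrow time window, which organic sharing rarely produces. This is a toy illustration under invented assumptions (the input shape, thresholds, and function name are all hypothetical); real systems combine many such signals with account-level features.

```python
from collections import defaultdict

# Toy amplification detector: flag texts pushed by at least
# `min_accounts` distinct accounts within `max_span_seconds`.
# Threshold values are made up for the example.

def amplified_messages(posts, min_accounts=3, max_span_seconds=600):
    """posts: iterable of (account, timestamp_seconds, text) tuples.
    Returns the texts that look artificially amplified."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((account, ts))
    flagged = []
    for text, hits in by_text.items():
        accounts = {a for a, _ in hits}
        times = [t for _, t in hits]
        if len(accounts) >= min_accounts and max(times) - min(times) <= max_span_seconds:
            flagged.append(text)
    return flagged
```

Exact-duplicate matching is of course easy to evade with small rewordings, which is one reason the speakers stress that the technology is only a radar and humans still interpret the results.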
So when you are talking about entire disinformation systems, there is a sort of technology that serves as a radar, an early detection radar. This is what we do. But you also need processes that allow you to implement some sort of anti-disinformation strategy really fast, and that is really based on cooperation with, you know, NGOs.
>> MODERATOR: Think tanks, yes.
>> DOBROMIR CLAS: Think tanks and so on. Technology will not solve all of the issues; of course, it must also work at the level of PR and communication of companies, public institutions and so on.
So this is what we need to learn together, apart from the AI, which of course is very important: how to cooperate within a wider circle of different players to really do this kind of pushback when we detect and observe disinformation. The question is how to react within 24-48 hours, because after that it is going to be just too expensive to cope with. So many players need to be involved in this.
>> MODERATOR: So that is recommendation number one. In the second round of our discussion I want to go into the recommendations, especially for the public sector itself: for the politicians, for the governments who meet during this IGF event here in Katowice, on how to find this balance, right?
So your recommendation would be, first of all, to cooperate. The IT companies cannot do it alone, and they can't do it just as a service to a client. There also has to be an analytical part, qualitative and quantitative analysis, and communication strategies, to be able to find the right response to disinformation campaigns.
Dalia, if you can comment on that because you do this in the Center for International Relations with us, but also with other think tanks you had the experience.
How would you -- what could be your recommendation then?
>> DALIA BANKAUSKAITE: I would --
>> MODERATOR: Dalia, microphone.
>> DALIA BANKAUSKAITE: There is more than one point that should be kept in mind when we are talking about countering disinformation.
One is that any disinformation operation is an integral part of an influence operation, of an attack. It can be an attack on infrastructure combined with disinformation; it can be cyber attacks; but it always involves content, and that content is disinformation.
Another is that disinformation is not a single action. It is systemic; it is strategic communication for malign purposes and goals. It is systematic, and the narratives are constructed from structures, an architecture of different messages, that use and abuse the vulnerabilities of our society.
>> MODERATOR: So, in other words, it could be other States playing this game? It could also be companies trying to harm other companies?
>> DALIA BANKAUSKAITE: Depends on the goal. You might be manipulating, you might be persuading.
Because what is communication? It is conveying certain information and expecting that your target group will behave according to the information it receives and believes in.
So it's about really fighting, fighting for minds and hearts of the society.
>> MODERATOR: We say that we are in a state of war already; it is an information war we are in. And Europe has really become a battlefield of this war.
And, of course, it is not necessarily only States involved; it could also be disinformation about our --
>> DALIA BANKAUSKAITE: Different players, different --
>> MODERATOR: Like anti-vaccine is a very good example.
>> DALIA BANKAUSKAITE: Absolutely. And not everything is necessarily very purposeful. But if we talk about countering disinformation: if we are always in a countering position, we will never win, because it is a defensive position. Whether it is infrastructure or debunking, these are tools. Content is what matters.
AI, as we see, is extremely important, but still we are living beings, and our brains are the value. So the content, our identity, our strategic communication, our narratives, and we use this term positive narratives: we have to know our own story. And believe in it.
And then, what technology gives us is excellence. When we talk in military terms we have logistics; in the digital world we have information logistics, because we can see how different messages lock together and what can be a signal. But again, you have to have the brains to know what to do with it. So that is my recommendation.
>> MODERATOR: So instead of answering the threats and the narratives presented by somebody else, we should produce our own version.
>> DALIA BANKAUSKAITE: You have to believe the story.
>> MODERATOR: I would like to get back to Edward now, because democracy is itself a question mark. There are some countries who say, you know, that democracies during the pandemic didn't deliver.
That democracies are not really systems adapted to 21st-century challenges. You can imagine who says these kinds of things.
So how can we, democracies, really prove that we deliver? And how does dem-tech, the domain you are in, technology in the service of democracy, help to prove that democracy can work?
>> EDWARD STRASSER: A couple of questions following on what you said, Dalia.
This defensive democracy, I completely agree. It is more of a self-confident democracy that we need, a self-confident democracy where citizens and politicians and institutions together try new things to get ahead and not just to run behind completely.
And when we talk about a self-confident democracy that tries new things, that is courageous and creative in the political system, we need to educate political operatives, political professionals, to be able to develop new technologies together with companies and to use these technologies. We have seen that.
>> MODERATOR: But what kind of communication?
>> EDWARD STRASSER: An example, artificial intelligence, for instance.
The legislative bodies in many countries do not know what kind of challenges they will face in the legislative process with artificial intelligence in all fields of policies, in all fields.
And so what we as the institute are doing right now is educating members of parliament on how to rethink the process, and on what kinds of technologies in the field of artificial intelligence will change their work as members of parliament. And we are doing this with different countries.
>> MODERATOR: And they are willing to really be educated?
>> EDWARD STRASSER: Many understand that they are lagging behind with their knowledge; they desperately need it, and they also accept the offer if it is given in the right way.
>> MODERATOR: So again, education will be the answer. When we discuss these things concerning democracy, sooner or later we end up with education, because obviously we cannot have democracy without really educated citizens.
>> EDWARD STRASSER: There is a gap. Some are really working in these fields, but at the legislative level, plus at the regional and local level, most of these people elected by the citizens certainly need to get more information about how these technologies are used.
So there are already two camps: one in the executive branch of government that is very well educated and knows what to do, and the rest of the political field.
>> MODERATOR: Okay, that is the answer of somebody who deals with dem-tech, meaning spreading technology into democratic processes.
We politicians and political scientists also try to change democracy itself a little, because we feel that democracy has to be adapted to new times.
For example, not only to use technology in many new ways, but maybe also to think about engaging more people. From a technological point of view, democracy could be direct today; it could really engage everyone. But, of course, not everyone is always willing to be asked about everything.
So maybe we need new institutions, new processes. That is another level we have to think about, not a technological one.
>> DALIA BANKAUSKAITE: And we say that technology does not necessarily guarantee more democracy.
>> MODERATOR: Yes, exactly. And that is why I wanted to share with you our recommendation.
We, as the Center for International Relations, together with Dalia, are the operator in Poland and the Baltic countries of a project called Everyone, a new initiative born, by the way, in Vienna and in Germany as well, to spread the idea of adding to the existing charter of human rights, which is part of our Lisbon Treaty in the European Union, the EU's charter of rights.
Enlarge it. Add new articles, new rights. And, of course, it is very much connected with the 21st century, because most of these new articles concern the internet: for example, the right of people to decide what kind of knowledge about themselves will remain on the internet and what will be deleted by them, as owners of this information.
So this would be the right to decide about our own profile on the internet. Or another article, a very interesting one, and I would be very curious how you react to it: the right to be told the truth by politicians, by those who are at the top and are paid with public money from the public budget. From our own people's money, let's say, because the State doesn't have any money, as you know. It's our money. Citizens' money.
So the right to expect that these people, Presidents, Prime Ministers, Ministers, tell us the truth. If not, they could be prosecuted for misleading us, for disinforming us. That is, of course, an open question, and we will be very happy to discuss it further. But this could be one of the answers to the challenges we see.
Jaroslaw, to you the question would be a little similar, but from another point of view. You often serve the public sector, you have to implement technologies which are not always understood, and you also have to protect the clients.
What would be your recommendation, from your point of view, to be able to both win the contracts and also protect the clients?
>> JAROSLAW PACEWICZ: Right. I fully agree with what was said before: education is really important, because if we take the whole cybersecurity world into consideration, the weakest element is people.
We could have systems that are very sophisticated from a technology point of view, but if I leave a USB stick with malware or a virus in a parking lot, and somebody who is not educated picks it up and puts it into a corporate computer, nothing helps.
And the second thing I fully agree with is that the speed of technology development is much, much faster than legislation.
So that's why the way we try to convince customers is always to use the state of the art or, as some say, best practices. It means that we cannot find what to do in the legislation, but we can find it in, for example, norms or best practices: like ISO 27001 from the cybersecurity point of view, or IEC 62443 from the industrial point of view.
So together with the customers we try to go through the whole process, on one hand to educate them and to show them. The second thing is what we are doing internally. I mean, all of the internal processes in Siemens follow what we call, from the marketing point of view, cybersecurity by design.
So what we are trying to do, starting from the design step, is to prepare products and solutions that are the most secure. Of course not 100%, that's not possible. But this is what we --
>> MODERATOR: In your opinion, is it possible to really secure critical infrastructure such as energy, for example, which is absolutely key today? Because we don't need any conventional war.
Let's take the example of the Russian Army invading Ukraine, which is what we are discussing in the Polish media right now, and in the European media as well.
Russia doesn't need to do that. If they are so good at cyber attacks, they could harm us by, for example, switching off the power.
>> JAROSLAW PACEWICZ: Well, that's why, based on my experience, even in Poland the alertness level is really high. And what the critical infrastructure customers expect from Siemens is that we will implement best practices like ISO 27001 or others.
So when we provide them our products or services, they expect that we do this job based on state-of-the-art processes and procedures.
>> MODERATOR: So, in other words, this is an absolutely necessary element: if we want to implement sophisticated new technologies in critical infrastructure, clients have to be sure that the systems are safe, that they can really trust them to operate and be connected, for example, to the internet without danger. Otherwise it will not be developed this way, right?
>> JAROSLAW PACEWICZ: Yes, of course. I mean, frankly speaking, with digitalization more and more devices and people are connected to the network, and that means more risk on the cybersecurity side.
>> MODERATOR: Exactly. That means this connectivity will only grow. But that doesn't resolve the problem either.
We have, sir, the first question, I think, from the audience. If you could just introduce yourself. And I very much encourage all participants to add to the discussion with questions or comments, please.
>> AUDIENCE: Thank you very much. My name is Al Kapuls, from the Netherlands, working as an AI consultant with KPMG.
I was triggered by the argument that the weakest link in cybersecurity would be the human. But relationships, people, and how we act in life are based on trust as well. So we also tend to trust the other side. For instance, today in my goody bag I received a power bank, a power bank with a USB cable.
>> MODERATOR: Even given to the participants of this IGF summit, yes.
>> AUDIENCE: Yeah. And now I have to make a decision, okay. It's Poland. Hmm, okay, is that totally secure? I come from the Netherlands. That is one consideration.
>> MODERATOR: What is wrong with Poland? They are still in the European Union.
>> AUDIENCE: In democracy we can have a whole other discussion about that, but let's keep on topic.
>> MODERATOR: I think our government is working on this.
>> AUDIENCE: I'm not making this discussion right now.
>> MODERATOR: But I trust you.
>> AUDIENCE: Okay. But we received this. And I have to make a distinction, according to you and also according to my own knowledge, because I think that I am educated enough to make a distinction.
If we trust, for instance, this device now. On the other hand, if a company provides me with a USB stick, for instance my own company, I must hope it's safe.
But I don't know whether someone at the IT department, or maybe earlier the parcel guy or girl who delivered it to the company, has already put some malware or ransomware on the USB stick.
It's all about trust. Education is key in this, but it is a balance. I just wanted to stress that, and maybe we can start a debate about that as well.
But it's about trust. On the other hand, I worked at another company where the whole folder infrastructure was public to all employees. So I, as a curious employee of the company, was just clicking through the folders and reading everything. And then at one moment I was approached: you are not allowed to see this confidential information. But I said okay, I can see it, I'm an employee, I hope we trust each other.
And that also triggered a conversation within the company. But my point was trust and education; I think it is a balance.
>> MODERATOR: Thank you for this comment. Because I think it is a very important thing you raised.
Actually, the first thing we did, you know, I am Polish, so I have no problem being here, but we checked who produced this power bank. We checked that, because the producer can also be an issue if we don't trust all producers of such devices, right? Without naming them.
Okay. Does somebody want to comment on the comment? Jaroslaw?
>> JAROSLAW PACEWICZ: Well, I mean, I agree with you that we should trust each other.
I was not talking about, let's say, trust between employees. However, you probably know that it can sometimes happen that an angry employee who is leaving the company wants to do some damage to it, because he didn't agree with being fired or something like that.
When I was talking about the human being as the weakest point, I was thinking about criminals who use, for example, social engineering to convince us to take that stick and put it into a company computer, rather than your friend from the next desk saying, hey, I have a new movie, just check it out.
>> AUDIENCE: Yeah, but I could possibly trust my colleague sitting next to me, and maybe, yeah, he is the weakest link in this.
>> JAROSLAW PACEWICZ: It can be. That is why, for example, even at Siemens, in some departments that take care of cybersecurity, the computers do not have a USB port. So you cannot copy company documents or company files and take them with you outside of the office.
>> MODERATOR: He is also right, you know, because during the pandemic, which we are still having, working online became so natural. And we discovered that in many public institutions, because of the protection, we couldn't really connect. Certain tools, you know, were not allowed because the laptops were protected.
So the human element was to work around this problem: to be able to connect, people were using private equipment. They were using private laptops. They were using smartphones. So it is very much about your comment. Well, we have to trust each other.
But there are also levels of protection, which cannot be set too high, because otherwise we cannot really work. We cannot really be --
>> JAROSLAW PACEWICZ: I mean, the most critical thing, and this is what we always do at Siemens, for example, is to make a risk assessment.
So, for example, if a company allows you to look at every file on its servers, and it is not critical for them if that information is made public, that's totally fine.
>> AUDIENCE: Make a note, it was my former employer.
>> JAROSLAW PACEWICZ: Okay. And if, for example, at Siemens we are producing a blade for a gas turbine, and the precision has to be 10.0001 millimeters, and some criminals change the technical files so that the dimension is one single millimeter bigger, then it could cost people's lives, because this gas turbine will damage infrastructure and can sometimes kill people.
So that is why we always think about each single piece of information from three points of view. Confidentiality: what does it mean if I release it? Integrity: what if someone changes something within it? And last, availability: if I lose this information, am I still able to continue my business or not?
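The three-criteria assessment described here can be sketched as a simple scoring routine. This is only an illustration of the idea, not a documented Siemens process: the function name, the 0-3 impact scale and the classification thresholds are all assumptions made for the example.

```python
# Illustrative sketch of a CIA-triad risk assessment.
# Assumption: each criterion is rated on a 0 (no impact) to 3 (catastrophic) scale,
# and an asset is classified by its worst-case score across the three criteria.

def assess(confidentiality: int, integrity: int, availability: int) -> str:
    """Classify an information asset by its worst-case impact across the CIA triad."""
    worst = max(confidentiality, integrity, availability)
    if worst >= 3:
        return "critical"    # e.g. turbine design files: a tampered dimension can cost lives
    if worst == 2:
        return "restricted"
    return "public"          # fine even if leaked, changed, or lost

# Open folder contents: low impact on all three criteria.
print(assess(confidentiality=0, integrity=1, availability=1))  # -> public
# Gas-turbine blade tolerances: an undetected change is catastrophic.
print(assess(confidentiality=1, integrity=3, availability=2))  # -> critical
```

The point of the sketch is that the classification is driven by the worst of the three scores: the turbine file is rated critical not because it is secret, but because an integrity failure alone is catastrophic.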
>> MODERATOR: That's also a good recommendation. Dobromir then, and we have one more question or comment.
>> DOBROMIR CLAS: Yeah, I'd like to comment. I loved your comment, you know, because you just tackled this idea of trust.
And you asked a question can I trust this country that I'm coming to. Can I trust the company and my colleagues. And so on and so on.
And there is this very broad term, digital trust, as a framework. And, you know, I think it's one of the elements that provides a lot of recommendations for how we should really act right now in this technological reality.
Because we are very much in love with innovations. And most innovations are just acquired and implemented. Not so many companies, I think, do this balance testing, asking the question whether it's only good for short-term profitability, or whether it's good in the long term for humanity. So this is one thing.
And I'm coming back to the point where we started, talking about disinformation. The real problem of disinformation is actually the business model of the media. You know, because no one is really interested in getting rid of the bots and trolls and all that stuff we don't trust, because it works to the profitability of the social media, of the media itself, and so on.
You know, you have a situation where the digital advertising market grows by double digits every year, twenty-something percent. And if you look at the inventory that's available to consumers, it is not possible that it's growing that fast, because the population is stable, the number of mobile phones is stable, and so on.
And only because there is this growth that all of the players want to be part of, they are interested in keeping all this false traffic to their advantage.
And this is where digital trust sort of has its boundaries, right? So we could say we don't really trust this media, because their algorithms do not serve our public interest, let's say, but only serve themselves as an income process.
So it is exactly the same. You know, we can of course discuss big corporations; we had a lot of examples from the corporate sector, including, let's say, Volkswagen. You know, they were cheating on car emissions and so on, because they had a short-term interest in providing something that would result in a profitable business.
But this is something I would use as a very strong recommendation: to really implement a digital trust framework.
So require from every company and every player on the market to truly build a transparent framework for this balance test. So we know that there is Facebook. We know that there is this short-term interest in building revenue. But Facebook should provide, for instance --
>> MODERATOR: Meta now.
>> DOBROMIR CLAS: Or Meta, right. It should provide some information about how the algorithm works, and then do some balance testing: of course it's good for the company, but on the other hand, does it really provide something good for society, right?
>> MODERATOR: Let's put a dot here, because otherwise we will not have time for another comment. Please, let's just stop here, because we have another gentleman who is willing to comment as well. Okay.
>> AUDIENCE: If I were sitting on the other side, I would indeed also encourage you to make an impact assessment on all stakeholders involved.
So the company, the public. And if you do that in a multi-stakeholder way, then you can see whether the AI system would cause harm or not.
>> MODERATOR: Thank you for this recommendation. Please introduce yourself.
>> AUDIENCE: Thank you very much for this opportunity. Is it working? Okay. So my name is (?), and I'm an IT consultant.
I come from Pakistan, but I've worked worldwide. Right now I'm based in Germany. My concern about this whole topic is that we started talking about AI and its misinformation and miscommunication, all those aspects.
We know that there are vulnerabilities; then we moved to cybersecurity and threats. But the most important thing is how we bring in strategies, particularly from the policy makers' point of view, as you raised regarding human rights.
So I coined a term a few days ago: digital human rights. So maybe that would be the right caption for you. And it is not only about digital human rights. It is also about how we create awareness. For example, I can drive a car, but not everyone can drive a car.
And not everyone wants to drive a car. But they can still use a car. Just like we are using the internet, but we are not aware of how the cookies behave, or how the information gets into our newsfeed.
So that whole information filtration process is not easy for every human. The legislation, as you mentioned, pre-Bible, there is a whole history there: we always had ways of making laws. We made laws either on the basis of religion, or culture, or on the basis of need.
So now is the time to make digital laws and implement them, no matter whether governance is on a democratic basis or a regime basis. There should still be legislation, enforced or dictated, following standardization, and then awareness should be created for the common purpose.
>> MODERATOR: Thank you for that because it is exactly what is happening in the European Union and in the western world definitely.
>> AUDIENCE: Especially GDPR.
>> MODERATOR: Yes. But maybe, you know, Edward and Dalia, please, if you want to react on this comment.
>> DALIA BANKAUSKAITE: The short answer is yes, that is my reaction.
I'm fully on your side, and I'm saying yes. On the other hand, the reality is, as we know, that the internet happened to us. It is running in front of us, and we are chasing it.
So the legal basis, the rules that should be agreed upon, the rules of how we are going to navigate, behave and be accountable in the digital world, this is now in process.
For example, there is the Code of Practice initiated by the European Commission together with the social platforms and media companies. It is absolutely clear, it's a nice step forward. But it is not enough. It didn't really work; you cannot rely on it. So the codebook, the legal basis, is under drafting. It is extremely, extremely demanding.
More importantly, I would say, when I hear about infrastructure, the Siemens infrastructure examples, I immediately think about the energy sector infrastructure. And trust is extremely important. Rules and agreements as well. And agreeing upon what world we live in.
If it is the democratic world, it is based on trust, but also on transparency and accountability, on certain moral values, a moral compass. This is not naive talk; otherwise we get lost. So it's really important that the Member States trust each other.
Within companies, yes, it is extremely important. But at the same time, this has a lot to do with intelligence, with our national security services, which unfortunately even allies do not like to share.
>> MODERATOR: But I understand his point was also that even if we trust each other, certain rules of the game have to be put in place, the same way as with cars. You gave the example of cars: we have to have rules for the internet, how to drive on the internet, let's say, how to exist in that space.
And if you want to know more about that, the Center for International Relations has a project called Start to Think.info, and we propose a certain code, like the code of road signs that was implemented for the roads one day.
So now we propose a code for the internet, to be able to browse among the sites on the internet, to stress what is really important to remember, right?
One more question to Edward. Maybe he wants to react and then I will get back to you.
>> I really agree.
>> Just as the technology is agile and evolving every day, if we keep thinking about legislation the way it used to be, it will never work.
We also have to come up with a solution of agile legislation, maybe AI-based legislation, which will evolve continuously as the technology evolves.
>> MODERATOR: Thank you for this comment. We have another comment. Yes, please.
>> AUDIENCE: I'm from the Center for International Relations, and I just wanted to refer to the legislation.
You mentioned the Project of Everyone, and it is exactly about that. I just wanted to quote the article about artificial intelligence: it says that everyone has the right to know that the algorithms imposed on them are transparent, verifiable and fair, and that major decisions must be taken by a human being. So this is exactly related to the --
>> MODERATOR: This is very much about the fact that, at the end of the day, even if we use algorithms, even if we use AI, it can't be that the AI decides. The decision has to be given to humans. And AI has to be treated as a tool, right, as an instrument to help us, not to one day take control over us.
Dobromir, do you disagree?
>> DOBROMIR CLAS: I have some doubts about, you know, this capability of humans to make conscious decisions, because, you know, if you look at it from the perspective of using, let's say, social media, the social media say: but listen, you have everything written down in our regulations, you confirmed how your data will be processed.
And after, you know, a couple of years, people woke up in this strange reality, wondering how and why this model produced these kinds of results. You know, looking at disinformation, looking at tensions between different groups.
>> MODERATOR: So what would be your recommendation then?
>> DOBROMIR CLAS: I'm not sure if I would really -- I understand that people should be informed, and I believe in transparency in terms of access to information, whether we are contacted by, you know, another human or by AI, and so on.
But I really believe that in this world the average human, the average consumer, will never understand the technology. It is beyond comprehension.
So I believe more in the role of public institutions, to be honest. This is what I think. Public institutions should actually be securing consumers who are unable to make conscious decisions for themselves.
This is unfortunately the case in many areas of our lives, starting with technology and ending maybe with, I don't know, medical treatments: we are not able to make decisions for ourselves as consumers.
>> MODERATOR: Maybe it's not about, you know, knowing every single detail of the new technologies.
Maybe it's like with cars. The example was given that we don't need to repair the car, we don't even need to know what is inside the engine, but we have to know where we are going, and we have to know the rules of the roads we are driving on, right?
So on the internet, everything is still open. We have one more comment, the last one, because we are already over time. Please introduce yourself.
>> AUDIENCE: Yes, my name is Xavier Brandow. I work for Iamhere International, and we fight disinformation, misinformation and hate speech online.
We believe AI is great, it's fantastic, but it's far from being enough. And we think we should be careful about techno-optimism, because in the coming years, or probably decades, we will still need humans to fight misinformation.
And what we see most of the time in comment sections, because we go into comment sections to counter hate speech and misinformation, is that it is people who share it in good faith; they actually believe in it.
So we think the human factor is really important, because we need to be talking to these people, you know, human to human. And they expect a human to talk to them, you know, a fellow citizen. So I wanted to highlight this important aspect.
>> MODERATOR: Thank you. It is very important.
And then my final comment will be that what we've discussed here we will of course discuss many more times, because we are at the beginning of this process of finding the right approach to new technologies.
Because I think we have already moved past the phase when we were super enthusiastic. We are now very realistic: we definitely want to use new technologies, because they make our life easier, but at the same time we have to find the right ways to control them, and maybe control ourselves, in this world full of new technologies everywhere. Thank you for the discussion. Let me just once again say thank you very much to Dalia Bankauskaite, Lithuania; Jaroslaw Pacewicz, Siemens, Poland; Dobromir Clas, Polska Edge; and Edward Strasser, Innovation in Politics Institute, Vienna, Austria.
>> EDWARD STRASSER: My pleasure.
>> MODERATOR: Thank you all for watching.