IGF 2021 – Day 0 – Event #107 INTERNET OF BEHAVIOR – NEW WAY OF THINKING?

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

(Video plays:)

>> We all live in a digital world. We all need it to be open and safe.

We all want to trust.

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> BOTH SPEAKERS: We are all united.

(End video.)

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Hello, everyone. Welcome to Internet of Behavior ‑ New Way of Thinking?

>> MACIEJ NIEZGODA: Hello. It's great to be part of the Internet Governance Forum 2021.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: My name is Dominika Kaczorowska‑Spychalska, Director of Mixer of Smart Technologies Centre, and co‑organizer of this seminar. And I'm co‑hosting it with Maciej Niezgoda.

>> MACIEJ NIEZGODA: Thank you. As Dominika said, I'm Maciej Niezgoda, Program Director for the Center for Ethics of Technology at Humanites Institute, the second institution co‑organizing this wonderful event, and we're really glad to be here and be part of this event.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Thank you, Maciej. Big thanks to our panelists and participants. We're very privileged to have excellent guests with us this evening. It's my great pleasure to welcome and introduce Zofia Dzik, impact investor, Founder and CEO of the Humanites Institute, Humans and Technology. Hello, Zofia.

>> ZOFIA DZIK: Hello, Dominika. Hello, everyone.

>> MACIEJ NIEZGODA: Hello. Our second panelist is Robert Kroplewski, Plenipotentiary of the Minister of Digital Affairs for Information Society, and expert at the Future Industry Platform, a State Treasury foundation. Hello, Robert.

>> ROBERT KROPLEWSKI: Good morning. Nice to be here. Thank you for this presentation.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. Our next fantastic expert is Katarzyna Paliwoda, Head of Emerging Markets Central & Eastern Europe at Meta, previously Facebook. Hello, Katarzyna.

>> KATARZYNA PALIWODA: Good evening. Great to be with you here.

>> MACIEJ NIEZGODA: And our other guest is Dr. Katarzyna Sanak, Assistant Professor at the Department of Marketing, Cracow University of Economics. Hello, Katarzyna. Great to have you here.

>> KATARZYNA SANAK: Hello. Thank you for having me here. Very nice to see you all.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: And our next expert is Kinga Stopczynska, Assistant Professor at the Department of Marketing, University of Lodz, and PR & Marketing Director at Marinex International. Hello, Kinga. Nice to see you.

>> KINGA STOPCZYNSKA: Hello. Good evening. Thank you for having me. It's a great privilege to be here with you.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Before we start, together with Maciej we have prepared a short introduction to the topic, because today we're talking about the Internet of Behavior and the increasing popularization of the Internet of Things, that systematic increase in the number of devices connected to the network, generating a huge amount of data. It's estimated the number of smart devices will reach around, wow, 31 billion globally in just four years. And, wow, the Internet of Behavior is a solution that combines the potential of IoT, related to the collection of data, with the achievements of behavioral psychology, allowing us to explain and understand that data.

And well, the potential of IoB is large. Behavioral data is collected by social networks, online stores, wearable technologies, and many, many others. Thanks to IoB, business has reliable knowledge about consumer attitudes resulting from our real behaviors. And on the other hand, customers receive information tailored to their decision-making processes and their experience. Wow. It's wonderful. But Maciej, what about the challenges?

>> MACIEJ NIEZGODA: Yeah. I would like to add a couple of things to what Dominika just said, and I'll focus on threats so that we can better grasp the multidimensional nature of IoB challenges. I'd like to point to some problematic features of the IoT itself, because IoB is to a certain extent part of IoT. First of all, I would like to mention ubiquity, or omnipresence, which simply means that all the internet-connected devices, smart devices, will be everywhere around us, both in public and private spaces.

This is the first feature.

Another thing is that the large-scale connection of things and people will generate enormous volumes of data, which will, or can, be used to increase the scale of surveillance capitalism, as Shoshana Zuboff puts it. The third thing is miniaturization, or invisibility. All the IoT devices will become smaller and smaller over time, and as a result they may escape audit or control. But there are also other characteristics I would like to say a couple of words about. IoT can be used for public good, for example, to steer people towards socially desirable behaviors, such as pro-ecological initiatives. But it can also be used to increase the engagement of users and consumers, and this in turn may increase their vulnerability, or exposure, to new marketing practices.

And that was a very quick overview of some of the possible problems we might have with IoT. I'm sure we will discuss them in further detail during our panel. So now, without further ado, I think it's time to start our discussion. Dominika, I hand the stage over to you. Go ahead.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Thank you very much. First we're going to have a discussion, and then about ten minutes for questions from the audience; you can write your questions in the chat at any time. And well, now let's start with a question to all participants. Everybody knows the different opinions that have arisen around IoB. In your opinion, is this the result of a misunderstanding of its potential? Or maybe it's a consequence of the unethical behavior, in the context of data and privacy, which we have seen in recent years. Robert, I will start with you. What's your opinion?

>> ROBERT KROPLEWSKI: Thank you for the questions, and thank you for asking me first. I didn't expect that. But that question puts on the table a big tension between the usage of the Internet of Behavior and the ethical challenges facing that kind of technology, and maybe it's worth saying, even that kind of business model.

It depends how we use that kind of business model and how we engage the Internet of Behavior with it. As we can observe with other technologies, technology itself is still neutral; the aim and the result depend on our usage as companies or as users. Of course, it's a mixture of the expectations around the Internet of Behavior and the fears of many users. If we look at the actual state of privacy, we of course have many failures. In that situation, around the topic of artificial intelligence, there have lately been very historic moments: organizations like UNESCO, for example, and the Council of Europe approved the first special mechanisms to empower human dignity. If we put human dignity at the center of the Internet of Behavior, everything could be managed on the beneficial side of that kind of methodology, that kind of business model. I still believe that the root, and even the aim, of this challenging business model lead to the Internet of Behavior. If we try to measure the behavior of people on the internet, we of course need to think about visual sources, body sources, mobility, and things like that. And if we can manage the conflicts between ethical values and the most specific business usages, then, in my opinion, we can finally engage that kind of business model more beneficially for people. Thank you very much.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Thank you. Zofia, what do you think about it?

>> ZOFIA DZIK: Dominika, as Robert was saying, technology is neither good nor bad; it's a question of what we do with it. If you ask whether this opinion is a result of misunderstanding, or has actually arisen from the misuse of data and unethical behavior, I could simply answer that it's pretty well earned, by giving at least two examples. First of all, in recent years we've seen a lot of cases where Internet of Things devices were used as a gate for hacking internal systems, and we could see pretty well that IoT was in a way the weakest link of ICT. This is the first element, to keep the answer short and give a chance for other opinions. The second thing, which you were mentioning, and Maciej too while referring to Shoshana Zuboff: IoT has actually created a lot of small gates for unethical data collection. It's not only a question of the many devices that look pretty innocent, like voice-controlled TVs, home assistants, or the voice assistant in the phone, but it goes as far as toys for children, with plenty of examples from even well-known, worldwide toy producers. And I think very few parents are aware, when their child is playing with an innocent-looking toy, that all the conversation going on around the toy in the house is recorded and kept somewhere in an unknown place in the world.

So I think it's great to have this discussion, and this is also the reason why we are talking so much about ethics in technology: the pandemic has even sped up the technological revolution, and we know that legislation and regulation usually don't follow the changes in innovation so quickly.

We're facing a challenge like the one the world faced when it had to design the road sign system. I think we are in a similar situation now, where we have to design a system of warning and prohibition signs, but also mandatory signs.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. That's very interesting. Katarzyna Paliwoda, what is your opinion about IoB?

>> KATARZYNA PALIWODA: Thank you so much. Referring to your question, I'm representing business here, obviously, so I'll speak from the business standpoint. Starting as a company, and coming back to what Zofia has just mentioned about creating standard rules for the internet: we do believe it's badly needed right now, exactly because the rules we have will eventually be 30 years old. Now, coming back to us as a business and our obligations, we definitely believe we need to improve people's awareness and understanding. That's an ongoing process. It's not something that will eventually end, because it's ongoing education we need to do for everyone, 3 billion plus people using our platforms.

Now, there are rules I think we should not forget, starting from the principle that it's people who should own their data. From that perspective, if I own my data, I can also access it, edit it, delete it, or move it somewhere else.

And then, eventually, companies like ours, and everyone else, should simply be held accountable for sticking to the rules. We need legislation to follow, global legislation, because the internet is global, obvious to say, to help us do that. We can obviously involve industry, but we need academics and regulators to actually help us get there, so that people, our consumers, our users, feel safer.

And then, over the last years, I think we've all learned our lesson on privacy overall. From the business standpoint, Robert mentioned business models; our business model is X, yeah? We sell X. From that perspective, we do believe that advertising and privacy can go together, and that this can actually boost the economy. Why? Because it opens up advertising that was once only available to the biggest companies, so that now others can effectively compete with them. We believe it goes further, but, again, it all comes back to the rules: people owning their data and being able to access it, and, for us as a company, the obligation to educate everyone on how to do that.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. Kinga, do you agree with it?

>> KINGA STOPCZYNSKA: Of course, as a marketer, I cannot look at this topic any other way than from the perspective of the client. On one hand, the customer is a person who loves to share his life, for example on social media. On the other hand, he feels his privacy is violated. That's the problem we have: we love to share information, but when we hear that a company is collecting data on us, we feel uncomfortable. That's why data management is a very delicate subject. Many times, obtaining data takes place almost without the client's conscious knowledge. This is the problem, and that is why it is so important for the future of the Internet of Behavior to build customer awareness in this area and educate customers. How can we help them? What can we do for them? How can we improve the life of the customer with the information we gather from the market?

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. We must regulate and educate. Katarzyna Sanak, what do you think about this education and Internet of Behavior?  

>> KATARZYNA SANAK: You probably all know The Market for 'Lemons', yes? It's about economics, but I think we can copy/paste it and analyze this situation on the same level. We have two actors: the company, which owns the data about the consumer, and the consumer. The consumer knows less and the business knows more. The consumer very often doesn't really understand the process, and doesn't even know that he's being tracked.

The thing is, there's no clear solution for that, and obviously it won't be easy. I see three steps, or three parallel tracks, that we should follow. The first one has already been mentioned: legislation. But as you also said, it's not that easy; usually innovation is faster, and law-making and all the regulation and legislation processes take longer, not only in Poland. So the first path is legislation. The second path is definitely transparency. But the thing is, for transparency the customer needs to be at least aware of what he or she should double-check. What I mean is that very often consumers are able to get this data even right now, to learn that they are being behaviorally tracked or targeted or whatever, but they simply don't know where to find it. Yes? That's the second thing. They agree to all of the agreements without reading them, because they're too long, and so on. So the path we should really start with is education. The problem with education is that in Poland, and I think it's the same in many other countries, because we work on this with my colleagues all over Europe, it starts late. Our customers are very often children aged 2 or 3 when they start to watch Peppa Pig on their iPads, right? They're being tracked very early on in their childhood. Parents don't really know how it happens, and companies don't share this knowledge with the parents or the children. Last year we had a project at the university, as part of its third mission, an educational program for children at primary school, with classes about fact-checking and classes about targeting, generally speaking, trying to explain what a target is, and so on. A funny thing happened. We were originally supposed to work with children only, aged 9 to 12, so we had classes for kids. Then the teachers asked us to have a class for teachers.
Then, finally, we were invited by a parents' group to have classes for parents. So the need is huge, but I don't really see any place for this education in formal programs. Right? I think this is pretty important, because we are not teaching these rules to the people who are just becoming consumers. So I definitely agree that the problem exists and that the technology is neutral, but there is a moral hazard. Obviously, if someone can use the data, they probably will, because, at the very end, marketing still works on profit margins and conversions, yes? So we don't really worry that much about social hazards. Thank you.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. Now I must ask you about the future: in which areas or market sectors, in the world and of course in Europe, will IoB develop, and are there any particular risks? I will start with you, Kinga.

>> KINGA STOPCZYNSKA: Knowledge is power, so knowledge about the customer will certainly be the strength of the brand. In the case of IoB, we're talking about the highest level of knowledge. It is difficult right now to imagine industries that will not want to build their competitive advantage on full information about customer behavior, at a level that was unattainable years ago. Years ago, when we looked at the customer, we just asked them questions. Right now, we know exactly how they behave. We know exactly when they return a call. We know whether they exercised or not. We know whether they are hungry or not. This knowledge is one of the most important assets for companies, and right now, from my point of view, the areas of IoB will for sure be the beauty sector, IT, and health. Health is the part of the market I deal with every day, working for a pharmaceutical company, and I know how important it is to know everything about the customers. I also know that when we have a good conversation and relationship with the customers, we can connect the Internet of Behavior with information straight from the customer. So for me, these are the areas where I think the Internet of Behavior will rise.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. Health, beauty, and IT. Professor Sanak, do you think IoB will develop faster in these areas?

>> KATARZYNA SANAK: I think all areas of daily living will be first. As human beings, we always want to discover ourselves better, and we hope that tracking our ADLs, activities of daily living, gives us this opportunity. So I would say this sector will be the first one. Health care, of course, yes; this is actually the bright side of the Internet of Behavior, as it can help a lot. Later, I think, the education sector is where the Internet of Behavior might have great significance, especially nowadays, when we see more and more people learning remotely. The experience of online learning is still very flat, so I would assume that tracking some sorts of data might help improve performance. So I would say probably these areas. But from a marketing perspective, all kinds of social commerce will make great use of the Internet of Behavior, simply because that's where the profits will come from. Yes? And that actually happens already. As I always try to explain to my students, we used to have no data in marketing; we had huge problems measuring the efficacy of campaigns in offline marketing, and so on. Right now, in online marketing, we have too much data. We don't really know how to analyze it. This will probably change soon, and we'll get better and better at working with big data. I think social commerce, and e-commerce in general, will be the first to benefit. Thank you.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Thank you. Katarzyna Paliwoda, do you agree about these sectors?

>> KATARZYNA PALIWODA: I agree with social commerce and health, and also well-being, from my own personal perspective. Coming back to the risks: simply, people not being aware; people might not know this data will be used somewhere.

But the question is how we will be doing it.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. Maciej, time for your question.

>> MACIEJ NIEZGODA: My question is regulation-related, for Robert. The European proposal for the AI Act introduces various risk categories, including unacceptable risk. Do you believe we can also expect IoB to be banned from certain applications in the future, as is now the case with AI?

>> ROBERT KROPLEWSKI: Thank you for the question. Simply answering it: that work is not being realized now. The European Commission, if we talk about the European Union, has started to work on a new privacy proposal, so from that perspective we could rather expect a new regulation. A regulation like the artificial intelligence proposal has not been developed yet for IoB, but I think the AI proposal will be a very good benchmark for any new regulation related to the Internet of Behavior. And it is a proper benchmark, in my opinion, because we are still talking about privacy and still talking about surveillance. For example, if I may jump from the European Union to the Council of Europe: mass surveillance is subject to a ban, and so is social scoring realized by private companies. If new examples of that kind appear, it is a very quick, short distance to producing a new proposal for the Internet of Behavior, in my opinion. And in my opinion it would be good to take up this kind of development even next year, because, as I said just now and have been saying for many years, there is the dopamine economy. The dopamine economy is closely related to the Internet of Behavior. If I may jump back to the previous question, I would like to add, because of the dopamine economy: the entertainment sector. Everything will be entertainment, and we will be the subjects of entertainment. And finally, the huge risk is that our autonomy is broken.

Against that kind of risk, regulation is very much needed.

But no regulation can help us if we don't develop the tools, some very practical mechanisms that can support people. To mention just one example: Poland joined the Global Partnership on Artificial Intelligence last December.

That partnership is now working on special privacy mechanisms, using privacy-enhancing technologies, and that is very promising, in my opinion. The idea is to give people a very practical tool to help them in the face of privacy risks and challenges.

And with that kind of tool, we can start the education very successfully.

Because applications, as we have observed for many years, don't give us results: we still quickly accept any rules and jump into the ocean of our activity. And that activity is subject to risk, because finally the question, and the problem, is: where is our knowledge? I agree, of course, that companies have big data and create knowledge; that's the asymmetry. But the key to the future is to bring the knowledge back to people, because knowledge is an environment for creativity, and without creativity we could lose our autonomy. This is my opinion.

>> MACIEJ NIEZGODA: Thank you very much. I think this dopamine economy, as a crucial part of the IoB, is very important and worth mentioning. Thanks a lot. Dominika, once again, your turn now.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Thank you, Maciej.

I don't want to lose my autonomy.

I must ask about the future once again. Could the Internet of Behavior make us all, both consumers and users as well as managers, hostages of solutions that undermine our human decisions and our human behavior, beyond which we will be unable to go? Zofia, is this a realistic scenario?

>> ZOFIA DZIK: I wish I could draw a very positive picture, because in general everybody who knows me knows I'm very optimistic. But first I'd like to state something perhaps obvious: we are experiencing a huge transformation of every part of our life, and for the first time a transformation that is also touching us as humans. A while ago this may have sounded more like science fiction; today we know it's not. We might actually be one of the last generations of the human as defined so far. To answer your question, I'd like to draw a wider picture. I see two vectors, or two forces, that drive this phenomenon, this ever-higher pace of enslaving people through new technology. One of them is, of course, the race between the big tech companies, including companies we might not even know, and they are in a race. The companies race, first of all, because they know the pandemic sped up the process, but also because they know that some changes are coming: regulatory changes, ethics discussions. They are racing, in my opinion, to lock an even bigger portion of the population in some kind of bubble, to surround them with solutions and applications, easy one-button solutions, I would say, that give them the promise of some benefits but actually lock them in, leaving no exit or way out. Not only in terms of cost, and we can see, for example in the strategy long pursued by Apple and others, how big the cost is of changing to another system, but also in terms of time, and of all the easiness that is so important for people today in this speeding-up world. So that's one factor.

Another factor, to which we have been drawing attention at the Humanites Institute for more than a decade, looking at the human being in a very complex and systemic way, across the whole ecosystem in which a human being is born, is educated, and consumes media and culture, is the gradual downgrade of the human being. This is a very important factor, because the human today is overstimulated. We know that the mental condition of human beings is very bad: one third of the population in the Western world is experiencing either depression or growing loneliness. And this downgrade is also visible in, let's say, the decreasing tendency of the IQ coefficient in recent years. It's a symptom showing that we are actually becoming dumber, if I can say so, and that we are more and more put in the position of inputs, managed from the outside by smarter and smarter algorithms that don't give us much space for self-reflection. We are no longer managing ourselves.

Through science, for years, we learned how to live longer, but actually no one was teaching us how to live.

And that's why, in recent years, this question of purpose, of the sense of life, has come up so often. Why is it so important, and how is it correlated? Because if we're talking about this weak condition of the human being, it means that the human today doesn't have much strength for leading. Sometimes the human is even becoming the weaker link, being cooked slowly, like the frog, and given a sometimes deceptive promise of a solution that will ease their life. And they go for it, because they don't have the internal power to judge it, or they are not even given the time to make the decision on their own. And they might not have the strength to take the decision on their own.

Putting these two elements together, what's the future? The positive element is the fact that we see some waking up of the conscious consumer, the more conscious user. We could see this waking up in the research that we do as an institute. We also see this waking up at the higher levels of managers of big tech companies. We could see, in recent years, in The Social Dilemma film, top managers coming from Google and other big tech companies having this waking-up moment of seeing the flip side of the coin. So I hope the future will actually come from two sides. One side is, of course, more responsible decision makers.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: And the second one?

>> ZOFIA DZIK: And the second one is the growing awareness of users. That's why at the Humanites Institute we take an approach combining both elements: working on the coherent leadership model, talking about integrity, but also looking at societies through their whole ecosystem, as I said.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Okay. This ecosystem is very important; I must agree with you. And this scenario, in which the Internet of Behavior becomes part of solutions that undermine our behavior: what do you think, Kinga?

>> KINGA STOPCZYNSKA: When you ask a person who deals with marketing strategies every day, you will hear one answer: it depends. I fully agree with Zofia that, as clients and as people, we are overstimulated. But on the other hand, we are searching for information, we are addicted to information, we want it, and we cannot live without it. Of course, there are some trends, a small part of clients who opt out and think they don't need any information; they're focused on themselves, they think they live happily, and that will be enough for them. But with the development of technology there will certainly be progress, and we cannot stop it. We can try, but we probably won't succeed. That's why, as clients, we are sometimes unaware of how this makes our life easier and allows us to make faster decisions and easier choices. Are we hostages? Yes, but sometimes the jail we are living in is not so obscure. It can be very comfortable. So it depends.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: It depends. Okay. Maciej, the floor is yours.

>> MACIEJ NIEZGODA: Thank you. The next question is about values, conflicting values, in fact. Are we going to be forced to choose between very important values in the case of the Internet of Behavior, namely privacy over security, or security over privacy? We'll start with Zofia.

>> ZOFIA DZIK: As I mentioned before, I think each of us, at every decision level, will be more and more exposed to everyday decisions and conflicts of values, and, I hope, to the self-reflection that comes with them. That's why I mentioned before that at Humanites, for example, we still believe it's not too late to wake up the human in the human, and we believe the human is something more than just reason, brain, instinct, and algorithm. In the coherent leadership model we showed, the human has physical, mental, emotional, and spiritual zones. That's why I think this is a very important discussion. Robert also mentioned it before: we are coming to the point where we have to raise it one level up, to the question of who will actually define what a human being is. From this perspective, I believe this is the first very important thing to mention.

Someone who wants to put security over privacy will always find a reason to justify it, yes? We can always paint the big picture that we will find one pickpocket in a big crowd of people. But the question is whether there's a need to hack dozens of thousands of people and collect their data to save us from that one little, let's say, crime. Answering your question: I believe that in some areas we shouldn't choose either/or. We should simply decide that we don't do something, that we don't take a certain decision. We know that in the European AI Act there are some areas labeled as unacceptable.

>> MACIEJ NIEZGODA: Great. Thank you for this answer. The same question goes to Robert Kroplewski.

>> ROBERT KROPLEWSKI: Thank you very much. It's a very important topic. But before I answer: in my opinion, that kind of dilemma doesn't exist. We need to choose both, privacy and security. But that's not the main layer. The main layer, as with the question related to artificial intelligence, is the question of transparency. If we want to develop and empower transparency, we first need to define what it is in relation to the Internet of Behavior, and how we can realize the right to information. That is a very constitutional value, right? The right to information. But finally, any conflicts of values, like transparency, privacy, and cybersecurity, can be shaped by the special compass we defined in the UNESCO recommendation on artificial intelligence. That compass consists of four elements: human dignity, well-being, no harm, and the autonomy of people.

And finally, those four meta values are super important for making decisions on the layer of cybersecurity, privacy, and so on. From that perspective, I still believe that the work of UNESCO, the European Commission, and the Council of Europe will be a benchmark for new privacy. Yes? Right up to the new worlds of the Metaverse, the Internet of Things, or our new bubble. I believe, also, that in that kind of framework we need to use diversity, as well as the elements of creativity and empowerment of people. But talking about values and conflicts, especially in the topic of the Internet of Behavior, we must not forget about SMEs and their access to that big data. Because that new model is invisible, as you mentioned at the beginning. And if, in this particular bubble of the Internet of Behavior, SMEs cannot run their own business models or observe their own activity in that model, it will be very risky for national economic productivity. Finally, the third point: competition is at risk of being broken. From that perspective, we also suggest realizing some new legislation. Thank you.

>> ZOFIA DZIK: Robert, if I could quickly, because you used some very nice words, and we have great values like well‑being or autonomy of the people, but the question is how to make them live, not just on paper. Yeah?

>> ROBERT KROPLEWSKI: Yeah. Yes. Okay. One sentence more. How to do it? First, conceptualize the organization. Second, define. Third, write it down on paper. And the fourth element is to execute, to enforce that framework. Finally, without the tools, without tools of measuring, without tools of enforcement, we will be like blind people, even blind governments and blind companies. Competition on access will, in my opinion, be the most important topic in the next year among companies and among the centers of economic systems: China, the United States, and also the European Union as a market.

>> ZOFIA DZIK: Yeah, but all this won't work without the integrity of the leaders and decision‑makers, yeah?

>> ROBERT KROPLEWSKI: Absolutely. Yeah.

>> MACIEJ NIEZGODA: Thank you, Zofia, and thank you, Robert. We have ten minutes left. It's the final question in this round, and it goes to everyone. The topic of the responsible development of technology is getting more and more popular nowadays, but at the same time the possibility of using consumer data to influence behavior also seems really tempting. Does the use of IoB in nudging tactics, which are in fact a sort of behavioral steering toward desired behaviors and desired results, carry a risk of reducing humans to mechanical reactions, and thus limiting human freedom? What do you think? Let's start with Katarzyna Paliwoda.

>> KATARZYNA PALIWODA: I think this is very much related to what we were talking about. There is a framework whose core values are integrity, safety, and privacy. Now, we as a company might have it; not all our users necessarily do. So we obviously need to work and do our best to keep, for example, our platform secure. Yeah? In our case, I think this year alone we're investing 5 billion, which is even more than our total revenue in the year we first entered the stock exchange. Then we have many people hired. We need both AI and a manual, call it, human component to make it work.

That's definitely not the end. We will never get it to be fully safe. Eventually, there will be newer and newer tactics to get around all of that, but the idea is simply to aim for perfection. From the overall perspective, it definitely comes down to the values, exactly what we discussed a second ago. From my perspective, there is no other way than building it around integrity, safety, and privacy.

>> MACIEJ NIEZGODA: Great. Thanks so much. The same question goes to Katarzyna Sanak.

>> KATARZYNA SANAK: Thank you. I fully agree with the company perspective; I would like to focus on the consumer perspective here. As you mentioned, as human beings we have one sure strategy, which is cognitive saving. Yes? Obviously, we like heuristics very much. We allow cognitive bias. We save our mental energy, and when we have a shortcut, we obviously use it. Yes? This is why we are definitely in the information bubble, and the problem with the bubble is that it can grow. It means that even if something goes wrong with government legislation, there will still be space for companies to train us as consumers, to make this bubble smaller or larger and give us some sort of dos and don'ts. Yes? While that definitely happens, I'm not quite sure we're facing a reduction to mechanical human reactions. Not at all. Take social proof, for example: all of marketing can be based on social proof, or on endorsement or influencer marketing. This is actually a refinement of human reactions. Yes? The thing is, they are being manipulated. So the problem is somewhere else, I would say. The line between persuasion and manipulation is very thin. Yes? In the end, it depends on the consumer's level of knowledge: whether they know that they are at some point being manipulated, or know the rules of the game they are playing with the company. And this brings me back to my first sentence: education here is the essence. Yes? Even if a company gives space to say that you are being targeted because you clicked this and this, people will still be too lazy to read it. Or they won't have enough mental space to read it, because they'll be chasing the product or watching something interesting; they will just filter out this information. So this is something we really need to carry on with.

And I think that if there is a common space for business, then we might not eliminate, but at least minimize the risk. I very much liked what Robert said before, that we are mostly talking about large companies. Yes? Large enterprises. But we cannot forget about small businesses. What happens, and what used to happen in marketing before, is that all the tools used by large corporations eventually get cheaper and are adopted by small business. Yes? And in small business, it is a bonanza at some point. They don't have any transparency departments; they don't even have knowledge about it. Yes? So we need to do something about that, and we should probably do it right now: develop and create clear and transparent rules and focus on privacy. Then they can be adapted for smaller enterprises.

>> MACIEJ NIEZGODA: Thank you so much. I've just got a message that we have only two minutes left. Before we wrap up our conversation, I'll ask you to answer this question, Kinga, but please put it in a nutshell, okay?

>> KINGA STOPCZYNSKA: I will keep it short. For me, being ethical is one of the best ways for a company to stay on the market. This is the base for me and for my company. But the moment we start to treat our customer, our client, as numbers, or just analytics, or just the journey they made, it means for me the end of the business. The customer is more. There are emotions we should value, emotions we should build strategies on. For me, personally, manipulation is the worst way of building a business that we can imagine. So I would love our future market to be ethical and to focus on the client, especially in these last two years, when the client really needs us and is reaching for information and for a relationship with the company. This is, for me, the future. I hope it will be like this.

>> MACIEJ NIEZGODA: Thank you so much. I'm afraid we've run out of time. I don't know, Dominika, if you agree, but I think we should finish our interesting discussion.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Yes. I'm afraid that we must finish.

>> ZOFIA DZIK: I will only say that it is worth waking up the human in the human.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: That is something very important. IoB: helpful, or a curse? You must decide for yourself. Thank you very much for your time.

>> MACIEJ NIEZGODA: Yeah. Just to try to summarize: I think lots of interesting issues were raised today. One of the things I find especially important is the interconnection between raising awareness, gaining new kinds of insight into how technology works, and proper education. It's absolutely crucial. But I would also like to add one thing: we should be very careful when implementing technologies that can influence different aspects of our lives. So I do believe that the approaches developed within the AI community will also disperse and reach the people who are responsible for developing new technologies. Saying that, thank you so much for being with us today. We would like to name, once again, and warmly thank our experts: Zofia Dzik, Robert Kroplewski, Katarzyna Paliwoda, Katarzyna Sanak, and Kinga Stopczynska. It was a pleasure to meet you today. Thank you very much. My name is Maciej Niezgoda. Dominika, the floor is yours to end this meeting.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Thank you very much. It was a great pleasure to meet with you. Thank you to our panelists and to the participants of our seminar. Thank you very much, and goodbye.

>> ROBERT KROPLEWSKI: Hopefully we are not finishing, but just starting.

>> DOMINIKA KACZOROWSKA-SPYCHALSKA: Yes. I hope so.

>> MACIEJ NIEZGODA: That's a good conclusion. And keep working. Thank you so much. Thank you to our viewers.

>> ZOFIA DZIK: Let's educate.

>> MACIEJ NIEZGODA: Definitely. See you soon. Thank you. Bye‑bye.

(Session ends at 12:12 PM Central Time.)