The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
>> MODERATOR: We are almost ready to start. We only miss one panelist. Okay, we also have the YouTube link if we want to share it. Julie is online, perfect. So could the tech support please give speaker features also to Julie Owono, so we can start. Okay. Excellent.
Bon jour, Julie. So we now -- I think we can now officially start this 2020 session of the dynamic coalition on platform responsibility.
I would ask my co-conspirator Nicolo Zingales to open the session with his introductory remarks, and then we can move to the very crowded panel we have.
>> CO-MODERATOR: Okay, so welcome everyone, again. It's a pleasure to have so many excellent speakers with us to discuss platform governance. We know platform regulation and platform governance, more generally, is coming, not only to Europe but many jurisdictions around the world. And we are delighted to have engaged with some key stakeholders on a project where we attempted to provide, as the title of this session states, a common vocabulary. So the reason that we engaged in this project with the coalition was precisely because we thought that there were conversations that were taking place in different environments.
Running in parallel to each other without necessarily having a consistent use of some of the key terms of this discussion. So the idea with this project was to bring expertise from different fields together and try to at least identify this range of terms. This doesn't mean that we are suggesting that the interpretation of a particular word should be X or Y; in this glossary we wanted to capture the picture of all the different competing and alternative definitions that are being offered.
Some people refer to proactive measures to fight copyright infringement as measures that are taken even when the law doesn't require you to do so: when you receive a notification, you take the content down regardless of whether this was a requirement. Another interpretation of proactive measures would refer to a platform looking for potentially infringing content without having been notified. A similar distinction exists for interoperability: being able to send packets to each other, versus being able to actually understand the meaning of each packet in the same nuanced way it is processed on the original platform where it was produced. Because of these and other examples, we thought it would be a helpful exercise, possibly for policy makers and regulators, to have a point of reference for future efforts and discussions. So today we are just going to discuss some of the issues.
The list of authors is extensive, but we hope that everyone, after hearing the session, will be interested. And then we will go to our shared document, where it is possible to make comments and improve the existing text. So with that, I thank you again, everyone, and pass the floor to Luca for a couple of words.
>> Yes, thank you very much for introducing the debate. Let me also reiterate what you just mentioned: the fundamental goal of the project that we present today is to contribute to a well-informed debate, and to include different readings of the same terms. Precisely because it is not the role of the IGF to be prescriptive or to take decisions, but to help decision makers have a smart, high-quality debate. This fits its mandate very well: to discuss public policy issues.
Issues aiming at regulating platforms around the world. We mapped and discussed many of them in last year's outcome document.
We know that a lot of them have very recently been tabled by legislators.
EU legislators are going to present their proposal in the coming days. The Brazilian congress has been debating the so-called fake news bill over the past months, and it has been harshly criticized, also for some misconceptions and some very broad terms. Many of us have been involved in these processes. The same term can have radically different conceptions in different jurisdictions.
This is not an issue when those conceptions are rooted in national law. It is a problem when the conceptions are rooted in misconceptions of technology, of how it works and of how it should be regulated efficiently. So that is precisely the goal of the glossary we are going to provide: it is for all kinds of stakeholders.
We have also referenced different views.
We have tried to produce as academically refined an output as possible. But of course, it still needs improvement, and that is the reason why we are seeking feedback, not only from academics but from a very diverse range of stakeholders.
That is the reason why, to open the debate today, we have invited two guest speakers who have not been directly working on the glossary but are working with this coalition on issues that are extremely interesting and have a global impact on how platforms are regulated. So, without further ado, one by one we will introduce our distinguished speakers. The first one, Julie Owono, is not only an Executive Director; she is also a member of the Oversight Board that has been recently launched by Facebook and will start working in the upcoming weeks, or months. We will know more about this. Let me give the floor to Julie, thanking her again for this and for her time.
>> Thank you very much, Luca and Nicolo, for the invitation. It's great to be here; hopefully we will be able to get back to normal business, or as close to normal as possible. Yes, the Oversight Board, whose first members were announced in May earlier this year, includes not only me but other very distinguished colleagues and professionals.
The board has actually started accepting cases. As a reminder, we really would like to see it as an institution that will make binding decisions on content on Facebook and Instagram. We have two ways to receive cases. On the one hand, Facebook is allowed to refer cases to us: cases that prove difficult for them in terms of their own community standards but also with regard to international law. That's a very important thing to mention. And we will also be able to review cases referred to us by users.
Users whose content has been taken down and who are not happy with the decision that has been made. In the near future, we will also take cases referred by users who are not happy to see certain content remain online. This feature is not available yet; it is something the Oversight Board is working on, and it involves technical issues that require some time. So this is for the principle. Now for the procedure itself: the board, which is composed of 20 members, has in a normal setting 90 days to make a decision. From the day we receive the case, the user and Facebook have some time to provide additional explanations. We are also able to request briefs from organizations, expert organizations in the particular fields that the particular case requires us to look at. And we will also be able to receive public comments on cases, so there will be an opening for comments beyond the user and Facebook who are concerned by the case. All this should allow us to make a decision in, normally, 90 days. But there are also exceptional reviews.
Particularly one that requires us to make a decision in 45 days, so half of the normal timeline. And our four co-chairs have also worked on what I think we call an expedited review process, which would require us to make a decision in seven days maximum. These are really for cases that require an imminent decision, given, for instance, the danger and threat that the content might pose in terms of real-world violence and other possibilities. What is important to mention, and I started touching upon this earlier, is that not only will the board review cases based on Facebook's values and community standards, which we honor and which have their own limitations; what's interesting is that we will also look at international law. There is a big discussion at the moment about whether international law is fit to protect these new rights in the first place. On the other hand, it is also a challenge for platforms themselves, who have been asked, or strongly advised, by many experts, including the former UN Special Rapporteur on freedom of expression, David Kaye, to apply the United Nations Guiding Principles on Business and Human Rights, which do not require but strongly invite companies to protect human rights and to provide remedies, or mitigating measures, whenever they infringe on these rights. So it is an interesting challenge also from that perspective. And yes, a challenge that the board is ready to take; we have been working behind the scenes since May.
First of all, to make sure we are all on the same page when it comes to international human rights law. For many colleagues it was not a specialty; colleagues who are really experts in constitutional law, for instance, might not have had as much familiarity with that body of law, so we have had many sessions on it, with the help of one of our co-chairs, Catalina Botero. We have also worked on the technicalities. We have a web platform that allows us to review the cases in a very secure manner. And we are also working on issues around privacy: the cases we see involve real persons, and we are required to make the necessary adjustments.
Not only to protect the privacy of the individual, but also to protect it from outside interferences, including government interference.
All this to say that, from my perspective, and that's probably the main reason I chose to join, we are really at a crossroads for platforms.
I like to tell them that focusing only on profits is no longer possible, and is itself a threat.
We are seeing issues around censorship and internet shutdowns. We are seeing a lot of governments who decide to shut down platforms because of the problems around content and the lack of responsibility from platforms.
It was interesting for me, at this very special moment, to take part in an initiative that will hopefully come up with interesting solutions that could be applied, of course, to Facebook and Instagram, and, why not, could inspire others and give an idea of how the principles and international human rights standards we all work on could be applied to such an evolving field, which is online content moderation. This is basically the big picture. I hope I'm still in time. Luca?
>> MODERATOR: Yeah, you are almost done with the time. I'm sorry for not really being a gentleman with this, but we will have to strictly enforce timing so everybody can speak, and so we also have time for questions. Sorry for the interruption, Julie. I see there is already a question for you. Let me give the floor to Lofred Madzou of the World Economic Forum. He has very interesting views on how artificial intelligence governance can be applied to platforms.
I don't want to give too much of a spoiler what he is going to say. Lofred, the floor is yours.
>> Good morning, good evening, everyone. Thank you for having me, Luca. I will try to keep it short and on point. A little background: I'm working for the Centre for the Fourth Industrial Revolution. Put simply, it's a group that coordinates multi-stakeholder collaboration on AI governance, and more explicitly, what we do is co-design governance frameworks around specific projects. I will give you an example: facial recognition. How do you ensure responsible use of facial recognition? And then we work with businesses, governments, civil society and academics, in a multi-stakeholder fashion.
The thing I want to really focus on today with you guys is this notion of trustworthy AI. As you know perfectly well, most of the platforms are AI driven, so AI really powers all their applications and products.
So the question becomes: how do we make sure that AI is trustworthy? Let me elaborate. There are so many frameworks out there, so I want to be really clear. What I mean by trustworthy is that the behavior of the systems is consistent with a set of, you know, expectations, or requirements, to be more specific. You see where I'm going: I think auditing of services and products is one of the means we can use to ensure that the systems are trustworthy. I think it's really important for two reasons. The first one is the sheer scale of AI products.
Think of Facebook. Some platforms have over one billion users. Regardless of the size or funding of the regulator, it will be really hard to match that scale. The question is how can you scale the enforcement mechanisms.
And that is assuming we already know what the right regulation is. Let me quickly elaborate on the requirements themselves. I will give you an example; there are four key steps. The first one is to define: what is the definition of risky use in that context? It has to be contextual, right? Say I'm looking at the use of facial recognition at airports. Here the use case seems to be quite straightforward, but it raises issues.
Let me give you an example. If your face is your means of access for a service and your face data is compromised, it's a big deal. If your password is compromised, you change it; if your face is compromised, you don't change your face. So the challenges are really different and the stakes are much higher. Put simply, we define the risks, including the governance of the data in that case. The second step is to assess performance and accuracy. Really important: how do you ensure the system is performant across demographics, regardless of race, gender and so on? And last but not least is deployment: the right signage at the airport, the right human in the loop, the right fallback system if the system is not working. So now, what is the parallel with online platforms? One of the defining challenges that we have is that, regardless of what platforms claim, it's really hard to verify what they are doing. Assuming they are doing the right thing, assuming they want to comply and there is no bad intent, and I think there is no bad intent: how do you, as a regulator or a trusted third party, double-check their claims by directly sending queries to the system and getting data out, to make sure it is effectively complying with whatever the benchmark might be, legal requirements or, as we see more and more, AI principles drafted by these companies, so organizational guidelines? That's what my work is really focusing on. I would say it's a promising area, not only because we are making tremendous progress around explainability but also around enforcement of existing regulation; many companies are filling up this space of AI risk and compliance from different angles and want to develop these tools. The thing I want to close on is that regardless of the rules we agree on, regardless of the jurisdiction you operate in, enforcement is key. That is something we sometimes overlook.
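The kind of black-box audit Lofred describes, a regulator or trusted third party sending queries to a system and checking, per demographic group, whether its behavior meets an agreed requirement, can be sketched in a few lines. This is purely a hypothetical illustration: `predict` stands in for whatever query interface a platform might expose, and all names, thresholds and data below are invented.

```python
def audit_by_query(predict, labelled_samples, min_accuracy=0.9):
    """Query the system and compute accuracy per demographic group."""
    stats = {}  # group -> (correct answers, total queries)
    for sample in labelled_samples:
        group = sample["group"]
        correct = predict(sample["input"]) == sample["label"]
        hits, total = stats.get(group, (0, 0))
        stats[group] = (hits + int(correct), total + 1)
    # Flag any group whose measured accuracy falls below the requirement.
    return {
        group: {"accuracy": hits / total, "compliant": hits / total >= min_accuracy}
        for group, (hits, total) in stats.items()
    }

# Toy audit set with ground-truth labels, split across two groups.
samples = [
    {"input": 1, "label": 1, "group": "A"},
    {"input": 2, "label": 1, "group": "A"},
    {"input": 3, "label": 0, "group": "B"},
    {"input": 4, "label": 1, "group": "B"},
]
# A trivial "system" that always answers 1: perfect on group A, 50% on B.
report = audit_by_query(lambda x: 1, samples)
```

The point of the sketch is that the auditor never needs to see inside the model: queries in, answers out, and a per-group compliance verdict against the agreed requirement.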
People spend an enormous amount of time figuring out what the right regulatory framework should be, but they don't pay attention to how you enforce it at scale. Again, the enforcement framework has to match the scale of the platforms.
That's what our work is really focusing on. Open for questions at the end but back to you, Luca. I hope it was clear.
>> MODERATOR: Yes.
Now this first part of the open debate will be moderated by Nicolo. I see there are already a couple of questions in the chat. So please, Nicolo, go ahead. You are muted.
>> CO-MODERATOR: Apologies. Yes. We already have a question for Julie here, from Mary. She asks: does the board's mandate include any work on advising on the community guidelines? Will its decisions be used to reshape the guidelines? Thank you.
>> Yes, thank you, Mary, for asking. It's important to specify that, in addition to content decisions, the board also has a mandate to make recommendations to Facebook, in two cases. First, within a decision, the board will have the ability to include a policy recommendation. It's not mandatory; it's upon the board's assessment of the need for such a recommendation. In other cases, Facebook will be able to refer questions to us, requesting recommendations on very specific pieces of policy. It could be, you know, Facebook's policy on --
They can refer that to us, and we can make recommendations, which are just that: recommendations.
Facebook does not have to apply those recommendations, but they do have to take them into consideration and report publicly on what they have done with the recommendations that we make.
>> MODERATOR: Thank you. Let me add another question using my moderator's privilege, directed to you again, Julie. What we have been trying to emphasize with this effort is that some concepts have different interpretations in different contexts. I would like to understand what balance the Oversight Board strikes between the values it recognizes as important for the platform and the interpretation of those values in different contexts. How does Facebook, or the Oversight Board, deal with that?
>> Yes. The first thing I must mention, which I did not say earlier, is that we only review cases that do not have legal implications.
So if there is a law that says how certain content must be treated, we cannot review that. To respond to your question: indeed, there is a need for context. There is also a need for overarching principles, which is what is missing now; that's why we are seeing so much inconsistency, with platforms, and particularly Facebook, applying standards inconsistently. That is the void we hope to fill: to provide those principles that will guide the platform and make it more consistent in how it applies its own community standards, and consistent with regard to international human rights standards. I think that's where the overarching principles will come from, and they will guide us as well.
>> MODERATOR: Brilliant, thank you. I see there's a comment, probably not a question, for Lofred. I would just mention one issue that has come up with regard to facial recognition: inclusiveness. Basically, the databases that have been used and, you know, the potential discriminatory effects they might have on certain communities. How can the work of the World Economic Forum in that regard help us find the right way to approach these systems, so that there is no disparate impact on communities that are in the minority and not as privileged? So I guess that is one question that connects to the comment that's been made. The other question I have for you, just so we get to the same number of questions as for Julie, is with regard to explainability. Do you believe we can achieve, again, some general common principles that could be used to explain AI? Or might it basically be up to the circumstances: in certain cases you may need more explanation depending on what type of users and targets you have. It would be interesting to see how the World Economic Forum approaches explainability principles.
>> These are very important questions. Let me start with the first one, regarding risk. One thing I will insist on, when it comes to facial recognition or any technology, is that risk manifests in context, which requires contextual expertise; obviously the users of the technology and the company procuring it are involved in that process. People tend to focus on the most visible risks regarding fairness, access to data, privacy infringement and so forth. What I would insist on is that there is a much wider variety of risks out there. I will give a concrete example: if you use facial recognition for boarding a plane or for finding a person of interest, the technology has very different outcomes. The ways things can go wrong are very, very different, and you will have different people around the room.
The idea is to bring the right people around the table to map the risks, to try to make sure we don't have any blind spots, something we would have missed, right. Human beings vary in what they perceive or prioritize, but if you create a space for open discussion and challenge, you get a better mapping of the risks. The second thing is about explainability. I think there is too much emphasis on this. What I will insist on is that if something goes wrong, I want to make sure we can be comfortable with the system, and being comfortable means being able to articulate what went wrong and why. This means I'm looking at the accountability mechanism as a whole and not necessarily at explainability. When I look at the behavior of a system, two aspects are important to us. The first is documentation: simply, how that system has been put together. What is its function, what is the purpose of the system, what tests were run, and so forth; in other words, how that system came about. That's the first point. Second, when the system is deployed in operation, keep track of its behavior: continuously audit it and check its impacts on different demographics, across the risk mapping we have done together. In doing these two things you are going to reduce or mitigate risks. The key point is that it's a learning system, so it's important to make sure that across time it remains consistent, or compliant, with our requirements.
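The second practice Lofred describes, continuously tracking a deployed system's behavior per demographic over time and flagging any drift below the agreed requirement, can be illustrated with a toy monitor. Everything here, the class name, the threshold, the period labels, is an invented sketch, not any real compliance tool.

```python
from collections import defaultdict

class ComplianceMonitor:
    """Track per-group accuracy of a deployed system over time periods."""

    def __init__(self, min_accuracy=0.85):
        self.min_accuracy = min_accuracy
        # period -> group -> [correct answers, total observations]
        self.log = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, period, group, correct):
        entry = self.log[period][group]
        entry[0] += int(correct)
        entry[1] += 1

    def alerts(self):
        """Return (period, group, accuracy) triples that breach the requirement."""
        out = []
        for period, groups in sorted(self.log.items()):
            for group, (hits, total) in sorted(groups.items()):
                accuracy = hits / total
                if accuracy < self.min_accuracy:
                    out.append((period, group, accuracy))
        return out

monitor = ComplianceMonitor(min_accuracy=0.85)
for correct in [True, True, True, True]:    # week 1, group A: fully accurate
    monitor.record("2020-W45", "A", correct)
for correct in [True, True, False, False]:  # week 2, group A: drifts to 50%
    monitor.record("2020-W46", "A", correct)
```

Because a learning system can change after deployment, the monitor's value is in the comparison across periods: a group that was compliant last week can silently stop being compliant this week, and the alert list makes that visible.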
>> MODERATOR: Thank you. Sorry to interrupt the discussion again, but I will be very strict with time so that everyone is able to provide input into the discussion. Now let's start the first presentation slot. Over to you, Nico.
>> CO-MODERATOR: Yes. Thank you, Luca. I think that was a perfect closing point: the need to monitor over time. Because this is precisely what we should be doing with the set of definitions we have given. We should go back after some time and make sure that they sufficiently reflect the discussions taking place. So now, let's move on to the second part of the discussion. The first one on our list is our colleague Ivar Hartmann, director at the Center for Technology and Society in Rio de Janeiro.
>> Thank you, Nico. Quickly, I would like to thank and congratulate you and Luca for editing the glossary and organizing this session. I was responsible for the entry on terrorist content, and I want to make three very quick points that I think underscore the relevance of the contribution offered by this glossary. The first is in terms of focus. When we talk about terrorism online, we used to think of cyber terrorism as attacks against infrastructure. In the very early days, 20 years ago, when people discussed cyber terrorism, that was the first thing that came to mind, the first type of threat we worried about. Nowadays the focus seems to have switched to terrorist content on online media, including platforms.
Media where narratives are developed, strengthened and spread with a global reach but a very precise target. This is a very powerful tool for cyber terrorism, so that's the first point, in terms of focus. The second point has to do with the concept of terrorist content, which obviously relates to what one means by cyber terrorism and cyber terrorist activities. Terrorist content is related to fear-inducing information, narratives and discourse, as well as extremist mobilization online. This makes it very difficult to distinguish terrorist content from hate speech, especially the more radical hate speech. It is a very important distinction, however, given that the punishment for terrorist content is much, much harsher than the punishment for hate speech; in some countries hate speech is not even punished. So this distinction, difficult as it is to make, is essential. The third point has to do with enforcement. Even for people it is very hard to separate terrorist content from hate speech, and especially for people with a legal education that's already a hard job. But for artificial intelligence this is an insurmountable task. And we know next to nothing about the accuracy of social media's automated moderation of terrorist content. We have no idea whether Facebook's system, or YouTube's system, is 90% precise in identifying terrorist content or merely 60% precise, in which case it would be almost as good as a coin toss. We just don't know. Terrorist content is the type of abusive content that governments have demanded platforms identify and remove the fastest, with the harshest punishments against platforms.
But at the same time, it's also one of the types of abusive content that is the hardest for automated mechanisms, for artificial intelligence, to correctly identify. So there's a very worrying paradox there. Those are the three points I wanted to make. Thank you for organizing this session and for the glossary.
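Ivar's point about precision can be made concrete with a toy calculation; the figures are invented purely for illustration and are not real platform metrics.

```python
def precision(true_positives, false_positives):
    """Of the items a classifier flags as terrorist content, the share
    that actually is terrorist content."""
    return true_positives / (true_positives + false_positives)

# A system that flags 1,000 posts, 900 of them correctly, is 90% precise.
high = precision(900, 100)
# One that gets only 600 of its 1,000 flags right is 60% precise --
# for a binary call, uncomfortably close to the 50% a coin toss would give.
low = precision(600, 400)
```

Without platforms disclosing these counts, as Ivar notes, outsiders have no way of knowing which of the two figures is closer to reality.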
>> CO-MODERATOR: Thank you, Ivar. That was a good dialogue with Lofred's presentation. Perfect. Now we are moving to another presentation, by Paddy Leerssen from the University of Amsterdam, a Ph.D. candidate working, I think, on exactly the topic in the title of our session: platform governance. I would like to hear, then: what are we talking about when we talk about platform governance?
>> That's a great question, Nicolo, and like all great questions, it doesn't have an easy answer. I had the privilege of defining governance for this exercise. It's a term we see used increasingly, I think, and one that is often associated with regulation; in some cases, some people even consider the concepts interchangeable. There is some disagreement about what precisely the distinction is, but the typical interpretation seems to follow Julia Black: governance is like regulation, but from a de-centered perspective. We don't focus as much on the state as the actor who carries out regulation; we also look more at private actors.
This is why governance has become such an important and often-used term in the context of the internet, where, you know, so much of our conduct is guided by private actors.
So that's the most important distinction, I think, between the two concepts. Additionally, there is a debate about the meaning of internet governance, and in that literature one of the important distinctions that is made, for example, is that governance should not only be understood as formal forms of regulation, that is to say rule making and enforcement by entities such as ICANN or the IETF, but also as less formalized routines and technical practices that end up guiding our conduct in practice.
This requires us, for example, to also look at technical things like interconnection between service providers if we really want to understand how the internet is governed. Those are the insights I bring to my understanding of platform governance. By the way, I'm not monitoring the chat, so please interrupt if there is anything there. That's what I bring to my definition of platform governance, where I think we can make a similar analysis. First of all, there is an important distinction made by Tarleton Gillespie: the term is two-sided. It can refer to governance by platforms, how they govern us in digital ecosystems, but also to how platforms themselves are governed by other actors.
I think that's a helpful distinction that is not often made, or not often enough, I should say. And here, based on our definition of governance, on the one hand we can see how platforms govern in more conventional terms of regulation, of rulemaking and enforcement: they draft terms of service as rules and they enforce them in practices we now know as content moderation. But a broader understanding that looks beyond these classical forms of regulation also emphasizes how platforms govern through technical architecture, through algorithms, APIs and the design of user interfaces, and how they interact and coordinate with users.
For example, the regular engagement platforms have with The New York Times, all these back channels; or, for example, how the Google News Initiative is subsidizing journalists.
All these things can be incorporated into an understanding of platform governance that is not captured by platform regulation. Moving towards the governance of platforms, you can make similar remarks. First of all, we have regulation of various kinds: public regulation, co-regulation and self-regulation. In addition, we can also look at how platforms are governed not just by the law and self-regulatory institutions but also by other forms of pressure that third parties can bring to bear. Think of how advertisers boycotted Facebook last year: that is not regulation, but it is a form of governance. And similar things could be said when we look at how users try to contest platform policies. In light of the time, I will leave it there. In short, it's a broad category, much broader than regulation, but its benefit is that it helps to grasp the complexities of how platform power is shaped in practice.
>> MODERATOR: Excellent. This was very useful, also with the examples you provided. It was reminiscent of the discussion in the introductory part; I'm sure you will agree. There's not much time, so I will just move quickly to the next presentation. We have Richard Wingfield from Global Partners Digital.
>> Thank you. I will take a slightly different approach from the previous two speakers and talk about the challenges that exist when trying to come up with definitions for a glossary such as this one. There are three in particular that I came across when looking at the various terms I was trying to provide definitions for, and I would be interested to hear from the other panelists whether theirs were similar or different. The first is that often we can't even agree on what the right term is for a particular phenomenon. Sometimes we might disagree on the definition, but sometimes the same term isn't even used by the same actors.
I don't want to speak for the other panelists, but we have numerous terms used by different actors.
A couple of the terms I was defining were interesting in that the terminology has changed over time. Take child sexual abuse: the past language was child pornography, and if you look at international instruments, the Optional Protocol and so on, the term child pornography is used there. That's generally no longer considered to be an appropriate term: the idea is that this is not pornography, there is no issue of consent for the people involved, and it is a form of child abuse. So do we use the best-practice term we use nowadays, child sexual abuse imagery, or a term that is well documented and has been well used for a number of years in international instruments, child pornography? That's a question I think is important to consider; policy makers may be more comfortable using terms that are internationally agreed rather than new ones. Another example is revenge pornography versus the non-consensual sharing of intimate images.
Well-intentioned people will use the term revenge pornography as a shorthand, compared to what many consider should be the term used. The second challenge: we might agree on the term, but there are lots of different definitions. Many legal terms are rooted in national law; take defamation. No one disagrees it's a good term when we are talking about someone harming another's reputation, but across the world the definitions aren't consistent. Some require an intention to harm another's reputation, others don't. Some say that if it's a truthful comment it isn't defamation. Some say there has to be economic harm caused, others say emotional harm is sufficient. So if we are trying to reach a common consensus, how do we deal with the fact that there are hundreds of well-established definitions which are not consistent with each other? Do we choose the one we like the most, or reflect the actual usage of the term in international legal frameworks? The final challenge is that there are some terms which have a particular meaning in the context of platform governance. An example would be transparency. This is a term well used within corporate social responsibility and business and human rights, broadly referring to openness about what companies are doing. But when we talk about transparency in the context of platform governance, we talk about details of the content or data shared by a platform; if you look at transparency reports by platforms, it's those kinds of things you will often see. Again, does that mean we should take a narrow approach to what transparency means in the context of this glossary and its audience? Or reflect that it is a more general term within the broader field of business and human rights, which of course is relevant to the issue of platform governance? And to what extent should we try to match those two potentially different approaches? I'm going to stop there. I hope I'm still on time, and I look forward to hearing from my fellow panelists.
>> MODERATOR: You are perfect timing. And this was great. It felt like a peer review from reviewer one. I think for all of us, it was very useful thinking about the way forward with this glossary. So thanks for that. Now we have Rossana Ducato from University of Aberdeen where she is an assistant professor and affiliate at UC Louvain.
>> I promise I won't be reviewer number two. I will talk about the methodology, in the way you suggested we use, and about why I think this glossary is relevant. I think the two most relevant problems in the debate about platform governance, but also about emerging technologies, are essentially these. First of all, the complexity of the object: we have to define the platform economy, artificial intelligence and so on. Second is the lack of mutual understanding between experts, or people with different backgrounds.
About the object of study, there is nothing to be done. Sometimes we simply have to recognize that this complexity exists and that we need legal, technical, and social expertise. But as for the specific problem, the lack of mutual understanding, that is something where we have to intervene. And to be honest, I think academic knowledge that stays siloed within its own system is not acceptable any more, if it ever was. So in order to foster mutual understanding, it's necessary to build, to find, a common frame of reference. And I used comparative law methodology not by accident. I think the kind of exercise we needed for the glossary in a way reminds me of crucial lessons I learned from my legal studies, in particular about the problem of legal translations.
In the sense that each technical language is strictly intertwined with its own domain and system of knowledge, science and culture, to learn a language really means to learn a culture. Therefore we cannot simply translate one term from one technical language to another. We have to explain the foundational mechanisms behind a particular concept, and it's necessary to provide context. This is essentially what we did, and what I tried to do in my entries for the glossary, where I was ultimately making decisions. First of all, I started from the phenomenon itself: what are the types of systems now in use? Then I looked at how they have been regulated so far, across sectors and jurisdictions. As Richard was suggesting, there are differences between legal domains about the same term. Another point to consider is what the current proposals are, if any. And of course, in many cases it does not suffice to say this is the new recommended system, so the exercise was to find the applicable law in the particular context. The glossary mentioned by Nicolo and Luca investigates the different meanings and applications behind each term. I would like to thank again the two editors of this initiative. I would like to stress this because sometimes we tend to forget we did this during a pandemic. So thanks to everyone, and thanks particularly to Luca and Nicolo. I will conclude with a critique and a proposal, starting from the dark side. My critique is about the name of the glossary. I think if someone opens a file called glossary and sees 230 pages, we are going to raise the rate of heart attacks among readers.
Seriously, though, I see this was the initial aim. Since the beginning it was clear we were going to write a glossary. But I think that during the writing, during the course of this initiative, it has become something richer. So I think it's more of a dictionary than a mere glossary. And then coming to the proposal: for a number of reasons, limitations probably, all my entries are too Eurocentric. A proposal, maybe for next year: in my opinion it would be fantastic to upgrade the entries with input coming from different legal systems. This is an idea, to continue working with this team. Thank you.
>> MODERATOR: Fantastic. Yes, I think that's an excellent proposal. That's indeed one of the reasons we are coming together: to have other people from different constituencies, different stakeholder groups, different jurisdictions becoming aware of this and hopefully contributing. We hope this session serves to stimulate that. Okay. I will now move to the last speaker in this series, Rolf Weber, professor at the University of Zurich. You have the floor.
>> Thanks, Luca and Nicolo, for organizing this and for doing the work on the glossary. My main interest in connection with platforms is directed towards regulatory models, probably very close to Chris, as well as the legitimacy of rules. Having listened to the presentation of Paddy, I'm of course not digging any more into the need for literacy when designing and assessing platform models. But I may state that legitimacy is obviously linked to regulation. And my contribution to the glossary mainly concerns self-regulation and co-regulation. The legitimacy of self-regulation is based on the fact that it leads to a needs-driven rule-setting process, and self-regulation can be quite responsive to changes in the environment. Finally, self-regulation is justified if its application leads to a higher efficiency than that provided by governmental laws, and non-compliance with private rules is less likely. If you look into the legal environment of platform regulation, we rarely find any multilateral treaties or global provisions applicable to platforms.
We do have a couple of national laws, but their scope is obviously restricted, and insofar, at least per se, self-regulatory mechanisms appear to have certain merits.
Indeed, so far we don't see too many standards, apart from efforts which have been taken by this coalition about two years ago, and therefore more is probably on the horizon. To give you an example from a segment of society which usually hasn't been so active in the past, I would like to draw your attention to the European Law Institute. A couple of law professors have published model rules for online platforms, and the purpose of these model rules is to provide concrete self-regulatory norms. The preparation has been done in view of the expected publication of the Digital Services Act by the European Commission, announced for the end of 2020, in the hope of impacting the contents of this package. But self-regulation alone doesn't suffice. We do have co-regulation as a model. In other words, if the level of regulation by private actors doesn't seem to be sufficient, a certain involvement of governmental agencies becomes desirable: the state legislator can set out general principles and leave their concretization, by way of specific rules, to the private parties. Thereby regulation can remain flexible and innovation-friendly. Co-regulation, in other words, is a regulatory model leading to actual regulation independent from the government, as long as the rules remain within the legislative framework. Codes of conduct prepared by private actors can eventually be acknowledged by governmental agencies, as has often been done in financial markets and also in media markets, and compliance with the respective private standards can be assessed. The theoretical work on the usefulness of co-regulatory standards remains to be followed up, as does the work on the glossary. Thank you very much.
>> MODERATOR: Thank you. Thank you, Rolf. That's another great discussion around regulation and co-regulation. And I'm reminded in the chat by Chris Marsden that it was not just Lawrence Lessig but, actually before him, the late Joel Reidenberg who wrote Lex Informatica. Okay, now I think we have a good range of perspectives there. There were a few questions. In the interest of time we will read them all and maybe you can respond. There's one directed to Ivar Hartmann: how should or may the task of fighting terrorism be solved, taking into account the need to gather terrorist language and vocabulary in different languages, and which languages would be used for filtering such terms in practice, in your use cases? And then two comments and questions. Bertrand says we should talk not just about governance of and by platforms but governance on platforms.
For example, administrators of groups on Facebook, according to the rights and responsibilities given by the platform itself. And I would add to that perhaps, and I guess this is directed to Paddy, that we should also take into account governance of platform workers, given we are talking about the work they do to moderate. So that's perhaps a further perspective to be included. Another point by Bertrand: what Richard is referring to is international normative convergence, the acceptability of certain types of conduct and the similarity of criteria and thresholds across jurisdictions.
And then finally, there's a final comment by -- Francis: does this approach to platform governance apply to all platforms, and not just collaborative platforms? And governance of and by must also be taken into account alongside governance on. I guess that's similar to Bertrand's point directed to Paddy. But please, anyone feel free to respond to any of these questions. We have only a couple of minutes for this discussion.
>> If I may, I think there was one addressed to me, and there was a good point raised. There are at least two methods we know for sure are being used by platforms for flagging and identifying terrorist content. Aside from crowd-sourced flagging, there are private moderators working, hired through a contractor. And in that case, we can only assume, but we don't know for sure, that there are enough private moderators who are natives of the country whose language is being processed, checked, and moderated for terrorist content. If Facebook, for instance, has the goal of identifying, flagging, and removing or censoring terrorist content in, let's say, Farsi, then we can only assume that they have enough private moderators who are fluent in Farsi. And furthermore, this is not only about the language; it's also about the culture, of course. In terms of A.I. and automated decision-making, I think the same concern is valid: you cannot have a tool or an algorithm, a model that was trained to identify terrorist content in English, then being applied to identify terrorist content in a different language. Or something which is equally as bad: you have a model that was trained to identify terrorist content within a specific country, and then, even if it's within the same language, you try to use that model to identify terrorist content in a different culture. We know that will not work. So part of the mission here, I think, is something we increasingly have to acknowledge: creating public training sets for these types of abusive content in multiple languages, training sets that have not succumbed to bias, for instance, that can then be used to publicly test the accuracy of the models that different platforms employ. Creating these data sets is a matter of public interest now.
>> MODERATOR: Sorry for the interruption. We really have to start with the second segment of presentations.
I apologize that we are running a little bit late. The first speaker of the second segment is Catalina Goanta.
>> Good evening from the Netherlands. Thank you, Luca and Nicolo, for your hard work convening this workshop and spearheading the idea of putting together this glossary, or dictionary; I think that's up for debate. The concept I would like to focus on is content monetization. In the early days of the internet, e-commerce was the dominating business model. Maybe some of you remember the unfortunate trend that emerged around the year 2000. It used to be called skinvertising: companies were encouraging people to get tattoos of their logos.
This is an example of how social media is becoming the home of social commerce in many new ways. In the past five years or so, a very wide assortment of social media users have started to make serious money off their internet presence, and we are talking really millions of dollars. These users started being called influencers because they amass armies of followers, whose attention is turned into views for the purposes of programs like Google's AdSense. This has in a way professionalized some of these users, turning them into, and now I would like to use a word that I know Chris very much likes, prosumers.
They may be considered consumers but could also be considered traders at the same time. There are a lot of models; I won't go into them right now, but you can find references not only in the glossary but in a book where she looks at consumer regulation and the disclosure of advertising. Maybe one note which I find very interesting: there is absolutely no platform right now, no social media platform, that allows users to report hidden advertising. There is no actionable content category that deals with hidden advertising, although there are mandatory rules in many jurisdictions that definitely mandate the disclosure of advertising. This is lacking in the content moderation discussion. The last comment I want to make on monetization for this panel: monetized content can be pretty much anything. Videos of people filming themselves eating, travel vlogs, gaming, like e-sports. You can monetize pretty much anything. The good side of this is that it opens a lot of opportunities for every single social media user. And this is also one of the reasons why more and more young people aspire to become influencers or content creators on these social media platforms.
However, because you can monetize anything, you can even make money making election-prediction videos, and this kind of content has been taken down by YouTube as recently as yesterday as misinformation or disinformation. We have a draft paper that Giovanni will share in the chat. This is showing itself to be a serious trend in the recent elections. In a recent webinar, Alex from the Stanford Internet Observatory mentioned that unlike in 2016, when during the elections the main source of disinformation might have been external foreign involvement, this year it has mainly been spread by influencers.
That's why we need to look from a regulatory perspective into the commercial interests behind their activities, because they warrant a lot of further insight. I will leave it at this, though we could spend ages just talking about monetization. Thank you, Luca and Nicolo.
>> MODERATOR: Thank you for the excellent points raised, and the questions for the final debate. Without further ado, let me give the floor to my friend, Chris Marsden from Sussex University.
>> I unmuted and then remuted myself. I guess I still have four minutes. I put up a link, a link to an example of common carriers. My contribution to the glossary is about common carriage, which has an extremely long history throughout tort law in the United Kingdom and the United States. It actually goes back to the pub. The pub, for those who don't know, is a place where people insult each other, speak violently, and engage in all kinds of conspiracies in an unguarded and illegal fashion without being recorded for posterity for the entire world. Apart from the last part, it's the internet. The last part does change things when it comes to the internet. The interesting thing about common carriage now is the historical debate about net neutrality, which, if you remember, is the idea that there is no discrimination by carriers between types of content on the internet, so they shouldn't be throttling Zoom. There is interesting empirical work about the ways in which the different carriers, mobile carriers, have been dealing with Zoom over the years. I think it's extraordinary that a company as small as Zoom has been able to survive the pandemic without being throttled as voice over IP has been. I should say net neutrality, which Luca and I have spent too much of our lives dealing with, is an area for debate as well. All carriers say they believe in net neutrality even though they have spent billions of Euros stopping it from being applied properly. So publicly everyone agrees, but privately everyone disagrees.
The common carriage debate is essentially a much more nuanced debate than realized by some of the policy makers engaged in this field. It doesn't say you can't have some kinds of discrimination. It says the discrimination needs to be clear, and between types, not between the individual packages being carried on the internet. So common carriage is not just about people going into pubs to drink but also about people crossing bridges, and it actually has a very long history in the carriage of goods by sea. The idea behind the way in which you can carry packages goes back, essentially, to carrying them by ship. It doesn't say you have to carry everything even if the ship will sink. It says you have to offer the same circumstances for one as for another. I would normally go on in great detail, but I won't comment on other things in terms of co-regulation and platform law and so on. Except to say that lockdown has just resumed in the U.K., where, by the way, the great controversy around lockdown is the closing of the pub: the closing of the common, as it were, carrier of alcoholics and other people who like to socialize in public. I've spent time thinking about the legal history of the internet. It's only 30 years old, but it's astonishing how people have a short-term view of it. "After the Digital Tornado", a book edited by Kevin Werbach, is a retrospective of policy on the internet. Look at the way it's strobing; it's beautiful. It's also about the way we go forward over the next ten years. And another book to share with you is this one, which is, you can see it there, on private regulatory enforcement in the European Union. There's a chapter on internet regulation, with just me poking over the top of it, which is by me. That's a couple of things to look forward to. I also posted in the chat a reference to a Georgetown technology law article looking at the concerns Paddy had, which I won't go over now, because, Luca, my four minutes is pretty much up, I'm sure.
Thanks to Luca and Nicolo for making it feel like we are all together even though we can't be this year. I look forward to being back with everybody, certainly next year or hopefully sooner.
>> MODERATOR: Thank you, Chris, for the nice presentation, and also for the nice wishes, of course. We all share the wish to be back together soon. Let me now give the floor to Giovanni De Gregorio, from the University of Milano-Bicocca.
>> Yes, thank you. Happy to be with everybody. It's been amazing work. It's still a draft; the idea is to keep amending it. I will focus on two contentious issues we addressed in the glossary, disinformation and content moderation. The two phenomena are pretty much connected, but what is interesting, I will say, is that disinformation is one of the main critical reasons why we need a glossary, because there is no way to define this phenomenon within a single framework. I think the debate on disinformation, even if we don't have time to go into it, is interesting for understanding how different it is, and how much we need this kind of exercise, a legal and policy exercise in the field of policy and technology. But what is important is that when we look at disinformation, no matter where you are in the world, regulating disinformation, addressing disinformation, means dealing with free speech. But at the same time, in the pandemic, it doesn't just concern the right to free speech. It concerns other kinds of human rights, consumer rights, but also clashing interests. Think about what has happened with disinformation around the world, where some conspiracy theories about 5G have led to attacks on telecom engineers. So what happens online can translate into offline harm. And when we look at disinformation around the globe, we see so many different approaches: task forces, regulation. The way we look at free speech and the balancing between different interests is so different around the world, as Catalina said before about the market. We can see how free speech is perceived in a different way, not just across the Atlantic but in other regimes.
Disinformation is critical because it spreads online. This is why content moderation exists.
Content moderation is a broad framework. It's not just about removal; it's about soft moderation, also banning and recommending, and about understanding what is behind the screen, not just what is removed. Because we can complain about what is removed, but if we don't know how content is organized, we can't really complain, because we don't know what happens behind the screen. Of course there are plenty of areas where self-regulation is very important. But at the same time, platforms are contributing to increasing private power and competing with public authority over fundamental rights. The question you should ask is how to find a balance between private authority and public authority in the information society. This is what content moderation shows us when we deal with platform regulation. This also leads us to understand the logic: not just content but also data. The difference between content and data is not so great; I would say it's very thin. When we look especially at the solutions, from a policy perspective, because the glossary also works on what kinds of policy could be implemented in this field, we see different approaches to content moderation. At the same time we can see increasing calls for transparency. I have said so many things, so thank you so much. I enjoyed this panel very much, and I am more than happy to contribute to this group in the coming months. Thank you.
>> MODERATOR: Thank you, Giovanni for packing in a lot of things into these five minutes.
A lot of elements for the debate, and really material for the discussion at the end. Let me quickly give the floor to Enguerrand Marique. Please, the floor is yours.
>> Thank you very much, Luca. I will try to keep it short because, in terms of methodology, I fully subscribe to most of what Richard and Ivar said earlier. I had two challenges with my glossary entries; I was in charge of marketplace and social network. My first challenge was to keep it simple, almost stupid, so that as many people as possible could get the idea without entering too much into the technicalities, nor being too superficial. So that was about keeping it short, which was difficult. Doing research, you can very quickly fall into rabbit holes where you start making so many distinctions between the models, or between the different kinds of social networks, that at the end of the day it's not relevant anymore, because you have just one instance of a platform in mind when trying to define what a social network is. We may have three or four social networks in mind, whereas there are many, many more social networks and media out there. So we need to stay as general as possible. But my fear is that if we stay too general, we aren't relevant any more. We need to find the adequate level of generality to examine. The second challenge I would like to share with you is that terms are so deceptive that you end up writing so many things that are contradictory, because the literature is not unanimous. I was in charge of the words sharing economy. There was a day I refused to write an entry on the sharing economy; I asked Luca and Nicolo about that. If we start defining these notions, it needs to be tied to the notion of marketplace, because the basic trajectories, the basic interactions, are the same as in marketplaces.
Yet there are some specific features which may deserve a specific paragraph. So there is a tension between the need to stay as general as possible and not getting into defining every word that exists in internet governance, otherwise at the end of the day we will completely get lost. I will stop there so we don't lose too much time.
>> MODERATOR: Thank you very much for your points and confessing your behavior in refusing to describe the [breaking up]
Of course, now last but not least we have Yasmin Curzi.
>> Thank you. I wrote the hate speech and pornography entries, both with contributions from our dear colleagues, Terry and sin cha co. All these topics have a much broader perspective, but we meant to focus on the gender standpoint. Cyber harassment, for instance, can involve offenses in general. We also describe the government white paper from 2009, which takes a broad approach where harmful content is understood as content related to sexual exploitation and abuse, intimidation, and others. In the hate speech entry, I talk about how hate speech is not a new phenomenon in our society, but the internet has enabled it to be carried out on a much broader scale. I also pointed out that the discussion shouldn't be centered on weighing free speech, but instead on how vulnerable groups such as women and minorities are powerless against violations.
Lastly, in the pornography entry I discussed the topic through a feminist lens, and also explained how platforms are dealing with the issue in different ways, from screening and filtering to excessive censoring. I think that's all I have to say. Thank you a lot, once again.
>> MODERATOR: Thank you very much for perfectly respecting the timing, so we finally still have time for an open debate. I would like to invite all the participants, attendees according to Zoom, to raise their hands in case they want to chip in and provide comments or ask questions; I will make sure they are allowed to talk and speak freely. If we have any attendees willing to provide comments or ask questions, please use the "raise hand" option on Zoom. If not, well, there's a nice comment in the chat about the need for a glossary for cyber law and internet law. That's perhaps a little over-ambitious, knowing how long it took to realize only this glossary/dictionary/encyclopedia of platform governance.
>> I see the attendees are quite shy; no one is raising their hand. Well, there is one question: might these terms be relevant also beyond platforms, for ISPs, etc.?
Of course. Of course. Many of the terms are used more broadly; one example in my mind is Chris's entry on common carriage, which is a perfect example of how terms may be used not only for platform regulation but also for internet law, or cyber law, or whatever you prefer to call it. Do we have maybe a final remark? Otherwise I will take advantage --
>> If you could share what you mentioned about the D.S.A.
>> MODERATOR: Yes, to share it in the chat. What we will do, and this is a public announcement, is download all the text of the chat and then share it on our list, so everyone can have all the links and everything. A lot of people have been sharing a lot of interesting resources in the chat, so I will make sure we download it and then share it with everyone on the list. As there are no other questions in the chat, I would like to, yes, I'm seeing Chris raising his hand. So please, Chris.
>> I was looking for the little emoticon but we don't have it in the Zoom chat. Just to say, I think one of the great things about the glossary is to get a common understanding. Take a basic example: the term ISP, of course, is meaningless in legal terms in most countries around the world. It's not a term we use in European Union or North American legislation either. So I think it's really important to have an understanding. If you like, that's almost the most common baseline understanding people have about internet platforms, and it tends to be meaningless. We use I.S.S.P. in Europe. But just to say, I think one of the really important elements we need to think about going forward for the DCPR more generally is how we can use that common understanding to explain and to translate to each other, for next year, what the smoke and mirrors in the United States around regulating Section 230 means, Section 230 being the provision on which we all pretend we base our own laws, even though we don't. And also to explain a little bit what the DSA and the DMA will mean next year, so that we understand whether there's an actual divergence taking place between the two, I suppose you might say, standard sectors and legal terms. But I think the glossary is probably perfectly designed for that debate. Because without the glossary we are speaking completely different languages to each other. I say ISSP, you say IAP, you say ECSP, and there are lots of other things that Luca knows; we have this private language full of acronyms that no one else, it seems, even an internet lawyer, understands. And I think it's absolutely essential. So, congratulations, guys, on the glossary; I think it's a really important contribution.
>> MODERATOR: Thank you very much, Chris. And actually, in an initial conversation I had by email with Rossana, she also suggested an excellent term, a newly coined name using the Greek word for a common language, which could be a very good choice, at least as a subtitle for the glossary/dictionary.
I see that participants are now less shy and we have three raised hands. The first one is Courtney Radsch; you already have speaker privileges, so please go ahead whenever you want.
>> Okay, thank you so much. Can you hear me? Great. Okay. So I just wanted to say, I think this is such a fantastic initiative and it was really interesting to be involved. A couple of thoughts as we conclude the session. One is, as was mentioned earlier, this is maybe more of an encyclopedia than a glossary in many senses, so thinking about what we call it will be important. And I think there's an opportunity to reach out to other groups, because having this common terminology is really only going to have an impact if we are all using it beyond just, for example, this dynamic coalition. So as the outreach and partnerships chair for GigaNet, I wanted to offer to think about how we can reach the internet governance community and how we can reach out to civil society groups, for example, and some of the advisory networks on, you know, the Christchurch Call or the GIFCT, or other things like that. So I think, you know, as this draft becomes more finalized, thinking about how we then take the steps to get this implemented as a common set of terminology will be an exciting opportunity.
>> MODERATOR: Thank you very much, Courtney, for this. And actually, yes, our initial purpose was not only to start sharing a PDF at the IGF but also to share it via the IGF website, allowing people to comment. Then the pandemic arrived and our plans and timeline were a little bit disrupted. But the goal is indeed now to start outreach to other groups that are dealing with this. We have started to share it and to seek comments, not only from people in this coalition but from a broad range of stakeholder groups that may be well beyond the IGF community. So for anyone interested in using this to do outreach and ask for feedback: as specified in the header of the document, please provide comments to the coalition directly, by email to me or Nico.
But we will centralize this until the end of the year and we will try to review it. Maybe we can also ask for the support of the IGF secretariat to make commenting possible directly on the platform. Unfortunately, that was not possible before, because of all the issues that we know, but we will make sure to have a sustainable approach to this as soon as possible.
I see Bertrand also has a raised hand. So, please, Bertrand, go ahead.
>> Yeah, thank you, Luca. It was a really great panel. Two quick points. One, I would be glad to have feedback, maybe from Paddy, on this addition of a third layer, governance on, alongside governance by and governance of, and whether this is a concept that gets traction, because there is increasingly a responsibility for the people who manage the hundreds of millions of groups that exist on such platforms as Facebook. And the second thing was an additional comment to what Chris was mentioning regarding the coordination, the compatibility, the concept of legal interoperability between, basically, the regulations that may come out of the U.S. in a Section 230 revision and the Digital Services Act.
One thing I am particularly concerned about is the terminology that is going to be used for the different layers in the stack of intermediaries, from the ones at the very top of the stack that have full capacity to influence the amplification of the content, to the ones that are much lower in the stack, the cloud players of this world. And whether a useful terminology could emerge that is more sophisticated than what is currently in the e-commerce directive, which is clearly not sufficient. How do you see this work being conducted?
>> Should I jump in on the first question?
>> MODERATOR: Yes, please, go ahead.
>> Yes, I'm glad I get to talk about this briefly, because I think it's a very interesting comment. It's not a phrase I've seen yet, so perhaps you may be the first to use it, in which case, congratulations. The way you slice this theoretically, you know, you can go both ways, right? You could also say that the power that moderators have is delegated to them by the platform, as a feature of the platform, and is therefore part of governance by the platform. But I think there are important ways in which users also have power independent of the platform, especially when they hold a high position as moderators, for example of subreddits or Facebook groups. If you want to draw attention to those dynamics, then I think "governance on platforms" could be a potentially useful metaphor for that. So I think that's a term that deserves further exploration, for sure. The way I approach it for this glossary project is as a literature review: how have people used a term until now? For that purpose, I would have to find usage in order to include it, or I could clarify that there are limitations. But, you know, hypothetically, if you wrote a blog post about this, then I could note that de La Chapelle has used "governance on platforms", and that would work quite well.
>> MODERATOR: Excellent. Just two quick comments on this. First, as I was stressing at the very beginning, we aim at consolidating a lot of different views.
Indeed, we still have time to add views, and what you were stressing, Bertrand, is a good point that should be added. I also shared in the chat, during this panel, another point that I think should be added on governance. It does not really translate into the English-language literature, but it is a very well developed concept in the French-language literature: the distinction between governance, on the one hand, as the set of processes that leads to regulation, and regulation, on the other hand, as the set of techniques and tools that allow the creation of equilibrium in a system that is not naturally in balance. So this distinction between governance as process and regulation as a set of tools and techniques may also be useful for the debate. And by coincidence I wrote a book on this, so I can also share it in the chat, if you are interested. Now, last but not least, we still have one intervention: Natalie Van Hamdun wanted to share something.
>> Hi, thanks, Luca, and thanks to everyone on this panel. I have a question that might have come up in your previous dynamic coalition papers, but I'm coming from a cyber security angle, where I have met Luca before as well. In the global conversation on cyber security and stability in cyberspace, we have developed norms at the U.N. level but also at the private sector level that have a lot of commonality, such that they have become, you know, universally applicable norms. I wonder where this process stands for platform governance: are there universally applicable principles of platform governance and content moderation? I'm not talking about illegal content. I think, as Julie from Facebook said herself, they also lack overarching principles, which makes the application sometimes, yeah, random. So I'm just wondering, among the few initiatives that exist (I've seen the Santa Clara Principles on content moderation and the safe networking principles), whether there is a commonality, and where this process stands. Sorry for such an elaborate question at the end of such a very interesting conversation.
>> MODERATOR: I would ask Nicolo to provide his reply on this, plus the final wrap-up remarks, so that we can then finalize this session, because we are already ten minutes past our final deadline. So please, Nico, provide your view on this. I will take advantage of my position as moderator to note that we have already elaborated recommendations and best practices that could be useful for platforms.
I know that Nico also has more views on this, so please, Nico, go ahead.
>> CO-MODERATOR: I think we are already out of time. That is an excellent question. I can refer to our output in 2017 on platform regulations, how platforms are regulated and how they regulate us, where we go through, you know, all the different normative references that are relevant in this space and try to suggest some way forward. I guess, you know, we can talk privately in more detail. Now I just wanted to thank everyone and also say that the comment about languages was a very useful one. I think, if we want to include stakeholders' views, we should be thinking of translating this at some point into some other languages, for example French, Portuguese and maybe Chinese, so we can reach a broad spectrum of stakeholders.
But yeah, when the right moment is, that is something that we should also ask ourselves. We should move forward with the comments and at some point decide that this is something that should be shared more widely with other communities, in other languages. But I think overall this was a fantastic exercise, and it's just the beginning. So thank you for being with us, and we will be together on this journey, at least until next year, when we will publish the final outcome. Thank you very much, everyone.
>> MODERATOR: Thank you, bye-bye.
>> Thanks, everyone. And especially Nico and Luca. Shout out.
>> Thanks so much, guys.