IGF 2019 – Day 2 – Estrel Saal B – DC on Platform Responsibility

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Good morning, everyone. Welcome to the 2019 meeting of the IGF Dynamic Coalition on Platform Responsibility. It's a great pleasure to be here to celebrate the fifth anniversary with you guys.

I'm a professor of regulation and, together with my partner in crime, Nicolo Zingales, I have had the pleasure of co-chairing the coalition for five years. It's interesting for us to see that, despite the critique that one may rightfully make of the IGF being a talking shop and not having concrete outputs, we can actually demonstrate with concrete evidence that this is not completely true. The special issue that you may find there, available online thanks also to the publisher, which has agreed to keep it open access for one year, demonstrates that when people gather and analyze things together, they may produce concrete results: concrete research inputs and outputs and concrete suggestions.

Before we start and give the floor to our distinguished panelists and, of course, to Nico, who had the burden of organizing the session and therefore needs a lot of -- not a lot -- some time to provide some thoughts on the work we are doing, I want to demonstrate that the IGF is not just about talking. We started five years ago to analyze platforms' responsibility to respect fundamental rights. That's why we coined the term "platform responsibility," also utilized by other people, not always quoting us, but who cares. What matters is to progress in thinking.

And after some months of work, once we had understood that we needed a common understanding of how platforms could respect human rights, we drafted recommendations on terms of service and human rights in 2015. Those recommendations were also connected to a study on terms of service and human rights co-sponsored by the Council of Europe, and many of those recommendations may be seen in the Council of Europe recommendation on the roles and responsibilities of internet intermediaries. So, again, further evidence to support the claim that the IGF can actually do things to influence policy making.

Then, two years ago, we published a book on platform regulations. The name of the book is "Platform Regulations: How Platforms Are Regulated and How They Regulate Us," and the second part of the title is the most interesting, of course. What emerged from the research we did together is that platforms are acquiring the role of private regulators. They unilaterally define, in their terms of service, the conditions that regulate how users behave. They implement them through algorithmic tools or by defining the architecture of the platform. And they can also judge our conflicts, our disputes between users, according to the terms that they establish. They have a quasi-normative, quasi-executive, and quasi-judicial power. That is interesting to us, and that's what triggered the final step in our work -- not the final one, but the latest: what are the values created by these private regulators? We know public regulators are bound by constitutional law; they have to respect fundamental rights and respect competition. How are private regulators behaving? What are the values they are conveying and baking into their architectures, and how are they constructing value?

>> NICOLO ZINGALES: I'm Nicolo Zingales. This is a moment when we should recognize that we've come a long way. We managed to identify different notions of responsibility through our work. And at this point, I think what we asked ourselves is how we can make sure that, when regulation happens in the private sphere, it will be enlightened by the notion of value that all of us understand and appreciate as the one that should be guiding it. So, in doing that, we have tried to connect two notions of value that are particularly important. On the one hand, economic value: who is getting value out of the operations of the platforms, how much of this value is being created, and how much of it is extraction of value from other players that are somehow stuck in the ecosystem.

And, on the other hand, social values: values like democracy, the respect of fundamental rights, but even labor protection and environmental protection. So, how can we make sure that the operation of private entities fits in a framework where the pursuit of one value does not undermine the system of guarantees we have in place for the other, so that the social and economic dimensions are harmonious and not clashing against each other?

So, in this special issue, we tried to highlight the link between the two. There is often insufficient recognition of the economic spillovers with regard to platforms, and also by platforms themselves. I think we've seen this at play with the notion of responsibility in competition law: the greater the market power, the greater the responsibility that arises to make sure that competition is not distorted. So we started exploring what role this notion has to play with regard to human rights: we have regulation playing out also with respect to fundamental rights.

And I think if we don't emphasize this linkage, we fall into the law of unintended consequences. Quite often we impose some regulation that sets a minimal floor that everybody needs to respect; the bigger players are better suited to comply with the regulation, and this basically hinders the ability of the smaller players to compete on an equal footing.

I think in what we're going to hear today, we're going to analyze different forms of value. And for our discussion, it would be useful, maybe at the end, to reflect upon how we can make sure that the value created in the economic sphere is in line with, and harmoniously developed in conjunction with, our need to protect fundamental rights and other social values.

So, we have fundamental rights. We had some discussion about taxation, which is not going to be discussed today, but obviously there are a lot of issues also with regard to making sure that the allocation of benefits is distributed equitably in the economy, and taxation plays a special role in that regard. Platforms both tend to evade the traditional understanding of taxation based on physical presence, but are also a very powerful tool to make sure that users pay taxes on the activities carried out through the platform. That's a discussion we're not going to have today. We're going to start with a keynote speech by the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights. Then we'll have reactions from other panelists, whom we'll introduce as we go along.

>> Thank you, everyone. Thank you to the coalition for this invitation. I want to say that content moderation by platforms is perhaps the most controversial issue right now in the human rights field, in particular in the case of freedom of expression. Under international law, we agree that states have the principal obligation to respect human rights and to enforce and implement the recommendations of international standards and international bodies.

But in the last five or six years in particular, the special rapporteurs on freedom of expression -- the UN, the OSCE, my office, and the African Commission on Human and Peoples' Rights -- agree that human rights are affected by the power of the tech companies that control the platforms and the public space. And through the platforms, the terms and conditions and the decisions that the platforms have in fact adopted have now sharpened the scope of this field.

We recognize it's not only a responsibility of these companies. On the one hand, states take action, passing laws and decisions that press the platforms to make content decisions and take down content -- copyright, hate speech, and other categories, under different scopes. And on the other hand, the platforms take decisions under their own terms and conditions.

And civil society, and societies in general, call for the platforms to take decisions on some issues that affect rights, like violence, violence against women, and vulnerable groups. This is a very complex situation.

And, in fact, we have recently released a joint declaration on freedom of expression, and I want to share with all of you this latest decision. It is the twentieth anniversary joint declaration of the special rapporteurs, on the challenges to freedom of expression for the next ten years, in three chapters. The first is the creation of an enabling environment for freedom of expression. The second is building a free, open and inclusive internet, which covers many issues discussed here, such as net neutrality and ensuring access.

And finally, for the first time in this joint declaration, we identified private control of the digital environment as a threat to freedom of expression, and for the first time we directed specific recommendations at the platforms. Given that, as I said, the companies also have a duty to respect human rights under the UN Guiding Principles on Business and Human Rights, I will go through this set of recommendations very quickly. In order to protect against the unaccountable domination of the digital environment, we issued the following recommendations.

First, the creation of independent, multi-stakeholder oversight mechanisms to address these issues. Second, measures that address the ways in which the advertising-based business models of some digital technology companies create an environment that can also be used for the dissemination of misinformation and hateful expression. Third, the companies should be called to implement the UN Guiding Principles on Business and Human Rights, and legal solutions should allow for transparent content moderation decisions. Then, human rights sensitive solutions to disinformation, including the possibility of addressing the new phenomenon of polarization. Finally, and very important, attention to concentration and to practices amounting to abuse of a dominant market position -- the link between economic and human rights values.

And within the scope of this declaration, we also call for mechanisms to appeal content decisions, and in some way to generate jurisprudence about those decisions. Thank you so much.

[ Applause ]

>> MODERATOR: Thank you. That's a perfect introduction, also reminding us of the importance of procedural values: appeal and transparency. I think this ties in very well with the speech by Nic Suzor from Australia. We invited him because he published a book on moderation practices that looks at the values that inform content moderation, and this is a theme that will be explored. More broadly, I think this ties in with our question about what the values are. The path we would like to propose is: first, we understand what the values are, and those that have been listed by the special rapporteurs are a good ground to start this inquiry. Then, once we understand the values and whether they are created, obstructed or conveyed, we can think about how to shape regulation in a way that recognizes the production of value.

So, Nic Suzor, the floor to you.

>> NIC SUZOR: Thank you for the warm introduction and the opportunity to plug the book, "Lawless: The Secret Rules That Govern Our Digital Lives." I'm happy to share a PDF -- you can get it via my Twitter page. Thank you.

In all seriousness, I'm grateful to be here and to see such a full room of people who are interested in this question of how we operationalize values in the governance of the internet, particularly at the platform level.

I think what's really interesting about being here at the IGF, being here in Europe, is that it's clear we're at a moment of real opportunity for change in how we embed human values and due process safeguards in the way that decisions are made about content online. Clearly, states are much more willing to intervene than in the past. There are a lot of new laws with new content guidelines and new structures that actually require platforms to take a more careful role, and these need to be designed around human rights values. Platforms themselves are facing a moment of crisis, a crisis of legitimacy: the way they have governed their networks in the past is often no longer considered sufficient to account for the real impact that they have on fundamental human rights. That means that platforms are more willing than they have been before to enter into the discussion of how they might embed new procedural safeguards and substantive protections for rights in their networks.

As for civil society, we're seeing civil society become a lot more organized in the way that we develop and articulate the values we want to see, and in trying to take the next step of figuring out how we put that into practice. So, the fundamental argument of my book and my talk today is that there is an opportunity here to institutionalize: to develop real safeguards for how platforms deal with the expression of their users. That's fundamentally important. It's a time now when we can reflect on the progress and think together as a multi-stakeholder community about how we might move forward.

A couple of key points here. First, platforms are going to have a key role in governing online content. Regardless of the laws we create, there is always going to be a zone of discretion in which platforms are free to create and enforce their own terms of service and safeguards. And when interpreting public rules, there is going to be discretion, and obligations on platforms to apply those democratically developed rules to their particular circumstances.

So, I think what we need to push for here, as we start moving to operationalize the values that are represented in this great special issue, is to think about how you do due process in a way that can be operationalized at scale by private platforms. Clearly, the old institutions we have to safeguard due process -- the courts and the civil society that accompanies them -- are not currently well suited to doing that day-to-day work of accountability, to ensure that decisions are made in a way that's legitimate.

I really want to close on two key avenues for collaboration, since we have all of us together in this room thinking about where we might work together next when we leave the IGF. The first question is how we take the high-level values that exist and proliferate in all of these declarations about rights -- new ones released this week by the Web Foundation, continuing a long history of what you can think of as digital constitutionalism documents, documents that articulate the principles that we want to see not just binding on states, but that we want to see platforms adhere to when they make decisions that affect individuals. There is a lot of work to be done here on how to translate the high-level values into more concrete rules with real legal binding force, and into self-regulatory processes that help to hold companies to account.

The second is figuring out how civil society organizations can do the accountability work. I think we need a better institutional process so that we can actually monitor how well platforms are performing against the high-level values and hold them to account against basic human rights. That means a lot more access to data so that we can track things at scale over time; there's work for researchers to do there, and for civil society to take that research and put it into practice in a way that can actually move us from transparency towards real accountability.

And so, two quick plugs as I finish. The first is the International Communication Association post-conference on platform governance. If anyone wants to come to Brisbane in May, it's beautiful. The call for papers is out now and closes on the 20th of December, and that's a chance for us to think through how to implement regulation and when self-regulation can work effectively.

And, second, the Santa Clara Principles on Transparency and Accountability in Content Moderation are entering a review phase next year. We encourage everyone here to get involved in that process as we start to work out exactly how we do this work of holding platforms accountable against public interest values in a way that protects freedom of speech, innovation, and everything that we care about. Thank you very much.

>> Thank you very much, Nic. We'll take some questions at the end of the session. First, before moving to the next set of speakers, I would like to thank Monica Resina from Facebook, a very important player in this discussion, which has been thinking a lot over the last year or more about mechanisms for oversight and accountability. So, Monica, you can tell us something about this initiative.

>> MONICA RESINA: Thank you for the invitation. It's a pleasure to be here today. I'm Monica. I've been working for Facebook Brazil for a little over three years now. We have very few minutes to talk, so I'm going to highlight what I want to say in three main points. Over my past three years at Facebook, I have been able to witness a pretty amazing shift in how the company looks at content issues -- not only content issues, but because that is the focus of this session, we're going to focus on content -- in terms of values and also in terms of accountability.

So, the first point I would like to make is transparency. The company has really, really worked over the last couple of years towards becoming more transparent. Our community standards are written in very easy, very accessible language. They are available online, in depth, for those who want to research and learn about them further.

These are standards, or internal policies, that are constantly evolving, and we've been working hard to make that process transparent as well. Now you can read about all of the internal discussions we have at Facebook regarding content and content regulation, because we publish minutes of every meeting that takes place and that results in policy changes. We've been consulting with experts on the ground -- that includes academics, civil society members, the civil society community -- and that's taken into account in every single decision we make at the company. For those who are interested, I welcome you to learn a little bit more about the process and to read our minutes.

A second point I would like to make is in terms of values. You now see fundamental human rights values embedded in Facebook's approach, especially towards content moderation. One of many examples that I would like to bring up today is the latest update that we did to the values that guide our community standards. You'll find it on Facebook's Newsroom, published on September 12, by our Vice President of Global Policy Management, Monika Bickert. It updates the values that Facebook holds itself accountable to and that guide our community standards, starting, first of all, with the commitment to free expression. The commitment to free speech really guides Facebook's approach towards this.

And this is a very explicit and specific text that recognizes that the internet does bring new and increased opportunities for abuse. Whenever Facebook needs to act to limit free expression, it will guide itself by principles such as authenticity, safety -- safety is number one nowadays at the company -- privacy, and also dignity. Just one thing I would like to highlight on the dignity piece: you'll find human rights language embedded in the language of our own updates to our community standards, such as: we believe that all people are equal in dignity and rights, and we expect that people will respect the dignity of others and not harass or degrade others. That gives you a glimpse of what I'm talking about.

Then, finally, I would like to bring up the external oversight board, for those of you who don't know what Facebook's external oversight board is. It's an independent external body of experts that will review content decisions made by Facebook, and the decisions of that external oversight board will be binding on Facebook, meaning that Facebook will not be able to reverse them.

The process through which this oversight board has been established is a beautiful one. The company has been consulting with experts for a little over a year now, and it's been extremely transparent. I know a lot of people who are here today have been part of that process. Hopefully, as we enter 2020, we will have the board operational and working on concrete cases, and hopefully those decisions will also help guide and improve our internal policies.

This is unprecedented in the industry, and I'm extremely proud to be able to be part of this work. I'm looking forward to seeing how it plays out in the future. Thank you.

>> MODERATOR: Thank you, Monica. We'll take two questions before we move on. That one over there -- yes, do you have the microphone?

>> AUDIENCE MEMBER: Hi. I was wondering -- I hope you know that Microsoft has something called transparency centers. That means trusted partners -- I think there are seven trusted partners, the EU is one -- can go to five centers, I think one is in Brussels and one is in Singapore, and really see what's actually happening. Will Facebook do something like this? Will this become a standard in the industry?

>> MODERATOR: The gentleman next to you?

>> AUDIENCE MEMBER: Thank you very much. For Facebook: you mentioned that freedom of expression principles underlie all of your moderation guidelines and policies. Could you say which freedom of expression principles? Are these under the Universal Declaration of Human Rights? Are they under regional human rights instruments such as those of the Organization of American States or the European Convention on Human Rights? And could you comment on the differences among those standards, if you have any views on that?

>> MODERATOR: I see another hand, I'll take another quick one.

>> AUDIENCE MEMBER: A question for Nic. I haven't read your entire work yet, but I take note of the two avenues for collaboration: the first is to translate the values into concrete rules, the second is to hold platforms to account. I want to encourage you -- if you listened to the discussions in the main room, platforms themselves face an existential risk of disappearing if they are held liable for the things we do, see, say, and sell on platforms.

We heard a few speakers in the anti-terrorism session talk about that. So civil society, which values the ability to exercise free expression on platforms, probably has to help avoid having those platforms suddenly become directly liable, through government action and civil lawsuits, for the things we do and say. Thank you.

>> Good morning.

>> Hi. So, on the first question, I'm not familiar with the Microsoft program or process that you mentioned, so I'm unable to speak to it. We can follow up.

Also, on the freedom of expression principles, I don't think I have time -- Nico is looking fiercely at me across the table -- to go into detail. But we can definitely follow up offline as well, and I'm happy to connect you with some people who are working specifically on these standards. What I meant is that the company will always err on the side of free speech, but there are, of course, spaces in which, if we want to preserve the safety of our users, the company will need to set some limits -- such as for any kind of expression that might lead to real-world harm, or bullying or harassment. It's a difficult standard, finding the exact right balance. But we always look at those areas and try to work on them, taking into account that free speech and free expression is in the DNA of the company, and we always have that as our main goal, for sure.

>> MODERATOR: Nic?

>> NIC SUZOR: Yes, thank you very much for the question. And I agree wholeheartedly that limitations on liability are fundamentally important to promoting freedom of speech and the viability of the platforms that we rely on. I think one of the key challenges is that the lack of legitimacy in how platforms have made decisions about setting and enforcing the rules to date has led to all of the pressure to carve back the safe harbors and impose more liability. So I think the answer is that we have to do both: we have to keep advocating for strong intermediary liability protection rules and clear, effective rules for when platforms have to take action according to law. But the tradeoff then is that platforms need to improve, and they need to make their processes more legitimate, in order to increase trust.

>> MODERATOR: You wanted to comment?

>> YSEULT MARIQUE: Yeah, I think I agree with the intervention, and I forgot to say that the special rapporteurs recommended that the platforms adopt international law in this field and, with this in mind, look at the tests of necessity and proportionality in their decisions, reacting in a narrower way in their terms and conditions. And finally, we understand that we need decisions and capacity building inside the companies. You know, I participated in Mexico in one of these consultations, and we discussed some decisions that the companies had taken internally before, and we saw in that meeting that the companies lack the capacity to adopt this kind of international law.

And finally, there is the idea that self-regulation, the regulation of the state, and policies that improve transparency will ultimately align with freedom of expression, although there are some concerns about what this means, no?

>> MODERATOR: Now we'll move to the second segment of the session. I would like to ask the panelists to keep their presentations short and sharp so we can move expeditiously -- not really expeditiously, but in a way that guarantees we will reach the third segment of the session if we have time. So, platform values and content moderation is the second segment: analyzing what the impact of moderation -- both algorithmic and made by hand by human moderators -- is on people's ability to share and receive information, but also on companies and businesses that rely on visibility for their business model, like journalists.

So, first, let's start with Chris Marsden from Sussex University, who will provide some insight on their contribution: what is the impact of content moderation on elections?

>> CHRIS MARSDEN: Thank you. I know how hard it is to get this monumental effort together. Our article is not yet paginated; it's about halfway through the issue -- I'll mention it in a second. First of all, I have to note that in the United Kingdom there is a national university strike taking place this week and next week. Were we in the United Kingdom, I would be on strike. I want to acknowledge that and acknowledge the colleagues on the picket lines today on a cold, miserable day in the UK. If you want to hear more about that, e-mail me and you'll get an out-of-office reply; we'll tell you about it.

So, I want to note that the article is called "Democratic elections: how can the law regulate digital disinformation?" We know about the use that presidents, prime ministers such as Modi, and others make of platforms such as Facebook, and it is a critical issue going forward to see how these platforms are being abused in order to spread misinformation. Ian Brown is both a visiting professor and at Research ICT Africa. Ian, you will note, put his name last on the article, which reiterates what a wonderful collaborator he is, but also that this is work that we're doing as part of a report for the European Parliament that came out last year and that continues with the Commonwealth.

Very quickly, the article examines how governments can regulate the social media companies, such as Facebook, that regulate through code -- that, in fact, themselves regulate the disinformation on their platforms. We use disinformation to refer to the motivated faking of news; we have a definition in the article. We examine the effects that the disinformation initiatives of these platforms have on freedom of expression -- defined under Article 10 of the European Convention on Human Rights, which is very different from the standard that Facebook, of course, uses as its guiding principle -- as well as on media pluralism and democracy. We look at this through the wider lens of tackling disinformation online, and the concerns about the requests for proactive, automated removal measures by intermediaries. We looked at the automated decision making: we are talking about billions of accounts and many billions of comments, and there is a real danger that the machine is driving the decision making.

I'm going to skip directly, despite the deliciousness of section four, which looks at six options for regulation, to the conclusions. Those were shaped by the comments made on drafts of the article, saying we want you to look at these specific things, so I'll explain these very specific things very, very briefly. The value at stake here is the protection of representative democracy. The size of the economic actors involved means regulation is less of an existential threat for the companies; they are affected by the regulation, but we maintain that the issues for democratic and social values are paramount over the economic health of individual companies. With respect to public choice theory, which theorizes that politicians will pursue a self-interested course, the conduct of elections is their primary concern, and given that distinction between elections and other forms of public policy, we find it unsurprising that electoral reform is central to the concerns of politicians. Politicians make electoral law after winning elections, let's remember that.

We caution that elections take place in a multimedia environment that is converging on digital media, but existing forms of media still predominate. So we do not maintain that online is the only area for reform.

Lastly, a set of important questions: what kind of regulatory tools can uphold the policy values of regulation? The processes to restore the democratic values of social justice are important, and in particular forms of human-centered co-regulation. So we do not propose self-regulation; we think that ship has sailed -- self-regulation has failed. We think co-regulation is required, to make sure that the companies maintain transparency so that we can trust that we have binding forms of self-organization.

To what entities do we apply rules based on specific values? Who should be the recipients of regulation to protect the values we choose? It is we who spread the information, but so do the social media platforms, and the electoral system itself is a sociotechnical system -- it is not just a technical system, it is the deployment of technology in a much wider context. Disinformation is as old as the written word; we explain this in the article. It cannot be solved. Disinformation cannot be removed, but its worst effects can be somewhat ameliorated using the policy options outlined in section 4, in particular co-regulation. A final point, to put it in a wider context: as with so many regulatory problems, from railways to nuclear power to the internet, the lessons of regulatory history are important to adapting existing and deploying new regulation. The complex socioeconomic deployment of innovations is what creates regulatory issues, not the technology itself.

>> Thank you very much.

>> CHRIS MARSDEN: Context, context, context.

>> MODERATOR: Thank you. I urge all the panelists to stick to the five minute maximum so we have some time for discussion. Now, the coordinator for the certainty of quality in society.

>> Thank you. Let me thank you and Nicolo for your work. In the interest of time and discussion, I picked four topics that I think are relevant for our panel, and I'll go over them briefly.

So, what I would like to propose is that we need to shift the focus of the conversation about content moderation and platform responsibility in four ways. One: we should talk about the responsibilities of platforms less in terms of illegal speech that platforms fail to remove and more in terms of legal speech that platforms illegally or unconstitutionally censor. In certain countries it is clear that this violates free speech, and community standards, as contractual clauses, are in some instances abusive.

In other countries, as is the case of Germany or Brazil, the constitution binds private parties, directly or indirectly. In such countries, courts could derive such an obligation directly from the constitution and impose this obligation to respect free speech on private platforms.

Second, we should consider state regulation that is less about the merits of individual instances of speech and more about the protection of procedural rules; some people on the panel have already talked about this. The best and latest example of the way to go is the Santa Clara Principles. So we need transparency about moderation practices, notice about specific removals, and the opportunity to appeal. Platform liability in this sense needs to be for failures in terms of overarching procedure and architecture, not for individual instances of speech.

Third, this means that there is a previously unthinkable role for federal agencies to exercise oversight of platform compliance with basic procedural rules of moderation. Now, this certainly raises concerns about the influence of the executive branch of government in decisions on the merits of speech. But agencies are better equipped than courts to evaluate the use of machine decision-making tools in moderation, the working conditions of moderators around the world, and the intricacies of platform architecture.

And, fourth, I believe we should encourage fewer decisions on the merits of specific instances of speech by platforms or courts, and more by users themselves. We need to separate the discussion of who makes the rules about what can and cannot be said from the issue of who enforces such rules. Platforms should no longer be alone in this effort, something Facebook has acknowledged with the establishment of the independent oversight board.

This is about user empowerment. There are no perfect answers in terms of speech; it's subjective. But it's important to have processes, and processes that reflect local cultures, values, and realities. Thank you.

>> MODERATOR: Excellent. Thank you for respecting the time; you deserve the applause, truly. Now, how to democratize online content moderation? Giovanni, please?

>> GIOVANNI DE GREGORIO: Good afternoon, everyone. First of all, let me say, as all the others have done, that these efforts are very important for all of us to start and continue talking about the role and responsibility of platforms in recent years.

My paper is focused on how we can democratize online content moderation. It tries to understand how the right to free speech, which has been conceived from a negative perspective as the right not to be subject to interference from public actors, probably needs to be recontextualized in the platform environment, requiring public actors to intervene. We need to understand whether we can intervene in the field of online content moderation with the right to free speech alone, because we know that, of course, the right to free speech is a core value for our democratic society.

This is important not only in, let's say, the offline environment but also in the digital environment. As we know, content flows on platforms according to business priorities: the moderation of content is based on business purposes, on profit maximization, on user profiling and the revenues that come from this profiling. From this perspective, we should probably think about whether the right to free speech, which is basically concerned with public actors, is still enough to protect users, and whether it is enough to protect democratic values online.

So what I try to argue in my paper is that we probably need to move from a negative to a positive dimension of the right to free speech online. Moving from a negative to a positive dimension basically means requiring public actors, under a positive obligation -- if you look, for example, at the case law -- to intervene when there is a serious threat to the protection of fundamental rights, in this case online.

This positive approach to freedom of expression basically leads us to wonder what kind of regulation of content moderation we can try to draft and think about for the digital environment. I tried to divide the process of moderation into three main phases. The first phase comes before the content is moderated: the platform explains to users how content moderation works and on the basis of which criteria -- as has already been said, what real standards platforms use when they moderate content. This requires transparency from the platforms in order to allow users to understand, first of all, how content is moderated.

Then the second phase concerns what happens upon receiving a user notice, and is, of course, based on the notion of awareness.

The second phase involves the content provider and the notice provider: the platform should in some way be required to give notice about what is happening, when content has been flagged, and so forth. It especially concerns decision making: what happens after the user notice, and what kind of safeguards can be implemented, especially since moderation is usually performed using automated decision-making processes.

The last point concerns what happens after receiving the outcome of content moderation: what a user can do after having received a decision from the online platform, and under which conditions users can access a redress mechanism.

But all of this, in my opinion, should be based on principles -- and I'm almost finished. The first principle is, of course, that platforms should not be required to generally monitor content. The second involves transparency and accountability, so that users are more aware of what happens when content flows in the social media landscape. This also includes proportionality, since regulation could otherwise undermine smaller platforms and the social media market. Last but not least, it's important to focus on the role of human oversight in moderation, taking some clues and some perspective from the field of data protection. That's it. Thanks.

>> Thank you very much. We should add, commenting on what you were saying about the implementation of the right to an effective remedy, that in this special issue and on the IGF website you will find the best practices on effective remedies that we elaborated collectively last year, over an entire year of consultation. Both here and on the IGF website, they are freely available for any platform willing to have guidance on how to implement the right to an effective remedy properly.

Now, something very important. Those affected by moderation are not only individuals but also businesses, and especially journalists and media outlets. So I would like to give the floor to Dragan. Thank you, Dragan, for being with us and sharing insights on how platforms are affecting your work and your life.

>> DRAGAN OBRADOVIC: Thank you for the invitation. I'm coming from Serbia and, as you can conclude from the name, from an investigative journalism portal. I will address this topic from the perspective of the consequences we are facing in our everyday work, and they are not small. Thinking about the conditions in which we are working: we work, I would say, without a functional democracy, meaning without institutions that will protect us, without a functional media market, and with the capture of the mainstream media. That basically means that the internet is the only free space where we can promote our content and inform thousands of our readers about corruption, about organized crime, about the issues of the opposition.

And basically, you know that today the internet means platforms -- Facebook for my generation, Google for my son's generation. That means that we are very dependent on the platforms' policies and vulnerable to the changes that happen to them. I'll just briefly mention two examples to illustrate that kind of dependency. In 2017, Facebook introduced a special feature called the Explore Feed. It was introduced in several countries, and among those countries was Serbia. It meant that all media content was one click away from your regular feed -- it was separated into another feed. And for us that meant a drop of 30% in the traffic on our website.

Fortunately, that practice was abandoned; they decided the experiment was not successful. The other example that I will mention is more recent -- it's from last month, basically. When we published a story about high-level corruption happening in Serbia, a story about a high-ranking official that was followed by other media, we were informed by YouTube that the video would be removed, taken down, based on privacy complaints, without information about who complained or over what, exactly. We got 48 hours' notice to fix the content, even though they would not say what was wrong with it. We went back to YouTube and tried to defend our case. The content was taken down. We managed to get it back online only eight days later, which in the media business, you'll understand, is quite late.

So, these are examples that illustrate the kind of vulnerability we have to algorithms, rules, and basically experiments that you are subjected to and that you don't know enough about. And those taking advantage of and benefiting from that are the ones who do have the money. In the case of Serbia, and of countries that are not fully democratic, that means governments and local tycoons -- the people we are, in fact, reporting about. They have enough money for targeted ads, to buy bots and services that are totally distorting public discourse on social networks and basically affecting public opinion.

What is happening? Platforms are making a profit from that. At the same time, in this situation you cannot report these things to anyone, because the platforms that are making a profit in our countries consider them too small a market to have local representatives. So, if something like that is happening to you, you basically send an e-mail that never comes back, because you're sending it to an address that no one is checking.

And when they speak about a small country, I understand that it's a small market. But the fact is that the percentage of people on social networks is very high: in the generation from 16 to 25 years old, over 90% of young people have accounts on social networks.

So, the end result of this situation is that journalists are the primary targets of online threats and harassment; the second are human rights defenders. And at the same time, those attacking us can easily pay to get your content reported, like, a thousand times, and then your content is blocked and your profile is offline. So, thank you.

>> MODERATOR: Thank you very much. In the interest of time management, I will only take one question so we can move to the next segment. Do we have a quick question? Otherwise -- yes, please?

>> AUDIENCE MEMBER: Thank you very much. My name is Wizette, from Nigeria. The discussion has shifted, and now we need to do co-regulation. I'm wondering, how do you do co-regulation in a context in which you are dealing with power and power pressures, and also with limited resources? How do you manage that in such a way that civil society voices count and are not drowned out by other elements in the framework?

>> I'll be brief. We ran a workshop in September including Nigerian representatives. People will be aware that two years ago in Kenya, the chief information officer of the electoral commission was murdered a week before the election. This is a very real, very serious issue, and it reminds us that talk of regulation can look like a joke when it comes to something quite as extreme as that. In Kenya, they had to rerun the election. So co-regulation really does two things. It makes sure that the platform providers have to play ball because the legislation is in place; they don't have an option whether or not to be more transparent or to set up these nice mechanisms -- they have to do it. But the other thing is that it gives the government the flexibility of knowing it can rely on technical expertise being in the process -- a multi-stakeholder thing. You can get independent experts in universities and other places to assess what is going on inside the self-regulatory black box of the companies. It's not perfect. It needs a lot of technical detail. Some of us have worked on the technical detail, and it helps to have regulation in place that is aimed in the right direction, building on self-regulation while avoiding some of its pitfalls. So, it's a hybrid, it's a compromise. It's not perfect. But there's a book called "Internet Co-Regulation" that talks about it.

>> MODERATOR: Okay, thank you very much. Now we move to the third part of the discussion. Here we have a number of papers pointing out how we can reconcile values, balancing one value against a different sort of value, and how platforms can create a framework that makes them accountable in that regard.

There are two different sets of papers in this part of the discussion: one has to do with artificial intelligence and how conflicting rights are balanced in that context, and the second with platformization more generally.

So, we start with the paper on artificial intelligence and the interplay of ethics and legal norms, by a professor from the University of Zurich.

>> Thank you. I would like to speak under the heading of conflicting rights and values, not by summarizing my article but by trying to offer a somewhat fresh perspective. The observation is that, at least as a lawyer, I am very much used to trying to solve conflicts in society. We heard about freedom of expression on the one hand and censorship, algorithmic intervention in the flow of information, on the other side. We also have the well-known discrepancy between property rights -- rights to aggregate, for example, to collect data -- versus privacy rights.

We now have a specific provision in Article 22 GDPR on the way automated decision-making can comply with data protection principles. But this is not the topic of my four minutes; I would like to look more closely at the question of how to reconcile legal rights, fundamental rights, with economic and social values. Insofar, I think we have to depart from the observation that data as such is also a value, and here we have a tension between the person who builds a private data silo based on the collection of data on the one hand, and the interest of the larger society in sharing the data on the other. A bridge between these two situations can be seen in data access rules, and data access rules in principle should lead to a balance of interests, based on interest-balancing tests which would compare the rights of the data creator and the rights of the original holder of the data.

Usually, if you look at the economic perspective, we of course call upon competition law, which has not been much mentioned in this workshop. Regulatory interference into the process of value exchange appears to be justified if the creator of the data is earning something like an economic rent which exceeds the amount adequate under economic or social considerations following from the exploitation of the data. This is likely to happen in the case of automated platforms, since the owner or controller often enjoys a market-dominant position. The higher the market power, the higher should also be the responsibility of the respective market participant -- and, equally, the more likely it is that a pro-data-access rule could be applied to the market position of the respective creator of the data.

This is one aspect. I'm happy to discuss the competition law questions later; I cannot do so at this moment. I would like to look more at the ethical, or social-ethical, dimension of the whole problem. This dimension would require the implementation of new principles, some of which were mentioned this afternoon: certainly access, and also accountability and auditability of data processes, including validation and testing. We might have to consider implementing new instruments, for example ethics bodies having an influence on the platform owners at a national and at an international level. And we might have to look at strengthening human autonomy principles as well as fairness and trust principles. Thank you very much.

>> MODERATOR: Thank you. In the interest of time, please refrain from thanking us again and just proceed with the presentation. Catalina is next, from Maastricht University.

>> CATALINA GOANTA: No, thanks -- I'll do that later. My paper is "The new city regulators: platform and public values in sharing cities." I'll present this in three brief acts. First of all, I want to move everybody's attention from the content moderation we discussed for social media platforms to the actual physical realities of the city. That is a space that has historically been affected by privatization -- a lot of companies operating there -- but now a lot of internet platforms are operating there too. A very good example, just for you to understand the issue of all these public and local values that come into play: in the city of Munich earlier this year, someone at the municipality received a call two days before Lime arrived with a fleet of 12,000 e-scooters in the city, and then they had to deal with whatever happened out of that. So this is the first point: the privatization of the public space by platforms.

The second point is: what is the state of the art when it comes to the local values that can be mentioned in this situation? It's very difficult to really pinpoint which values we're talking about, for two particular reasons. On the one hand, if you're speaking about municipalities, everything is very political, so a lot of values are expressed in very electoral, narrative-driven discourse. And with platforms, there is one thing you see in their general terms and conditions and another thing you see in their marketing. For instance, with Lime, my example for today, you can see they really advertise simple and accessible mobility. But if you look at the general terms and conditions, they actually have a liability waiver that even covers public authorities. So Lime even says: we're not liable for whatever happens if you use our service, and even the public authorities that we signed contracts with are not liable either. That's very, very interesting, and there's a lot of controversy there.

So the idea is that it's very difficult to really pinpoint these kinds of values. We do so in the paper; more on that there. But there are also clearly some values that are very similar. As we heard earlier on, safety is a value that no micromobility company is going to say it doesn't back, because they are definitely very interested in that as well.

Now, what can we do about the conflict between platform values and local public values? In the paper we have some daring proposals. These build on what Chris was mentioning earlier, the idea of co-regulation. To build also on one of the questions we heard earlier, co-regulation maybe hasn't worked that well because of a lack of a framework for it. A very clear, concrete example of what we propose in the paper is: why don't municipalities impose a legal framework obliging the platforms, for instance, to negotiate in good faith the way in which their services are going to be deployed on the streets of a city like Munich?

Second of all, co-regulation is not just about substantive rights; it's also about enforcement, and that's very important to keep in mind. Bruce, whose work we build on in the paper, has an amazing concept of public interest technology that encapsulates the need to create more interaction also in terms of enforcement. Another author, also present in this special issue, has written a lot about the co-regulation between the city of Amsterdam and Airbnb, and about the fact that it has actually failed, simply because, on the one hand, Amsterdam really needed to get the data infrastructure from Airbnb and, on the other hand, Airbnb didn't need the municipality. So these are the two ideas that we put forth in the article, and of course they can always be built upon, especially in different contexts like, for instance, social media. This is the story that I wanted to share with you today. Thank you.

>> Fantastic, thank you very much.

Now, concretely, how to balance in this process is a question that comes up more and more. And we have an excellent paper by Yseult Marique -- correct me if I mispronounce it. We'd love to hear more about your paper.

>> YSEULT MARIQUE: We see the complexity of the underlying values, and we see that it is very ambitious to look for a one-size-fits-all solution. A way to move forward might be to look at more specific tools that are effective in ensuring that particular considerations are reflected in the decisions taken by the digital operators. So what we try to do is go through three steps, from the more general approach to the more specific ones. As has been mentioned already a few times, in the first step it is important to reflect on the unique role played by the platform operators when they remove content or take similar action, because we think they are mostly to be regarded as, in some ways, protecting democratic values when they police undesirable behavior on the platforms. You can see some blurring between the private and public roles that the operators play.

So, in the second step, we think it is important to keep to a legal level, because we talk about the different acts, that's true, but as lawyers we should also give a specific legal label to these actions. We think that, legally speaking, these are sanctions: they are the expression of powers exercised by one agent, the platform operator, over the platform users. And in general, there are consequences attached to sanctions. We have already mentioned one big one, that they must be proportionate; we can think of other consequences such as predictability or the need for review mechanisms.

So, in the paper we looked at the proportionality test that has been discussed by other authors, but mostly looking at the relationship between interferences with privacy on the one hand and freedom of expression on the other. And in the paper we try to go a little bit further with the test, pinpointing a few issues and questions linked to the proportionality test, such as: could we really break down the test and feed it into the decision-making process, and how can we do that breaking down a little bit more specifically? When can, or should, the proportionality test be used? And we also have a question that maybe some of you have input on: it is very good to think about the proportionality test, but is it not going to become too individualistic an approach to life on the internet, and how do we account for the more collective aspects of life on the internet? So, we think we need to go back to the specific roles of operators to build on this further. Thank you.

[ Applause ]

>> MODERATOR: Thank you. In order to have some discussion, I propose we leave the last word to the publisher of this special issue, Catherine Carnovale from Elsevier. They have been so kind to support this, and they are a platform themselves, which has to balance and take into account different values. They have agreed to say a few words about their policies, in particular with regard to artificial intelligence. We can open up the floor after that.

>> CATHERINE CARNOVALE: I will be quick. My thanks to you both. As the publisher, I'm proud to represent the journal supporting the research that we are discussing here today. It is a forerunner in this area, publishing legal analysis and policy development since 1985. When I first heard about this, it made me think about the responsibility of being one of the largest publishers of scientific research in the world. The articles in this special issue have joined 16 million articles on ScienceDirect, and as the provider of that digital platform, we bear a responsibility to our authors and readers. We recognize there is growing demand for open access, with both practical and ideological motivations. With that said, over 80% of our authors chose to publish under a subscription model last year, and we see growth in both models.

For this reason, we're dedicated to providing our researchers with choices in how they publish their articles. We recognize that different communities have different needs and are open to testing new concepts that give value to all parties. We also recognize that researchers in developing countries have different priorities. Eighteen years ago, Elsevier partnered with the World Health Organization on an access programme that is central to enhancing the scholarship, teaching, and research of thousands of individuals and reduces the knowledge gap between developing and industrialized countries. Our role within the community is changing. We have played a part in scholarly communication for 140 years, primarily as a publisher, and we are now an information analytics business, and we recognize the power of the knowledge that we hold.

We employ technologists and data scientists who apply machine learning to generate insights. We support clinical decision support applications that utilize these technologies. We're committed to bringing transparency to our platforms, especially at a time when it's hard to know what to believe. In the interest of time, I'll leave it there. Thank you for the opportunity to speak today. Please pick up a copy of the special issue, and I look forward to receiving your submissions in the future.

[ Applause ]

>> MODERATOR: Thank you very much to all. It's been an impressive set of presentations. I'm sure there are some burning questions, or perhaps you want to clarify or go a little bit more in depth on one of the topics, so I will give you the floor. Is there any hand up? If there is no question right now, I think it's perhaps good to recognize that we have tried to bring together very different contributions, so what we would like to propose, moving this discussion forward, is to ask ourselves three important questions. The first is: what are the values? Do we have a common understanding?

I think we have been talking about different sets of values. Do we have some minimum core, some understanding of the values that we want platforms to promote, and which platforms in particular? The second is: what are the best strategies to ensure that platforms not only maximize shareholder value, but also take into account the broader set of concerns of societal players, so stakeholder value? The third is: to what extent are platforms the best entities to identify what those values should be and how they should be promoted?

This brings up a couple of further questions: to what extent is self-regulation sufficient to deal with certain sets of values, to what extent is regulation needed, and to what extent is a co-regulatory framework the solution? These are some of the questions that we wanted to put out for discussion through the special issue, but also, of course, to continue discussing in the coming year.

If you are not a member, you can join our mailing list, and we'll share some ideas about what could be the output for next year. Is there any question or suggestion with regard to the dynamic coalition's work? And how would you like to participate, or see us achieve concrete outputs, in the next year?

Or even any suggestion of an outcome or output that you would like a group of people working on platforms to work on, something that could be concretely meaningful for platforms, for regulators, and for users. We have been working for five years and we intend to work more. So if you have any suggestions, any burning issues that you think would benefit from people working on existing or potential solutions, don't be shy. This is the time to share your thoughts with us, or you can also do it on our mailing list.

>> I have a thought. What Nic said about the Santa Clara Principles and how they'll be rewritten is fascinating; collaboration is encouraged there. The second thing is the Digital Services Act, which presumably won't be passed within a year's time, but nevertheless the new Commission has been confirmed, and we will be even more engaged next year, I believe, so we'll continue with that European conversation. And the third thing is, disinformation is not about to go away. It's been kicking around for 10,000 years, so I'm presuming it's not going to be eliminated in the next year, and I think it will be a much, much bigger rather than smaller issue, sadly, over that year.

>> Yes, indeed, I think the Digital Services Act is a very important regulatory battle that we're going to see unfolding over the next year.

I should mention we had an informal meeting yesterday, and we thought one concrete way of shaping platform regulation is to create a concrete and simple policymakers' book, like a handbook for regulators, and we could focus it on specific topics like the ones you mention. That could be a good solution.

>> It could be interesting to look at how you can deal with the negative impact of content moderation through non-content-related regulations. I think that's a challenge. I would also note that neither the mailing list links nor the ones that you sent me are working.

>> I defer the responsibility. I have notified them several times of our change of mailing list, but the webpage has not been updated. So we will tweet the correct link to subscribe to the mailing list right now. I see there was another lady there willing to say something? You, with the orange? The orange sweater.

>> AUDIENCE MEMBER: Yeah, thank you. I'm Senna, a researcher; I work at the network society here in Berlin. I'm researching the outsourced model of content moderation and the related business practices in India. I think it would really help to actually unravel what a platform really is. I'm sure that it's more than technical infrastructure, and this sort of neutral category has allowed these businesses to avoid any transparency, for that matter. So I think it would help to unravel what a platform is, especially in the context of social networking sites. Thank you.

>> MODERATOR: A quick reply to this. Actually, in our recommendations on terms of service and human rights, that was exactly the first question we had when we started to work five years ago. In the recommendations, we define a platform as any application allowing users to seek, impart, or receive information according to the terms of service defined by the platform provider. So, a very broad definition that encompasses everything from a blog to a social network or an e-commerce vendor.

You may propose to update it or to suggest something different if you want. Inputs and positive or negative critiques of what we do are always welcome, so please do provide your feedback on this.

>> There will be a chapter, a part of the handbook that focuses on the definition of platform.

That's an essential term, like other terms such as what it means to be neutral. You know, some myth-busting of the kind that I think the IGF has nicely done in this book that we have received. That's something that we would like to deal with in the context of platforms specifically.

We have another question over there? First one --

>> AUDIENCE MEMBER: I think, being involved in the group, we have to force ourselves to work more in a -- (indiscernible). We haven't done so in the past, but we should do more.

>> MODERATOR: Noted.

>> AUDIENCE MEMBER: A question about your definition of platform. Catalina's article talked about the scooter companies, and that is a completely different notion of platform.

It's not a peer-to-peer arrangement where you're sharing the scooters with others; it's the company that owns the scooters and puts them in place. It's not peer-to-peer, and it's not about people expressing themselves. I'm confused why it fits into your dynamic coalition, thank you.

>> CATALINA GOANTA: This really links to the conversation around the sharing economy, right? But there is no sharing in the sense that it is a peer-to-peer sustainability initiative. It's not; I'm agreeing with you, it's not. But if you think about platforms like Airbnb, like Uber, like Lime, they basically perform the same role of being an intermediary and providing a platform where people can come together.

So, while Lime does indeed own the scooters, it basically facilitates the use of these scooters, and moreover, there's another party that I didn't mention in this constellation: the actual juicers, who take the scooters off the streets and have to charge them with electricity. That's a very interesting parallel with the outsourcing of the content moderation job we heard of earlier. I agree with you, there are different types of platforms. This really is something that can be clarified in the handbook.

>> MODERATOR: Thank you for the clarification. We have to draw this to a close. I want to mention that these days everything can be platformized; indeed, even the city itself can become a platform, as with smart cities. Today we provided a platform for discussion, and we hope to continue that over e-mail and to see you in attendance at the next IGF. Please come to us after the session if you want to be added to the mailing list and have trouble with the link. Thanks to all of the authors and the participants.

[ Applause ]