Preview of the 2017 DCPR Outcome: Platform Regulations (DC on Platform Responsibility)

Preview prepared by Luca Belli and Nicolo Zingales

Since the World Summit on the Information Society (WSIS) in 2005, Internet governance has been widely understood as the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet. This definition has fostered a lively and interdisciplinary debate about the roles and responsibilities that might be attributed to different stakeholder groups in different contexts, particularly considering the extent to which their actions affect Internet users and society more broadly. In that regard, one of the most fertile grounds of discussion has been the evolving notion of liability of Internet intermediaries, defined by the OECD as entities that “bring together or facilitate transactions between third parties on the Internet”[1]. Originally, the focus of that discussion was on the need to provide intermediaries with legislative protections from liability for third-party content, which appeared insufficient and inconsistent across domains and jurisdictions. Gradually, the initial scepticism of some stakeholders matured into a shared understanding of the importance of these protections and the recognition of best practices, thanks also to consensus-building civil society initiatives such as those led by the Association for Progressive Communications[2] and by the Electronic Frontier Foundation, ultimately producing a set of guidelines entitled “Manila Principles on Intermediary Liability.”[3]

While the need to spread those best practices remains current, and has even increased following certain legislative proposals under consideration in a number of jurisdictions around the globe, a parallel discussion began to unfold concerning the potential effects on individuals of the private actions taken by intermediaries (in response to liability threats or otherwise), in particular when it comes to the exercise of their fundamental rights. Participants in this discussion observe the negative consequences arising from the proliferation of private governance regimes, and raise conceptual questions concerning the moral, social and human rights responsibility of the private entities that set up such regimes. The increasing importance of this notion of “responsibility” has not gone unnoticed, having been captured, for example, by the special report prepared by UNESCO in 2014[4], the study on self-regulation by the Institute for Information Law of the University of Amsterdam[5], the 2016 Report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression[6], the Center for Law and Democracy’s Recommendations on Responsible Tech[7] and, most recently, the Council of Europe’s Recommendation on the roles and responsibilities of Internet intermediaries[8].

At the same time, the notion of “intermediary” is increasingly being replaced in common parlance by the more palatable term “platform”, which evokes a role that goes beyond that of a mere messenger or connector, extending to the provision of a shared space within which users can carry out their activities and generate value. It is at this juncture that, at the 2014 Internet Governance Forum, the Dynamic Coalition on Platform Responsibility (DCPR) was created. The DCPR is a multistakeholder group established under the auspices of the United Nations Internet Governance Forum, dedicated to the analysis of the roles and responsibilities of online platforms from a technical, legal, social or economic perspective. Since its inception, the DCPR has facilitated and nurtured a cross-disciplinary analysis of the challenges linked to the emergence of digital platforms and has promoted a participatory effort aimed at suggesting policy solutions.

The Recommendations on Terms of Service and Human Rights,[9] whose development was facilitated by the DCPR in 2015, constitute a prime example of such efforts. The Recommendations represent an important first step in identifying criteria through which platforms’ private orderings can be held accountable for their impact on users’ fundamental rights to freedom of expression, privacy and due process. More efforts of this type are encouraged to extend the discussion to other rights, recognise the appropriate role for public policy, and define sound mechanisms guiding platforms in their response to requests for removal, including any balancing of conflicting rights and interests. While the extent to which this type of work should be conducted at the global, regional or national level remains one of the governance challenges of our generation[10], the urgency of this discussion can hardly be overstated.

Hence, this book offers a response to the DCPR’s call for multistakeholder dialogue, made ever more pressing by the diverse and rising challenges generated by the platformisation of our economy and, more generally, our society. Despite the evident need to address these challenges, finding consensus and a sense of shared purpose is not always an easy task. For example, significant controversy exists concerning the very notion of “platform” and the type of actors whose responsibilities should take centre stage in this discussion.[11] The above-mentioned DCPR Recommendations adopted a high-level definition, which is neutral as to the type of involvement in content creation or distribution, but refers to a specific type of intermediation that runs at the application and content layers, allowing users to “seek, impart and receive information or ideas according to the rules defined into a contractual agreement”.

This definition excludes, prima facie, from this particular discussion telecommunications companies and Internet Access Providers (IAPs), which remain at the core of other forums such as the Telecommunications Industry Dialogue and the Global Network Initiative. Nevertheless, as an attentive reader of the present volume will notice, legal developments on the rights and obligations of “upstream” intermediaries such as IAPs and domain name registrars (and registries) are considered to the extent that they inform, corroborate or anticipate the emergence of analogous legal issues “downstream”. By way of example, the discussion arising from the withdrawal of neo-Nazi content by certain domain name providers and content delivery networks (see e.g. David Kaye’s mention of Cloudflare) closely follows the thread of combating “hate speech” that led to the adoption of similar measures by social media companies; it should therefore be considered part of that broader tendency. Discussing platforms in isolation from parallel developments at the upstream level carries the risk of missing important insights on the legal remedies available to users affected by private measures, as illustrated by the evolution of the legal framework concerning injunctions against innocent third parties in Chapter 2.

The increasing centrality of digital platforms, both in the collection and processing of personal data and in the production and dissemination of content, has attracted growing political and regulatory pressure over the rights and responsibilities that ought to be attributed to them, and expectations are increasingly being placed on large platform operators to provide “safe” online spaces for user engagement. This trend is visible in the legislative proposals that have emerged in various countries demanding that social media companies prevent hate speech, incitement to violence or hatred, and “dangerous terrorist recruitment material.” In that regard, this volume offers some reflections on online platforms’ roles and responsibilities in the eyes of regulators, warning about the dangers associated with the increasing instrumentalisation of these entities for the pursuit of a wide range of (often ill-conceived) public policy measures.

Over the last year, one of the most visible trends in platform regulation has manifested itself in the context of the identification and prevention of “fake news”, stirring controversy over the role and impact of online platforms in influencing the shape and content of discussions in the public sphere. This discussion offers a perfect example of a recurring problem with platform regulation: an important part of the content that is supposed to be prohibited escapes clear legal definition. It comprises a variety of different phenomena and, therefore, arguably requires a combination of a wide range of measures that should not be based on vague terminology. While some proposals have called for special legislation to restore trust and create a level playing field, major platforms such as Google and Facebook have been quicker to address those concerns, including through structural responses and tools allowing users to limit their exposure to such misinformation.

A different but related problem has arisen regarding “brand safety”, i.e. the concerns of advertisers about the association of their ads with certain types of content deemed “inappropriate”. In March 2017, following a letter by the Guardian and the decision of many brands to pull their ads from YouTube, Google announced that it had heard those concerns “loud and clear” and raised its bar for “hateful, offensive and derogatory content”, which would be excluded from association with Google ads. Much as in the context of fake news, a swift response by the platforms to a pressing societal problem serves as a backstop to the spreading of harm, pre-empting possible legislative intervention. Yet important questions remain regarding the transparency, proportionality and effectiveness of the measures these companies have taken, and their impact on small and independent news providers and on content creators, some of whom (particularly those with content characterised as “sensitive”) have seen their ad revenues dramatically reduced since Google adopted this revised policy. Similar questions arise in relation to the recent emphasis by the European Commission on platforms’ responsibilities to protect users and society at large against the exploitation of their services for the dissemination of “illegal content”, a concept which is left for platforms to determine on the basis of EU and national law[12].

In addition to these content-related trends, platforms are increasingly under the scrutiny of regulators for various concerns relating to market power, information asymmetry and the collection and use of personal data. For example, the European Commission is considering the adoption of special legislation to assuage concerns of contractual exploitation of platform-dependent businesses[13]. Exploitation is also a central concern of the criticism being levelled at platforms for their relationships with workers and employees, most recently leading several tech companies to develop a code of ethics for worker values[14]. Finally, there are multiple investigations into the possible exploitation of personal data, relating both to its unlawful acquisition and to misuse leading to discrimination and consumer harm.

Against this backdrop, the need for a multistakeholder discussion on the roles and responsibilities of online platforms in our society becomes crucial. This book builds on the previous efforts of the DCPR and, although it does not pretend to offer definitive solutions, it provides some elements of reflection that should be carefully considered by all stakeholders in their effort to shape sustainable policies addressing shared problems regarding digital platforms.


[1] See OECD, The economic and social role of Internet intermediaries (OECD Publications, 2010) <https://www.oecd.org/internet/ieconomy/44949023.pdf> [accessed 31 October 2017].

[2] See Emilar Vushe Gandhi, ‘Internet intermediaries: The dilemma of liability in Africa’ (APC News, 19 May 2014) <https://www.apc.org/en/news/internet-intermediaries-dilemma-liability-africa> [accessed 31 October 2017]; Nicolo Zingales, ‘Internet intermediary liability: identifying best practices for Africa’ (APC Publication, 2013) <https://www.apc.org/sites/default/files/APCInternetIntermediaryLiability_BestPracticesAfrica_20131125.pdf> [accessed 31 October 2017].

[3] See ‘Manila Principles on Intermediary Liability. Best Practices Guidelines for Limiting Intermediary Liability for Content to Promote Freedom of Expression and Innovation’ (24 March 2015) <https://www.eff.org/files/2015/10/31/manila_principles_1.0.pdf> [accessed 31 October 2017].

[4] Rebecca MacKinnon et al., Fostering freedom online: the role of Internet intermediaries (UNESCO Publication, 2014) <http://www.unesco.org/new/en/communication-and-information/resources/publications-and-communication-materials/publications/full-list/fostering-freedom-online-the-role-of-internet-intermediaries/> [accessed 31 October 2017].

[5] Cristina Angelopoulos et al., ‘Study of fundamental rights limitations for online enforcement through self-regulation’ (IViR, 2015) <https://www.ivir.nl/publicaties/download/1796> [accessed 31 October 2017].

[6] Report of the Special Rapporteur to the Human Rights Council on Freedom of expression, states and the private sector in the digital age, A/HRC/32/38 (11 May 2016) <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G16/095/12/PDF/G1609512.pdf?OpenElement> [accessed 31 October 2017].

[7] Center for Law & Democracy, ‘Recommendations for Responsible Tech’ <http://responsible-tech.org/wp-content/uploads/2016/06/Final-Recommendations.pdf> [accessed 31 October 2017].

[8] Council of Europe, Recommendation CM/Rec(2017x)xx of the Committee of Ministers to member states on the roles and responsibilities of internet intermediaries <https://rm.coe.int/recommendation-cm-rec-2017x-xx-of-the-committee-of-ministers-to-member/1680731980> [accessed 31 October 2017].

[9] The Recommendations on Terms of Service and Human Rights are annexed to this book and can be found at <http://tinyurl.com/toshr2015> [accessed 31 October 2017].

[10] See the work carried out to streamline the interactions between different regimes by the Internet & Jurisdiction Project, described at <https://www.internetjurisdiction.net/>.

[11] For example, the relatively specific definition adopted by the European Commission in its consultations on online platforms – focused on the connection between two interdependent user groups – has been criticised for casting too wide a regulatory net, catching a wide range of actors, business models and functionalities. Nor did the European Commission achieve more consensus with its narrower notion of “platforms making available large amounts of copyrighted content”, identified as targets of a heightened duty of care in the proposal for a copyright directive. Indeed, this latter definition has triggered discussion as to the meaning of “large amount” and whether it should be defined (also) in relation to the profits made through the provision of access to such copyrighted material.

[12] See Communication on Tackling Illegal Content Online. Towards an Enhanced Responsibility for Online Platforms, supra n. 11, pp. 5-6.

[13] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on the Mid-Term Review on the implementation of the Digital Single Market Strategy: A Connected Digital Single Market for All, COM (2017) 228 final.

[14] Michael J. Coren, ‘Silicon Valley’s finest are finally developing a code of ethics’ (Quartz, 20 April 2017) <https://qz.com/964159/the-president-of-y-combinator-sam-altman-is-leading-an-effort-to-develop-a-code-of-ethics-for-silicon-valley-in-response-to-president-donald-trump/> [accessed 31 October 2017].

Part I – Exploring the Human Rights Dimensions

This first part of the book explores some of the most pressing challenges regarding the impact that public regulations targeting digital platforms and self-regulation developed by such entities may have on their users’ fundamental rights.

In their opening chapter on “Law of the Land or Law of the Platform? Beware of the Privatisation of Regulation and Police,” Luca Belli, Pedro Francisco and Nicolo Zingales argue that digital platforms are increasingly undertaking regulatory and police functions, which are traditionally considered a matter of public law. The authors emphasise that such functions have been increasingly delegated to platforms by public regulation while, on the other hand, platforms are self-attributing such functions to avoid liability, de facto becoming private cyber-regulators and cyber-police. After highlighting the tendency towards delegation of public functions to private platforms, Belli, Francisco and Zingales provide concrete examples of this phenomenon. The chapter scrutinises three types of delegation of public power: the imposition of open-ended injunctions against innocent intermediaries, typically for content removal or website blocking; the implementation of the right to content delisting against search engines, also known as the “right to be forgotten”; and the enlisting of numerous IT companies into a voluntary scheme to counter “illegal hate speech”. The authors show that, in all these cases, the amount of discretion conferred on platforms is problematic from the standpoint of the protection of individual rights. Furthermore, the paper reviews the parallel copyright regime developed by YouTube, thereby emphasising another collateral effect of the privatisation of regulation and police functions: the extraterritorial application of a national legislation (US copyright, in this case), which de facto turns the platform into a private proxy for the global application of national regulation. The authors conclude by highlighting some of the challenges and viable solutions for the protection of individual rights in an era of increasing privatisation of regulation and police.

In her chapter on “Online Platform Responsibility and Human Rights,” Emily Laidlaw explores the human rights responsibilities of online platforms at the intersection of three areas: human rights, corporate social responsibility (CSR) and regulation. In this conceptual paper, Laidlaw untangles the governance problems in framing platform responsibility, focusing on the uneasy relationship between CSR and law, and identifying the difficulties in articulating what it means for a platform to respect human rights. The chapter highlights the benefits and challenges in considering CSR as part of the relevant regulatory framework, in particular when it comes to the implementation of the UN Guiding Principles on Business and Human Rights. She concludes by identifying three key challenges for the future of platform governance: defining appropriate (and where possible uniform) rules for intermediary liability; clarifying the scope of application of the duty of respect; and developing the linkage between alternative dispute resolution mechanisms and human rights.

In “Regulation by Platforms: the Impact on Fundamental Rights,” Orla Lynskey points out that the relationship between platforms and regulation is two-fold: in addition to the various forms of regulation affecting platforms, the latter also constitute regulators themselves through ‘private ordering’, with notable implications for the economic, social, cultural and political dimensions of our lives. Lynskey explores, in particular, both direct and indirect ways in which platforms influence the extent to which we can exercise our rights, and argues that these implications are exacerbated when platforms are in a position of power, for instance because of the number of individuals that use them. Importantly, she suggests that competition law is not sufficient to constrain platform behaviour, in particular when it comes to addressing ‘data power’ (the power to profile and to exacerbate asymmetries of information) and ‘media power’ (the power to influence opinion formation and autonomous decision-making), which transcend the economic notion of market power. The chapter illustrates this point by reference to two examples (search engines and app stores) and concludes by briefly identifying some of the options and challenges with which policy-makers are confronted in trying to tackle these issues.

In their chapter on “Fundamental Rights and Digital Platforms in the European Union: a suggested way forward,” Joe McNamee and Maryant Fernández Pérez emphasise that it is important to understand which actors we are addressing when referring to “digital platforms”, because it may be counterproductive to categorise players as different as AirBnB, Google News and YouTube, to name but a few examples, as the same type of business. In this sense, the authors usefully suggest five classifications of platforms, based on the relationship with consumers or businesses and on the transactional nature of that relationship. Furthermore, the chapter notes that the standard content guidelines of digital platforms do not necessarily respect the principle of legality or comply with fundamental human rights. In this regard, so-called “community guidelines” often ban content that is lawful and/or protected by European human rights law, often in an arbitrary and unpredictable way. McNamee and Fernández Pérez offer several examples of bad practice to corroborate their thesis and conclude that, worryingly, neither governments nor Internet intermediaries appear to feel morally or legally responsible or accountable for assessing the durability or potential counterproductive effects of the measures that they implement. Importantly, the authors conclude the paper by recommending the essential points that future platform policies should incorporate in order to comply fully with the obligations prescribed by the Charter of Fundamental Rights of the European Union.

Part II – Data Governance

The second part of this volume is dedicated to the analysis of one of the most crucial elements of platform policies and regulations. The protection and use of individuals’ personal data have moved beyond the borders of privacy-focused discussions, growing to encompass an ample range of topics, including competition, property rights and the conflict with the collective right of access to information. The chapters included in this part provide a selection of analyses and some useful food for thought to identify priorities and ponder what regulatory solutions might be elaborated.

Krzysztof Garstka and David Erdos open this second part with an important reflection on the right to be forgotten from search engines, entitled “Hiding in Plain Sight: Right to be Forgotten & Search Engines in the Context of International Data Protection Frameworks.” The authors note that, while in the wake of Google Spain (2014) it has become widely recognised that data protection law within the EU/EEA grants individuals a qualified right to have personal data relating to them de-indexed from search engines, this is far from being a uniquely EU/EEA phenomenon. Through an analysis of five major extra-EU/EEA international data protection instruments, Garstka and Erdos conclude that most of these instruments lend themselves to a reasonable interpretation supporting a Google Spain-like result. In light of the serious threats faced by individuals as a result of the public processing of data relating to them, they argue that the time is ripe for a broader process of international discussion and consensus-building on the “right to be forgotten”. They also suggest that such an exercise cannot be limited to traditionally discussed subjects such as search engines, but should also encompass other actors, including social networking sites, video-sharing platforms and rating websites.

The following chapter turns to the economic dimension of platform regulation, with Rolf Weber’s analysis of the heated (but often misinterpreted) subject of “Data ownership in platform markets.” Weber begins by stressing that, while in the past platform regulations mainly concerned content issues related to accessible information and to provider responsibility, the growing debates about data ownership might also extend the scope of regulatory challenges to the economic analysis of platform markets. Relevant topics are collective ownership and data portability in the legal ownership context, as well as access to data and data sharing in cases of existing factual control over data. Weber opines that these challenges call for a different design of the regulatory framework for online platforms.

The question of data ownership is further explored by Célia Zolynski in “What legal framework for data ownership and access? The Opinion of the French Digital Council.” This chapter takes stock of the existing European debate and puts forward the approach of the French Digital Council (Conseil National du Numérique or CNNum). The chapter is in fact a CNNum Opinion, issued in April 2017 in response to the public consultation launched by the European Commission on online platforms, which explored various legislative and non-legislative options, including the creation of a property right over non-personal data, to encourage the free flow of data. First, the Opinion submits that value creation mostly occurs when data is contextualised and combined with data from other datasets in order to produce new insights. Thus, the issue is not to establish a hypothetical right of data ownership; rather, it is about thinking and designing incentive regimes for data access and exchange between data controllers so as to encourage value creation. Indeed, contrary to a widely-held belief, data ownership does not necessarily facilitate data exchanges; it can actually hinder them. Above all, the Opinion makes the argument that a free flow of data should be envisioned not only between Member States, but also across online platforms. Importantly, the chapter highlights that these new forms of sharing are essential to the development of a European data economy.

Part III – New Roles Calling for New Solutions

This part scrutinises the conundrum created by the blurring of the distinction between private and public spheres in some of the most crucial fields affected by the evolution of digital platforms. By exploring the challenges of regulation, terrorism, online payments and digital labour, this third part highlights the heterogeneity of the roles that platforms are undertaking, while stressing the need for policy solutions able to grasp such diversity and properly address the underlying challenges.

Marc Tessier, Judith Herzog and Lofred Madzou open this part with their chapter on “Regulation at the Age of online platform-based economy: accountability, users’ empowerment and responsiveness.” This paper expresses the views of the French Digital Council (CNNum) on the regulatory challenges associated with the development of the digital platform economy. The piece is part of a more comprehensive reflection on online platform policy issues developed by CNNum since 2013, when the Council was assigned the task of organising a consultation with the French plaintiffs involved in the Google Shopping antitrust investigation and of making recommendations on policy issues posed by the rise of online platforms. Then, in 2014, the then Prime Minister asked the Council to organise a national consultation to elaborate France’s digital strategy. In this context, various market actors and civil society organisations reported their concerns about the lack of transparency regarding online platform activities and the asymmetry of power in their relationships with platform operators. To address these legitimate concerns, several recommendations were made, including the need to develop the technical and policy means to assess the accountability and fairness of online platforms. In 2016, following that recommendation, the government entrusted the Council with the task of overseeing the creation of an agency with these capabilities. In their contribution, Tessier, Herzog and Madzou discuss the challenges brought by the platform economy to our traditional regulatory tools, offering a comprehensive policy framework to address them and outlining the possible grounds for intervention of a potential Agency for Trust in the Digital Platform Economy.

In her chapter on “Countering terrorism and violent extremism online: what role for social media platforms?” Krisztina Huszti-Orban highlights that social media platforms have been facing considerable pressure on the part of States to ‘do more’ in the fight against terrorism and violent extremism online. Because of such pressure, many social media companies have set up individual and joint efforts to spot unlawful content more effectively, thereby becoming the de facto regulators of online content and the gatekeepers of freedom of expression and interlinked rights in cyberspace. However, the author stresses that having corporate entities carry out quasi-executive and quasi-adjudicative tasks, outsourced to them by governments under the banner of self- or co-regulation, raises a series of puzzling questions under human rights law. In this perspective, the chapter outlines the main human rights challenges arising in the European context, regarding EU laws and policies as well as Member State practices. In Europe, the issues of terrorism and violent extremism online have become uppermost on the political agenda and, in this context, the author argues that the lack of internationally agreed definitions of violent extremism and terrorism-related offences raises the risk of excessive measures with potential cross-border human rights implications. Furthermore, Huszti-Orban analyses the problems arising from attempts to broaden the liability of Internet intermediaries in the counter-terrorism context. Crucially, the paper emphasises the need to provide social media platforms with human rights-compliant guidance with regard to conducting content review, the criteria to be used in this respect and the specialist knowledge required to perform these tasks appropriately. The chapter also stresses the role of transparency, accountability and independent oversight, particularly considering the public interest role that social media platforms play by regulating content to prevent and counter terrorism and violent extremism.

In “Revenue Chokepoints: Global Regulation by Payment Intermediaries”, Natasha Tusikov argues that payment intermediaries are becoming go-to regulators for governments and, in a recent development, for multinational corporations intent on protecting their valuable intellectual property rights. More problematically, she stresses that the intermediaries that dominate the online payment industry (namely Visa, MasterCard and PayPal) can enact revenue chokepoints that starve targeted entities of sales revenue or donations, and can thereby undertake many of these regulatory efforts in the absence of legislation and formal legal orders, in what is commonly termed “voluntary industry regulation.” Drawing upon interviews with policy-makers, intermediaries and right-holders, the chapter argues that governments strategically employ the narrative of “voluntary, intermediary-led” regulation in order to distance the state from problematic practices. Further, it contends that payment intermediaries’ regulatory efforts are part of a broader effort to shape Internet governance in ways that benefit largely Western legal, economic and security interests, especially those of the United States. In line with other contributions in this book, the conclusion is that intermediary-facilitated regulation requires serious reflection and must take place within an appropriate regulatory framework, especially when payment providers act as private regulators for private actors’ material benefit.

It is not a coincidence that the last chapter concludes precisely where the discussion began in the opening chapter: with the observation of a widespread delegation of regulatory and police functions to private entities without an adequate complement of remedies available to secure the effectiveness of the rights and obligations of affected individuals. As pointed out by virtually every contributor to this book, this is particularly problematic when platforms are in a position where they effectively decide the meaning, scope and level of protection of fundamental rights. This situation calls for a reflection on the goals of regulatory intervention in a platform society, and on the role that private platforms can and should play in ensuring respect for individual rights.
