Review Platform

Welcome to the Internet Governance Forum (IGF) Review Platform provided by the IGF Secretariat. The platform allows IGF community members to review a document and leave public comments, structured paragraph by paragraph. Users can annotate and debate, turning the document into a conversation.

Report of the UN Secretary-General’s ‎High-level Panel on Digital Cooperation


About the Report

The United Nations Secretary-General, Mr. António Guterres, convened the High-Level Panel on Digital Cooperation (HLPDC) to advance proposals to strengthen cooperation in the digital space among Governments, the private sector, civil society, international organizations, academia, the technical community and other relevant stakeholders.

The 20-member panel, co-chaired by Ms. Melinda Gates and Mr. Jack Ma, was expected to raise ‎awareness about the transformative impact of digital technologies across society and the ‎economy, and contribute to the broader public debate on how to ensure a safe and inclusive ‎digital future for all, taking into account relevant human rights norms.‎
During its work, the panel broadly consulted with various stakeholders, including the IGF ‎community.‎

The Panel submitted the final report to the Secretary-General on 10 June 2019. During the ‎launch, the Secretary-General called for a broad consultation process on the topics covered in ‎the report. ‎

While the consultation launched below focuses mainly on Digital Cooperation and the IGF/IGF Plus, the full report is also available for consultation (here), and it contains many other important topics and recommendations that deserve careful consideration and review.


Digital Cooperation at the IGF 2019 

The IGF 2019 Annual Meeting will feature a main session dedicated to Digital Cooperation, scheduled for 26 November, 10:00-13:00 (CET), in the Main Hall. This session will reflect on the HLPDC Report recommendations, with special focus on Recommendation 5 and the proposed model for global digital cooperation, the Internet Governance Forum Plus (IGF Plus).

In preparation for this session, the IGF community is invited to provide feedback on Recommendation 5 - Global Digital Cooperation and the IGF Plus model. The relevant sections of the Report are extracted further below. Respondents can also email written contributions to [email protected]; these contributions will be posted on the IGF website.

All inputs received will be synthesized into a written output document, which will be posted in late October as an input to the above-mentioned main session at the 14th IGF in Berlin, where both online and on-site participation will be facilitated.

It is very important that this report and the subsequent discussions reach as broad an audience as possible. We need to do all we can to include voices not historically engaged in discussions on Internet Governance or Digital Cooperation. This is a valuable opportunity to reach out and increase engagement from marginalized groups as well as from other disciplines. Concrete and actionable feedback will help all our improvement efforts.

Please log into the IGF website and post your comments by clicking on 'Add new comment at this ‎section'. ‎
 


Received contributions, in addition to the below in-line comments:

  1. CGI.br - Brazilian Internet Steering Committee
  2. Microsoft
  3. Web Foundation
  4. Government of Australia, Department of Foreign Affairs and Trade
  5. Government of France, Ministry of Europe and Foreign Affairs 
  6. République Française, Ministère de l'Europe et des Affaires étrangères
  7. Government of Finland, Ministry for Foreign Affairs
  8. Governance Primer, Brazilian Association of Software Companies (ABES), AR-TARC Certification Authority
  9. Mercari Inc.
  10. RIPE NCC
  11. Government of Denmark, Ministry of Industry, Business and Financial Affairs
  12. Government of Switzerland
  13. Raúl Echeberría 
  14. Instituto de Pesquisa em Direito e Tecnologia do Recife - IP.rec
  15. ICC Basis
  16. Pathways for Prosperity Commission 
  17. Government of Germany
  18. UK Government
  19. European Broadcasting Union
  20. Group of stakeholders gathered around IGF 2019 Best Practice Forums
  21. Media 21 Foundation
  22. United States Council for International Business
  23. The Association for Progressive Communications  (APC)
  24. Internet Society (ISOC)
  25. Juan Alfonso Fernández

See the Consolidated Summary of Received Feedback

CALL FOR FEEDBACK: Section 1

GLOBAL DIGITAL COOPERATION
Recommendation 5A

We recommend that, as a matter of urgency, the UN ‎Secretary-General facilitate an agile and open consultation ‎process to develop updated mechanisms for global digital ‎cooperation, with the options discussed in Chapter 4 as a ‎starting point. We suggest an initial goal of marking the UN's ‎‎75th anniversary in 2020 with a “Global Commitment for ‎Digital Cooperation” to enshrine shared values, principles, ‎understandings and objectives for an improved global digital ‎cooperation architecture. As part of this process, we ‎understand that the UN Secretary-General may appoint a ‎Technology Envoy.

Recommendation 5B
We support a multi-stakeholder “systems” approach for cooperation and regulation that is adaptive, agile, inclusive and fit for purpose for the fast-changing digital age.

Proposed questions for your feedback (suggestions only, all feedback welcome):

  1. How would you improve the existing frameworks for digital cooperation?
  2. What new frameworks or mechanisms, if any, would you recommend?
  3. ‎How might we strengthen the practices/impacts of digital governance mechanisms?‎
  4. ‎How can we properly resource and fund multi-stakeholder processes to ensure:‎
    • Broad, inclusive and adequate participation
    • Ability to implement desired programmes
    • Successful on-going improvement efforts
  5. How do we further enhance our collaboration to advance our shared values, principles, understandings ‎and objectives for digital cooperation? ‎

Enhancing digital cooperation will require both reinvigorating existing multilateral partnerships and potentially creating new mechanisms that involve stakeholders from business, academia, civil society and technical organisations. We should approach questions of governance based on their specific circumstances, choosing among all available tools.

Where possible we can make existing inter-governmental forums and mechanisms fit for the digital age rather than rush to create new mechanisms, though this may involve difficult judgement calls: for example, while the WTO remains a major forum to address issues raised by the rapid growth in cross-border e-commerce, it is now over two decades since it was last able to broker an agreement on the subject. 

Given the speed of change, soft governance mechanisms – values and principles, standards and certification processes – should not wait for agreement on binding solutions. Soft governance mechanisms are also best suited to the multi-stakeholder approach demanded by the digital age: a fact-based, participative process of deliberation and design, including governments, private sector, civil society, diverse users and policy-makers.

The aim of the holistic “systems” approach we recommended is to bring together government bodies such as competition authorities and consumer protection agencies with the private sector, citizens and civil society to enable them to be more agile in responding to issues and evaluating trade-offs as they emerge. Any new governance approaches in digital cooperation should also, wherever possible, look for ways – such as pilot zones, regulatory sandboxes or trial periods – to test efficacy and develop necessary procedures and technology before being more widely applied.213 

We envisage that the process of developing a “Global Commitment for Digital Cooperation” would be inspired by the “World We Want” process, which helped formulate the SDGs. Participants would include governments, the private sector from technology and other industries, SMEs and entrepreneurs, civil society, international organisations including standards and professional organisations, academic scholars and other experts, and government representatives from varied departments at regional, national, municipal and community levels. Multi-stakeholder consultation in each member state and region would allow ideas to bubble up from the bottom. 

The consultations on an updated global digital cooperation architecture could define upfront the criteria to be met by the governance mechanisms to be proposed, such as funding models, modes of operation and means for serving the functions explored in this report. 

More broadly, if appointed, a UN Tech Envoy could identify over-the-horizon concerns that need improved cooperation or governance; provide light-touch coordination of multi-stakeholder actors to address shared concerns; reinforce principles and norms developed in forums with relevant mandates; and work with UN member states, civil society and businesses to support compliance with agreed norms. 

The Envoy’s mandate could also include coordinating the digital technology-related efforts of UN entities; improving communication and collaboration among technology experts within the UN; and advising the UN Secretary-General on new technology issues. Finally, the Envoy could promote partnerships to build and maintain international digital common resources that could be used to help achieve the SDGs.

CALL FOR FEEDBACK: Section 2

A possible architecture for Global Digital Cooperation
“INTERNET GOVERNANCE FORUM PLUS”205

The proposed Internet Governance Forum Plus, or IGF Plus, would build on the existing IGF which was established by the World Summit on the Information Society (Tunis, 2005). The IGF is currently the main global space convened by the UN for addressing internet governance and digital policy issues. The IGF Plus concept would provide additional multi-stakeholder and multilateral legitimacy by being open to all stakeholders and by being institutionally anchored in the UN system.

The IGF Plus would aim to build on the IGF’s strengths, including well-developed infrastructure and procedures, acceptance in stakeholder communities, gender balance in IGF bodies and activities, and a network of 114 national, regional and youth IGFs206. It would add important capacity strengthening and other support activities.

The IGF Plus model aims to address the IGF’s current shortcomings. For example, the lack of actionable outcomes can be addressed by working on policies and norms of direct interest to stakeholder communities. The limited participation of government and business representatives, especially from small and developing countries, can be addressed by introducing discussion tracks in which governments, the private sector and civil society address their specific concerns.

The IGF Plus would comprise an Advisory Group, a Cooperation Accelerator, a Policy Incubator, and an Observatory and Help Desk.

The Advisory Group, based on the IGF’s current Multi-stakeholder Advisory Group, would be responsible for preparing annual meetings, and identifying focus policy issues each year. This would not exclude coverage of other issues but ensure a critical mass of discussion on the selected issues. The Advisory Group could identify moments when emerging discussions in other forums need to be connected, and issues that are not covered by existing organisations or mechanisms.

Building on the current practices of the IGF, the Advisory Group could consist of members appointed for three years by the UN Secretary-General on the advice of member states and stakeholder groups, ensuring gender, age, stakeholder and geographical balance.

Potential questions for your feedback ‎(suggestions only, all feedback welcome):‎

  1. What criteria, in your view, should the proposed Advisory Group fulfil that are not yet taken into account by the Multi-stakeholder Advisory Group in the present IGF setting?
  2. How would you address concerns that these proposals may go beyond the original IGF governance structure and mandates?
  3. How might the current Multi-stakeholder Advisory Group be strengthened?‎
  4. What changes (if any) should be considered to the role and responsibilities of the Multi-stakeholder Advisory Group/Advisory Group?‎
  5. How do we ensure the Multi-stakeholder Advisory Group/Advisory Group has appropriate ‎funding and support?‎
     

The Cooperation Accelerator would accelerate issue-centred cooperation across a wide range of institutions, organisations and processes; identify points of convergence among existing IGF coalitions, and issues around which new coalitions need to be established; convene stakeholder-specific coalitions to address the concerns of groups such as governments, businesses, civil society, parliamentarians, elderly people, young people, philanthropy, the media, and women; and facilitate convergences among debates in major digital and policy events at the UN and beyond.

The Cooperation Accelerator could consist of members selected for their multi-disciplinary experience and expertise. Membership would include civil society, businesses and governments and representation from major digital events such as the Web Summit, Mobile World Congress, Lift:Lab, Shift, LaWeb, and Telecom World.

Potential questions for your feedback ‎(suggestions only, all feedback welcome):‎

  1. ‎How would you envision the work of the Cooperation Accelerator in practice?‎
  2. How do we ensure the Cooperation Accelerator has appropriate funding and support?‎
  3. How could existing intersessional activities from across the IGF community ‎support/participate in a Cooperation Accelerator?  For example, Best Practice Forums ‎‎(BPFs), National, Regional, Sub-regional and Youth IGF Initiatives (NRIs), or Dynamic ‎Coalitions (DCs)?‎

The Policy Incubator would incubate policies and norms for public discussion and adoption. In response to requests to look at a perceived regulatory gap, it would examine if existing norms and regulations could fill the gap and, if not, form a policy group consisting of interested stakeholders to make proposals to governments and other decision making bodies. It would monitor policies and norms through feedback from the bodies that adopt and implement them.207

The Policy Incubator could provide the currently missing link between dialogue platforms identifying regulatory gaps and existing decision making bodies by maintaining momentum in discussions without making legally binding decisions. It should have a flexible and dynamic composition involving all stakeholders concerned by a specific policy issue.

Potential questions for your feedback (suggestions only, all feedback welcome):‎

  1. ‎How should the Policy Incubator be organized, locally and globally?‎
  2. How could existing intersessional activities from across the IGF community ‎support/participate in the Policy Incubator?  For example, Best Practice Forums (BPFs), ‎National, Regional, Sub-regional and Youth IGF Initiatives (NRIs), or Dynamic Coalitions ‎‎(DCs)?‎
  3. ‎How do we ensure the Policy Incubator has appropriate funding and support?‎

The Observatory and Help Desk would direct requests for help on digital policy (such as dealing with crisis situations, drafting legislation, or advising on policy) to appropriate entities, including the Help Desks described in Recommendation 2; coordinate capacity development activities provided by other organisations; collect and share best practices; and provide an overview of digital policy issues, including monitoring trends, identifying emerging issues and providing data on digital policy.

Potential questions for your feedback (suggestions only, all feedback welcome):‎

  1. ‎How do you see the implementation of the Observatory and Help Desk? ‎
  2. How do we connect the local and global levels through this proposed mechanism?‎
  3. How could existing intersessional activities from across the IGF community ‎support/participate in the Observatory and Help Desk?  For example, Best Practice ‎Forums (BPFs), National, Regional, Sub-regional and Youth IGF Initiatives (NRIs), or ‎Dynamic Coalitions (DCs)?‎
  4. How do we ensure the Observatory and Help Desk has appropriate funding and support?‎
  5. How would you address the concern that these proposals go beyond the original mandate and add an operational workstream to the IGF, with significant resource implications?

The IGF Trust Fund would be a dedicated fund for the IGF Plus. All stakeholders – including governments, international organisations, businesses and the tech sector – would be encouraged to contribute. The IGF Plus Secretariat should be linked to the Office of the United Nations Secretary-General to reflect its interdisciplinary and system-wide approach.

Potential questions for your feedback (suggestions only, all feedback welcome):‎

  1. Do you believe the IGF Plus model is implementable, given that the IGF Trust Fund is based on voluntary donations?
  2. What can we do to ensure the IGF Plus has appropriate funding and support? The IGF Trust Fund has historically lacked sufficient funding to cover even its current (and basic) budget.

Foreword

THE AGE OF DIGITAL INTERDEPENDENCE ‎ 
REPORT OF THE UN SECRETARY-GENERAL'S HIGH-LEVEL PANEL ON DIGITAL COOPERATION

 

We live in an era of increasing interdependence and accelerating change, much of it driven by technological advances such as low-‎cost computing, the internet and mobile connectivity. Moments of change present new opportunities to solve old problems. The ‎efficiency, innovation, and speed of a digitally connected world can expand what is possible for everyone – including those who ‎historically have been marginalised. ‎

At the same time, humanity faces significant new challenges. Modern technologies can be used to erode security and violate privacy. ‎We are also beginning to see complex impacts on education systems and labor markets. ‎

We believe the opportunities for human progress in the digital age ultimately outweigh the challenges – if we join together in a spirit of ‎cooperation and inclusiveness. ‎

We urgently need to lay the foundations of an inclusive digital economy and society for all. We need to focus our energies on policies and investments that will enable people to use technology to build better lives and a more peaceful, trusting world. Making this vision a reality will require all stakeholders to find new ways of working together. That is why the Secretary-General appointed this Panel, and it is what we have sought to do with this Report.

We are grateful to each member of the Panel, the Secretariat, and the many groups and individuals we consulted; though the views ‎expressed were not always in agreement, they were always conveyed with respect and in the spirit of collaboration. ‎

No one knows how technology will evolve, but we do know that our path forward must be built through cooperation and illuminated by shared human values. We hope this Report will contribute to improved understanding of the opportunities and challenges ahead, so that together we can shape a more inclusive and sustainable future for all.

Melinda Gates and Jack Ma
Co-Chairs (signed)

Executive Summary

Digital technologies are rapidly transforming society, simultaneously allowing for unprecedented advances in the human condition and ‎giving rise to profound new challenges. Growing opportunities created by the application of digital technologies are paralleled by stark ‎abuses and unintended consequences. Digital dividends co-exist with digital divides. And, as technological change has accelerated, ‎the mechanisms for cooperation and governance of this landscape have failed to keep pace. Divergent approaches and ad hoc ‎responses threaten to fragment the interconnectedness that defines the digital age, leading to competing standards and approaches, ‎lessening trust and discouraging cooperation. ‎

Sensing the urgency of the moment, in July 2018 the Secretary-General of the United Nations (UN) appointed this Panel to consider the ‎question of “digital cooperation” – the ways we work together to address the social, ethical, legal and economic impact of digital ‎technologies in order to maximise their benefits and minimise their harm. In particular, the Secretary-General asked us to consider how ‎digital cooperation can contribute to the achievement of the Sustainable Development Goals (SDGs) – the ambitious agenda to protect ‎people and the planet endorsed by 193 UN member states in 2015. He also asked us to consider models of digital cooperation to ‎advance the debate surrounding governance in the digital sphere. ‎

In our consultations – both internally and with other stakeholders – it quickly became clear that our dynamic digital world urgently needs ‎improved digital cooperation and that we live in an age of digital interdependence. Such cooperation must be grounded in common ‎human values – such as inclusiveness, respect, human-centredness, human rights, international law, transparency and sustainability. ‎In periods of rapid change and uncertainty such as today, these shared values must be a common light which helps guide us. ‎

Effective digital cooperation requires that multilateralism, despite current strains, be strengthened. It also requires that multilateralism ‎be complemented by multi-stakeholderism – cooperation that involves not only governments but a far more diverse spectrum of other ‎stakeholders such as civil society, academics, technologists and the private sector. We need to bring far more diverse voices to the ‎table, particularly from developing countries and traditionally marginalised groups, such as women, youth, indigenous people, rural ‎populations and older people. ‎

After an introduction which highlights the urgency of improved digital cooperation and invites readers to commit to a Declaration of ‎Digital Interdependence, our report focuses on three broad sets of interlocking issues, each of which is discussed in one subsequent ‎chapter. As a panel, we strove for consensus, but we did not always agree. We have noted areas where our views differed and tried to ‎give a balanced summary of our debates and perspectives. While there was not unanimity of opinion among the Panel members ‎regarding all of the recommendations, the Panel does endorse the full report in the spirit of promoting digital cooperation. ‎

Chapter 2, Leaving No One Behind, argues that digital technologies will only help progress towards the full sweep of the SDGs if we think more broadly than the important issue of access to the internet and digital technologies. Access is a necessary, but insufficient, step forward. To capture the power of digital technologies we need to cooperate on the broader ecosystems that enable digital technologies to be used in an inclusive manner. This will require policy frameworks that directly support economic and social inclusion, special efforts to bring traditionally marginalised groups to the fore, important investments in both human capital and infrastructure, smart regulatory environments, and significant efforts to assist workers facing disruption from technology’s impact on their livelihoods. This chapter also addresses financial inclusion – including mobile money, digital identification and e-commerce – affordable and meaningful access to the internet, digital public goods, the future of education, and the need for regional and global economic policy cooperation.

Chapter 3, Individuals, Societies and Digital Technologies, underscores the fact that universal human rights apply equally online as ‎offline, but that there is an urgent need to examine how time-honoured human rights frameworks and conventions should guide digital ‎cooperation and digital technology. We need society-wide conversations about the boundaries, norms and shared aspirations for the ‎uses of digital technologies, including complicated issues like privacy, human agency and security in order to achieve inclusive and ‎equitable outcomes. This chapter also discusses the right to privacy, the need for clear human accountability for autonomous systems, ‎and calls for strengthening efforts to develop and implement global norms on cybersecurity. ‎

To take significant steps toward the vision identified in Chapters 2 and 3, we feel the following priority actions deserve immediate ‎attention: ‎

AN INCLUSIVE DIGITAL ECONOMY AND SOCIETY ‎

1A: We recommend that by 2030, every adult should have affordable access to digital networks, as well as digitally-enabled ‎financial and health services, as a means to make a substantial contribution to achieving the SDGs. Provision of these services ‎should guard against abuse by building on emerging principles and best practices, one example of which is providing the ability to ‎opt in and opt out, and by encouraging informed public discourse. ‎

1B: We recommend that a broad, multi-stakeholder alliance, involving the UN, create a platform for sharing digital public goods, ‎engaging talent and pooling data sets, in a manner that respects privacy, in areas related to attaining the SDGs. ‎

1C: We call on the private sector, civil society, national governments, multilateral banks and the UN to adopt specific policies to support full digital inclusion and digital equality for women and traditionally marginalised groups. International organisations such as the World Bank and the UN should strengthen research and promote action on the barriers women and marginalised groups face to digital inclusion and digital equality.

1D: We believe that a set of metrics for digital inclusiveness should be urgently agreed, measured worldwide and detailed with sex ‎disaggregated data in the annual reports of institutions such as the UN, the International Monetary Fund, the World Bank, other ‎multilateral development banks and the OECD. From this, strategies and plans of action could be developed.‎

HUMAN AND INSTITUTIONAL CAPACITY

 2: We recommend the establishment of regional and global digital help desks to help governments, civil society and the private ‎sector to understand digital issues and develop capacity to steer cooperation related to social and economic impacts of digital ‎technologies. ‎

HUMAN RIGHTS AND HUMAN AGENCY

3A: Given that human rights apply fully in the digital world, we urge the UN Secretary-General to institute an agencies-wide review of ‎how existing international human rights accords and standards apply to new and emerging digital technologies. Civil society, ‎governments, the private sector and the public should be invited to submit their views on how to apply existing human rights ‎instruments in the digital age in a proactive and transparent process. ‎ ‎

3B: In the face of growing threats to human rights and safety, including those of children, we call on social media enterprises to work ‎with governments, international and local civil society organisations and human rights experts around the world to fully understand ‎and respond to concerns about existing or potential human rights violations. ‎

‎3C: We believe that autonomous intelligent systems should be designed in ways that enable their decisions to be explained and ‎humans to be accountable for their use. Audits and certification schemes should monitor compliance of artificial intelligence (AI) ‎systems with engineering and ethical standards, which should be developed using multi-stakeholder and multilateral approaches. ‎Life and death decisions should not be delegated to machines. We call for enhanced digital cooperation with multiple stakeholders ‎to think through the design and application of these standards and principles such as transparency and non-bias in autonomous ‎intelligent systems in different social settings. 

TRUST, SECURITY AND STABILITY ‎

4. We recommend the development of a Global Commitment on Digital Trust and Security to shape a shared vision, identify ‎attributes of digital stability, elucidate and strengthen the implementation of norms for responsible uses of technology, and propose ‎priorities for action. ‎
If we are to deliver on the promise of digital technologies for the SDGs, including the above-mentioned priority action areas, and avoid the risks of their misuse, we need purposeful digital cooperation arrangements. To this end, in Chapter 4, Mechanisms for Global Digital Cooperation, we analyse gaps in the current mechanisms of global digital cooperation, identify the functions of global digital cooperation needed to address them, and outline three sets of modalities on how to improve our global digital cooperation architecture – which build on existing structures and arrangements in ways consistent with our shared values and principles.

Given the wide spectrum of issues, there will of necessity be many forms of digital cooperation; some may be led by the private sector or civil society rather than government or international organisations. Moreover, special efforts are needed to ensure inclusive participation by women and other traditionally marginalised groups in all new or updated methods of global digital cooperation.

The three proposed digital cooperation architectures presented are intended to ignite focused, agile and open multi-stakeholder consultations in order to quickly develop updated digital governance mechanisms. The 75th Anniversary of the UN in 2020 presents an opportunity for an early harvest in the form of a “Global Commitment for Digital Cooperation” enshrining goals, principles, and priority actions.

The chapter also discusses the role of the UN, both in adapting to the digital age and in contributing to improved global digital cooperation.

We feel the following steps are warranted to update digital governance:

GLOBAL DIGITAL COOPERATION ‎

5A: We recommend that, as a matter of urgency, the UN Secretary- General facilitate an agile and open consultation process to ‎develop updated mechanisms for global digital cooperation, with the options discussed in Chapter 4 as a starting point. We suggest ‎an initial goal of marking the UN's 75th anniversary in 2020 with a “Global Commitment for Digital Cooperation” to enshrine shared ‎values, principles, understandings and objectives for an improved global digital cooperation architecture. As part of this process, we ‎understand that the UN Secretary-General may appoint a Technology Envoy. ‎

5B: We support a multi-stakeholder “systems” approach for cooperation and regulation that is adaptive, agile, inclusive and fit for ‎purpose for the fast-changing digital age. ‎

We hope this report and its recommendations will form part of the building blocks of an inclusive and interdependent digital world, with ‎a fit-for-purpose new governance architecture. We believe in a future in which improved digital cooperation can support the ‎achievement of the SDGs, reduce inequalities, bring people together, enhance international peace and security, and promote ‎economic opportunity and environmental sustainability.‎


 

1. Introduction: Interdependence ‎in the Digital Age

Digital technologies are rapidly transforming societies and economies, simultaneously advancing the human condition and creating ‎profound and unprecedented challenges. How well are we managing the complex impacts on our individual and collective lives? How ‎can we use digital technologies to contribute to the achievement of the Sustainable Development Goals? What are current best ‎practices and gaps in digital cooperation? What new ways of working together are needed, and who should be involved? ‎

These are among the questions the UN Secretary-General asked us to consider.1 We approached our task with both humility and ‎urgency. The challenges are multi-faceted and rapidly evolving. The potential that could be unlocked by improved digital cooperation is ‎enormous – and so are the perils if humanity fails to create more effective and inclusive ways for citizens, civil society, governments, ‎academia and the private sector to work together. ‎

‎“Digital cooperation” is used in this report to describe ways of working together to address the societal, ethical, legal and ‎economic impacts of digital technologies in order to maximise benefits to society and minimise harms. ‎

As digital technologies have come to touch almost every aspect of modern life, a patchwork of cooperation and governance ‎mechanisms has gradually emerged to generate norms, standards, policies and protocols in this arena. In 2014, the United Nations ‎identified 680 distinct mechanisms related to digital cooperation,2 and the number has since risen to over a thousand.3 In many ‎technical areas, these mechanisms work well. But they struggle to keep up with the unprecedented pace and increasingly wide range ‎of change. ‎

While digital technologies have been developing for many years, in the last decade their cumulative impacts have become so deep, ‎wide-ranging and fast-changing as to herald the dawn of a new age. The cost of massive computing power has fallen.4 Billions of ‎people and devices have come online.5 Digital content now crosses borders in vast volumes, with constant shifts in what is produced ‎and how and where it is used. ‎

The spread of digital technologies has already improved the world in myriad ways. It has, for example, revolutionised the ability to ‎communicate with others and to share and access knowledge. Individuals from long-neglected populations have used mobile money ‎and other financial services for the first time, and started businesses that reach both domestic and global markets.6 If we are to achieve ‎the flagship ambition of the Sustainable Development Goals, to end extreme poverty by 2030, improved digital cooperation will need to ‎play a vital role. ‎

But digital technologies have also brought new and very serious concerns. Around the world, many people are increasingly – and rightly – worried that our growing reliance on digital technologies has created new ways for individuals, companies and governments to intentionally cause harm or to act irresponsibly. Virtually every day brings new stories about hatred being spread on social media, invasion of privacy by businesses and governments, cyber-attacks using weaponised digital technologies or states violating the rights of political opponents.7

And many people have been left out of the benefits of digital technology. Digital dividends co-exist with digital divides. Well more than half the world’s population still either lacks affordable access to the internet or is using only a fraction of its potential despite being connected.8 People who lack safe and affordable access to digital technologies are overwhelmingly from groups who are already marginalised: women, elderly people and those with disabilities; indigenous groups; and those who live in poor, remote or rural areas.9 Many existing inequalities – in wealth, opportunity, education, and health – are being widened further.

The speed and scale of change is increasing – and the agility, responsiveness and scope of cooperation and governance mechanisms ‎needs rapidly to improve. We cannot afford to wait any longer to develop better ways to cooperate, collaborate and reach consensus. ‎We urgently need new forms of digital cooperation to ensure that digital technologies are built on a foundation of respect for human ‎rights and provide meaningful opportunity for all people and nations. ‎

OUR DIGITAL INTERDEPENDENCE 

If we want to use digital technologies to improve life for everyone, we will have to go about it consciously and deliberately – with civil society, companies and governments recognising their interdependence and working together. The unique benefits and profound risks arising from the dramatic increase in computing power and interconnectivity in the digital age reinforce our underlying interdependence. Globally and locally, we are increasingly linked in an ever-expanding digital web, just as we are increasingly linked, and mutually dependent, in the spheres of economics, public well-being and the environment. 

The critical need to improve digital cooperation comes at a time when many of the mechanisms of multilateral cooperation developed since World War II are under unprecedented duress. Although far from perfect, these avenues for cooperation between national governments underpinned one of the most peaceful and productive periods in human history. Their erosion is dangerous: it will make it harder to capitalise on the benefits of digital technologies and mitigate the hazards. 

Reinvigorating multilateralism alone will not be sufficient. Effective digital cooperation requires that multilateralism be complemented by multi-stakeholderism – cooperation that involves governments and a diverse spectrum of other stakeholders such as civil society, technologists, academics, and the private sector (ranging from small enterprises to large technology companies). 

While only governments can make laws, all these stakeholders are needed to contribute to effective governance by cooperating to assess the complex and dynamic impacts of digital technologies and developing shared norms, standards and practices. We need to bring far more diverse voices to the table, particularly from developing countries and traditionally marginalised populations. Important digital issues have often been decided behind closed doors, without the involvement of those who are most affected by the decisions. 

Managing digital technologies to maximise benefits to society and minimise harms requires a far-sighted and wide-ranging view of the complex ways in which they interact with societal, environmental, ethical, legal and economic systems. The Panel is enormously grateful to the many individuals, institutions and others who provided us with their insights and expertise as we sought to better understand how to navigate this new landscape. We endeavoured to consult as broadly as possible in the time available.

Drawing on many thoughtful reflections,10 we identified the following nine values that we believe should shape the development of digital cooperation:

  • Inclusiveness – Leaving no one behind, so that we can maximise equality of opportunity, access and outcomes to achieve the Sustainable Development Goals;
  • Respect – Embodying respect for human rights and human dignity, diversity, the safety and security of personal data and devices, and national and international law; 
  • Human-centredness – Maximising benefits to humans, and ensuring that humans remain responsible for decisions; 
  • Human flourishing – Promoting sustainable economic growth, the social good and opportunities for self-realisation; 
  • Transparency – Promoting open access to information and operations; 
  • Collaboration – Upholding open standards and interoperability to facilitate collaboration; 
  • Accessibility – Developing affordable, simple and reliable devices and services for as diverse a range of users as possible; 
  • Sustainability – Furthering the aim of a zero-carbon, zero-waste economy that does not compromise the ability of future generations to meet their own needs; and, 
  • Harmony – The use by governments and businesses of digital technologies in ways that earn the trust of peers, partners and people, and that avoid exploiting or exacerbating divides and conflicts. 

ABOUT THIS REPORT 

As a panel, we strove for consensus, but we did not always agree. We have noted areas where our views differed and tried to give a balanced summary of our debates and perspectives. While there was not unanimity of opinion among the Panel members regarding all of the recommendations, the Panel does endorse the full report in the spirit of promoting digital cooperation. 

The next three chapters highlight issues that emerged from the Panel’s deliberations, setting out the backdrop for the recommendations in the final chapter. Our report does not aim to be comprehensive – some important topics are touched briefly or not at all – but to focus on areas where we felt digital cooperation could make the greatest difference. These chapters deal broadly with the areas of economics, society and governance, while noting that many issues – such as capacity, infrastructure and data – are relevant to all. 

Chapter 2, Leaving No One Behind, assesses the contribution of digital technologies to the Sustainable Development Goals. It addresses issues including financial inclusion, affordable and meaningful access to the internet, the future of education and jobs and the need for regional and global economic policy cooperation.

Chapter 3, Individuals, Societies and Digital Technologies, discusses the application of human rights to the digital age, the need to keep human rights and human agency at the centre of technological development, and the imperative to improve cooperation on digital security and trust. 

Chapter 4, Mechanisms for Global Digital Cooperation, identifies gaps in current mechanisms of global digital cooperation, the functions of digital cooperation and principles digital cooperation should aim to follow, provides three options for potential new global digital cooperation architectures, and discusses the role of the United Nations in promoting digital cooperation. 

Drawing on the analysis in the preceding chapters, Chapter 5 shares and explains our Recommendations for shaping our common digital future. 

As members of the Panel, we brought a wide range of experience of working in government, business, academic institutions, philanthropy and civil society organisations – but we engaged in our task as equal citizens of a digitalising world, appreciating the vital role of all stakeholders and the need for humility and cooperation. 

In this spirit, we invite all stakeholders to commit to a Declaration of Digital Interdependence: 

DECLARATION OF DIGITAL INTERDEPENDENCE

Humanity is still in the foothills of the digital age.

The peaks are yet uncharted, and their promise still untold. But the risks of losing our foothold are apparent: dangerous adventurism among states, exploitative behaviour by companies, regulation that stifles innovation and trade, and an unforgivable failure to realise vast potential for advancing human development. 

How we manage the opportunities and risks of rapid technological change will profoundly impact our future and the future of the planet. 

We believe that our aspirations and vulnerabilities are deeply interconnected and interdependent; that no one individual, institution, corporation or government alone can or should manage digital developments; and that it is essential that we work through our differences in order to shape our common digital future. 

We declare our commitment to building on our shared values and collaborating in new ways to realise a vision of humanity’s future in which affordable and accessible digital technologies are used to enable economic growth and social opportunity, lessen inequality, enhance peace and security, promote environmental sustainability, preserve human agency, advance human rights and meet human needs. 
 


 

2. Leaving No One Behind

The Sustainable Development Goals represent humanity’s shared commitment to achieve ambitious global gains for people and the planet by 2030. Of the SDGs’ 17 goals and 169 targets, not a single one is detached from the implications and potential of digital technology. From ending extreme poverty, to promoting inclusive economic growth and decent work, to reducing maternal mortality, to achieving universal literacy and numeracy and doubling the productivity of small farmers – progress is intertwined with the use of digital technology and new forms of digital cooperation.11

However, technological solutions are not enough. Diverse political systems, history, culture, resource constraints and other factors which have marginalised far too many people, are – and will continue to be – of critical importance. The application of technology must be aligned with investments in human capital, infrastructure and environmental protection. Widening access to digital technologies is necessary, but not sufficient. Access needs to be affordable to be meaningful. Special efforts are needed to remove barriers for marginalised groups who often face a double bind: they already face discrimination in its many analogue forms and are least likely to be connected. Pre-existing forms of marginalisation should not be perpetuated or aggravated in the digital sphere. 

Success will require a commitment by all involved stakeholders to hard work and learning over many years about how to broaden opportunity and build truly inclusive economies and societies. We believe that there is significant room for digital technology and improved cooperation to contribute to these efforts.

2.1 CREATING AN INCLUSIVE DIGITAL ECONOMY

With mobile internet and increasingly powerful and lower cost computing, every person can theoretically connect to anyone else, obtain and generate knowledge, or engage in commercial or social activity.12 For organisations of whatever size, likewise, there are fewer technical barriers to global economic interaction at scale. Digital technology can support economic inclusion by breaking down barriers to information, broadening access, and lowering the level of skills needed to participate in the economy.13 

Of course, this does not mean that everyone and everything should be connected or digitised. Nor does it mean that the social and economic consequences of digital technology are necessarily inclusive or beneficial. Digital technology can both provide opportunity and accentuate inequality. 

The challenge for policy makers, and other stakeholders seeking to contribute to progress toward the SDGs, is how to cooperate to leverage technology to create a more inclusive society. As we emphasise in this chapter and our recommendations, we believe digital cooperation must steer how digital technologies are developed and deployed to create meaningful economic opportunities for all. 

Developing an inclusive digital economy will require sustained and coherent effort from many stakeholders across all walks of life. National policy frameworks and international agreements need to find ways to promote financial inclusion, innovation, investment and growth while protecting people and the environment, keeping competition fair and the tax base sustainable. 

FINANCIAL INCLUSION: MOBILE MONEY, DIGITAL IDENTIFICATION AND E-COMMERCE 

The ability of digital technologies to empower traditionally marginalised people and drive inclusive economic development is illustrated by financial inclusion.14 Mobile money, digital identification and e-commerce have given many more people the ability to save and transact securely without needing cash, insure against risks, borrow to grow their businesses and reach new markets. 

According to the World Bank’s Global Findex 2017 report, 69 percent of adults have an account with a financial institution, up by seven percentage points since 2014. That means over half a billion adults gained access to financial tools in three years. But many are still left behind, and there is scope for further rapid progress: a billion people who still have no access to financial services already have a mobile phone. 15 

Mobile money – the ability to send, receive and store money using a mobile phone – has brought financial services to people who have long been ignored by traditional banks.16 It reaches remote regions without physical bank branches. It can also help women access financial services – an important aspect of equality, given that in many countries women are less likely than men to have a bank account. 

New business models enable people who have no physical collateral to demonstrate to lenders that they are creditworthy – for example, by allowing the lenders to see phone location data and online transaction and payment history.18 Mobile finance matters in wealthy countries, too, where low-income and historically marginalised groups generally both pay higher interest rates and receive a narrower range of financial services.19 

Well-known examples of mobile money include Kenya’s M-Pesa and China’s Alipay. Launched in 2007 by Vodafone, M-Pesa received support from diverse stakeholders who all have a role to play in digital cooperation. A private sector innovation with donor funding, it originally addressed microfinance clients in partnership with civil society – then citizens found new uses, including low cost person-to-person transfers.20 Alipay has made millions of small business loans to online merchants, more than half of whom are aged under 30. 21 

What works in one country may not work in another.22 Rather than try to replicate specific successes, digital cooperation should aim to highlight best practices, standards and principles that can create conditions for local innovations to emerge and grow based on local issues, needs and cultural values. India, for example, has added 300 million bank accounts in three years as new business models have been built on the India Stack, a set of government-managed online standards in areas including online payments and digital identity. 23 

Across many areas of financial inclusion, fragmented systems and lack of cooperation within and across states make it difficult to fully realise the benefits of digital technology. Common standards for cross-border interoperability of mobile money could unleash much more innovation: discussions to develop them should be a priority for digital cooperation. 24

Digital identification (ID) can support inclusive economic development more broadly. More than a billion people today lack an official way to prove their identity: this means they may not be able to vote, open a bank account, transact online, own land, start a business, connect to utilities or access public services such as health care or education.25 The consulting firm McKinsey & Company studied seven large countries and concluded that digital ID systems could add between 3 and 13% to their gross domestic product.26 

However, digital ID systems require caution. A digital ID can help unlock new opportunities but can also introduce new risks and challenges. Digital IDs can be used to undermine human rights – for example, by enabling civil society to be targeted, or selected groups to be excluded from social benefits.27 Data breaches can invade the privacy of millions. To minimise risks, countries should introduce a digital ID system only after a broad national conversation and allow for voluntary enrolment and viable alternatives for those who opt out. They should establish ways to monitor use and redress misuse. Countries could cooperate to share experience and best practices in this regard.

The World Bank Identification for Development (ID4D) initiative has identified ten Principles of Digital Identification covering inclusion, design and governance “to improve development outcomes while maintaining trust and privacy”.28 This initiative draws on the experiences of countries that have already implemented digital ID systems. Among the most successful is Estonia, where citizens can use their digital ID to access over 2,000 online government services. Building on the positive and cautionary lessons of early adopters, the Modular Open Source Identity Platform (MOSIP) is developing open source code countries can adapt to design their own systems.29 

Recent years have also seen a dramatic increase in e-commerce, including by individuals and small businesses selling products and services using online platforms. When e-commerce platforms provide technological services to small entrepreneurs, rather than compete with them, they can level the playing field: it is relatively cheap and simple to start a business online, and entrepreneurs can reach markets far beyond their local area. 

Inclusive e-commerce, which promotes participation of small firms in the digital economy, is particularly important for the SDGs as it can create new opportunities for traditionally excluded groups. In China, for example, an estimated 10 million small and medium-sized enterprises (SMEs) sell on the Taobao platform; nearly half of the entrepreneurs on the platform are women, and more than 160,000 are people with disabilities.30 E-commerce can support rural economic inclusion as clusters of villages can develop market niches in certain types of products: in China, an estimated 3,000 “Taobao villages” have annual online sales of more than one million dollars.31 A growing e-commerce sector also creates demand and employment in related businesses including logistics, software, customised manufacturing and content production.

E-commerce shows how digital technologies with supportive policies can contribute to inclusive economic development – it has done best in countries where it is relatively easy to set up a business, and where traditionally neglected populations are able to get online.32 As with inclusive mobile finance, as more individuals and small enterprises buy and sell internationally, there is also a need to create more supportive rules for cross-border e-commerce. 

As e-commerce grows, there are also concerns about its relation to local and international markets, as discussed below in Section 2.3. 

HARNESSING DATA AND ‘DIGITAL PUBLIC GOODS’ FOR DEVELOPMENT 

The immense power and value of data in the modern economy can and must be harnessed to meet the SDGs, but this will require new models of collaboration. 

The Panel discussed potential pooling of data in areas such as health, agriculture and the environment to enable scientists and thought leaders to use data and artificial intelligence to better understand issues and find new ways to make progress on the SDGs. Such data commons would require criteria for establishing relevance to the SDGs, standards for interoperability, rules on access and safeguards to ensure privacy and security. 

We also need to generate more data relevant to the SDGs. In a world which has seen exponential growth of data in recent years,33 many people remain invisible. For example, the 2018 UN SDG Report notes that only 73 percent of children under the age of 5 have had their births registered.34 The World Health Organization (WHO) estimated in 2014 that two-thirds of deaths are not registered.35 Only 11 countries in sub-Saharan Africa have data on poverty from surveys conducted after 2015. Most countries do not collect sex-disaggregated data on internet access. 36

Anonymised data – information that is rendered anonymous in such a way that the data subject is not or no longer identifiable – about progress toward the SDGs is generally less sensitive and controversial than the use of personal data of the kind companies such as Facebook, Twitter or Google may collect to drive their business models, or facial and gait data that could be used for surveillance.37 However, personal data can also serve development goals, if handled with proper oversight to ensure its security and privacy. 

For example, individual health data is extremely sensitive – but many people’s health data, taken together, can allow researchers to map disease outbreaks, compare the effectiveness of treatments and improve understanding of conditions. Aggregated data from individual patient cases was crucial to containing the Ebola outbreak in West Africa.38 Private and public sector healthcare providers around the world are now using various forms of electronic medical records. These help individual patients by making it easier to personalise health services, but the public health benefits require these records to be interoperable. 

There is scope to launch collaborative projects to test the interoperability of data, standards and safeguards across the globe. The World Health Assembly’s consideration of a global strategy for digital health in 2020 presents an opportunity to launch such projects, which could initially be aimed at global health challenges such as Alzheimer’s and hypertension.39 

Improved digital cooperation on a data-driven approach to public health has the potential to lower costs, build new partnerships among hospitals, technology companies, insurance providers and research institutes and support the shift from treating diseases to improving wellness. Appropriate safeguards are needed to ensure the focus remains on improving health care outcomes. With testing, experience and necessary protective measures as well as guidelines for the responsible use of data, similar cooperation could emerge in many other fields related to the SDGs, from education to urban planning to agriculture. 

Data collaboration for climate change, agriculture and the environment 

The Platform for Big Data in Agriculture was launched in 2017 by the Colombia-based International Center for Tropical Agriculture after consultation with public, private and non-profit stakeholders. By providing ways to share data on agriculture, it seeks to transform research and innovation in food security, sustainability and climate change.40 

More broadly, cheaper sensors generating more data – and better AI algorithms to analyse it – can further improve our understanding of how complex environmental systems interact and the likely impacts of climate change.41 

Digital technologies can also be used to reduce waste. The methods of complex coordination that have lowered costs by enabling supply chains to touch every corner of the planet can also help to meet higher environmental standards and design devices with repair, reuse, upgrading and recycling in mind. For this, new forms of digital cooperation and data sharing would be needed among suppliers, customers and competitors.

 

Many types of digital technologies and content – from data to apps, data visualisation tools to educational curricula – could accelerate achievement of the SDGs. When they are freely and openly available, with minimal restrictions on how they can be distributed, adapted and reused, we can think of them as “digital public goods”.42 In economics, a “public good” is something which anyone can use without charge and without preventing others from using it.43 Digital content and technologies lend themselves to being public goods in this respect.

Combinations of digital public goods can create “common rails” for innovation of inclusive digital products and services. The India Stack is an example of how a unified, multi-layered software platform with clear standards, provided by public entities, can give government agencies and entrepreneurs the technological building blocks to improve service delivery and develop new business models which promote economic inclusion.44 

There is currently no “go to” place for discovering, engaging with, building, and investing in digital public goods. Along the lines of the MOSIP model – and with the participation of civil society and other stakeholders – such a platform could create great value by enabling the sharing and adaptation of digital technologies and content across countries in a wider range of areas relevant to achieving the SDGs. 

EXPANDING ACCESS TO DIGITAL INFRASTRUCTURE 

The proportion of people online in the developing world expanded rapidly in the last decade – from 14.5% in 2008 to 45.3% in 2018 – but progress has recently slowed.45 Internet access in many parts of the world is still too slow and expensive to be effectively used.46 The cost of mobile data as a percentage of income increased in nearly half of the countries covered by a recent study.47 Without affordable access, advances in digital technologies disproportionately benefit those already connected, contributing to greater inequality. 

The people being left behind are typically those who can least afford it. Growth in new internet connections is slowest in the lowest-income countries.48 Rural areas continue to lag, as companies prioritise improving access in more densely populated areas which will offer a better return on investment.49 

The slowing progress in bringing more people online points to the urgent need for new approaches to building digital infrastructure, a complex task that requires better coordination among many stakeholders: governments, international organisations, communications service providers, makers of hardware and software, providers of digital services and content, civil society and the various groups that oversee protocols and standards on which digital networks operate.50 As these actors cooperate, it also represents an important moment to re-emphasise and address the complex social, cultural and economic factors that continue to marginalise many groups. 

Some countries, such as Indonesia, have set targets that treat internet connectivity as a national priority.57 While finance alone will not achieve universal internet access, it can help if invested wisely: some countries are generating financing from fees on existing communication network providers to help expand systems to those who are currently uncovered, for example through Universal Service Funds.58

Advance market commitments deserve further consideration as a possible way to incentivise investment, as they have in other areas such as vaccine development. They involve a commitment to pay for a future product or service once it exists; the commitment in this case could come from consortia of governments, international organisations or others interested in enabling specific uses in areas such as health or education.59

Many local groups are also working on small-scale community solutions: for example, a rural community of 6,000 people in Mankosi, South Africa, built a solar-powered “mesh network” in collaboration with a university.60 Such community projects are often not just about getting online but building skills and empowering locals to use technology for development and entrepreneurship.61

Digital cooperation should increase coordination among the public and private entities working in this space and help tailor approaches to economic, cultural and geographic contexts. Governments have an important role to play in creating a policy framework to enable private sector enterprise, innovation, and cooperative, bottom-up networks.

SUPPORTING MARGINALISED ‎GROUPS AND MEASURING ‎INCLUSIVENESS

Even where getting online is possible and affordable, extra efforts are needed to empower groups that are discriminated against and excluded. For example, digital technologies are often not easily accessible for elderly people or those with disabilities;62 indigenous people have little digital content in their native languages;63 and globally an estimated 12 percent more men use the internet than women.64

Responses need to address deep and complex social and cultural factors, such as those contributing to the gender gap in access to and usage of mobile phones, smart phones and digital services – gaps which persist in many cases despite increases in women’s income and education levels.65 Social marketing could play a role in changing attitudes, as it has in many other areas with backing from donors, governments and civil society organisations.66 Initiatives to improve access for marginalised populations should start with consultation involving these groups in the design, deployment and evaluation of such efforts.

Efforts to improve digital inclusion would be greatly helped if there were a clear and agreed set of metrics to monitor it. Initial work – notably by the Organisation for Economic Co-operation and Development (OECD), the Group of Twenty (G20), ITU, and the Economist Intelligence Unit – needs to be broadened to reflect the wide variety of global contexts and, importantly, needs greater buy-in and participation from developing countries.67 The Panel urges international organisations, civil society and governments to develop action plans around reliable and consistent measures of digital inclusion with sex disaggregated data. Discussion about measurements and definitions would also focus attention on the issues underlying inclusion.

2.2 RETHINKING HOW WE WORK AND LEARN

Many previous waves of technological change have shifted what skills are demanded in the labour market, making some jobs obsolete while creating new ones. But the current wave of change may be the most rapid and unpredictable in history. How to prepare people to earn a livelihood in the digital age – and how to protect those struggling to do so – is a critical question for digital cooperation for governments and other stakeholders who aim to reduce inequality and achieve the SDGs.

At this stage, there appears to be limited value in attempting to predict whether robots and artificial intelligence will create more jobs than they eliminate, although technology historically has been a net job creator.68 Many studies attempt to predict the impact on the labour market, but they are far from reaching a consensus.69 The only certainty is that workers have entered a period of vast and growing uncertainty – and that this necessitates new mechanisms of cooperation. 

REFORMING EDUCATION SYSTEMS AND SUPPORTING LIFELONG LEARNING

Modern schools were developed in response to the industrial revolution, and they may ultimately need fundamental reform to be fit for the digital age – but it is currently difficult to see more than the broad contours of the changes that are likely to be needed.

Countries are still in early stages of learning how to use digital tools in education and how to prepare students for digital economies and societies. These will be ongoing challenges for governments and other stakeholders. Some countries are now exposing even very young children to science and robotics. Alongside such broader digital literacy efforts, it may be even more important to focus from an early age on developing children’s “soft skills”, such as social and emotional intelligence, creativity, collaboration and critical thinking. One widely referenced study concludes that occupations requiring such soft skills are less likely to be automated.70

Teaching about specific technologies should always be based on strong foundational knowledge in science and math, as this is less likely to become obsolete. At a degree level, science, technology, engineering and mathematics (STEM) curricula need to borrow from the humanities and social sciences, and vice versa: STEM students need to be encouraged to think about the ethical and social implications of their disciplines, while humanities and social science students need a basic understanding of data science.71 More informal approaches to learning may be needed to prepare students for working in cross-disciplinary teams, and where such informal approaches already exist in the developing world they should be fully appreciated for their value.

As the boundaries increasingly blur between ‘work’ and ‘learning’, the need to enable and incentivise lifelong learning was emphasised in many of the written contributions the Panel received.

Lifelong learning should be affordable, portable and accessible to all. Responsibility for lifelong learning should be shared between workers themselves, governments, education institutions, the informal sector and industry: digital cooperation mechanisms should bring these groups together for regular debates on what skills are required and how training can be delivered. Workers should have flexibility to explore how best to opt into or design their own approach to lifelong learning.

There are emerging examples of government efforts to use social security systems and public-private partnerships to incentivise and empower workers to learn new skills and plan for a changing labour market. Among those drawn to the Panel’s attention were efforts by the International Trade Union Confederation in Ghana and Rwanda,72 France’s Compte Personnel de Formation, Scotland’s Individual Training Account, Finland’s transformation of work and the labour market sub-group under its national AI programme, and Singapore’s Skills Framework for Information and Communication Technology (ICT).

However, reskilling cannot be the only answer to inequality in the labour market – especially as the workers most able to learn new skills will be those who start with the advantage of comparatively higher levels of education.73

PROTECTING WORKERS, NOT JOBS

New business models are fuelling the rise of an informal or “gig” economy, in which workers typically have flexibility but not job or income security.74 In industrialised countries, as more and more people work unpredictable hours as freelancers, independent contractors, agency workers or workers on internet platforms, there is an urgent need to rethink labour codes developed decades ago when factory jobs were the norm.75

Promising initiatives include Germany’s Crowdsourcing Code of Conduct, which sets out guidelines on fair payment, reasonable timing and data protection for internet platform workers, and employs an ombudsman to mediate disputes; and Belgium’s Titre-Services and France’s Chèque Emploi Service Universel, which offer tax incentives for people engaging casual workers to participate in a voucher scheme that enables the workers to qualify for formal labour rights. There are also examples of digital technologies enabling new ways for workers to engage in collective bargaining.76

While the gig economy tends to make work less formal in industrialised countries, in the developing world the majority of people have long worked in the informal sector.77 For these workers, gig economy arrangements may be more formal and transparent, and – with appropriate cooperation measures with technology firms – easier for governments to oversee.78 The challenge, as with industrialised countries, is to uphold labour rights while still allowing flexibility and innovation.

In all national contexts, protecting workers and promoting job creation in the digital age will require smart regulations and investments, as well as taxation and social protection policies which support workers as they seek to transition to new opportunities.

 

2.3 REGIONAL AND GLOBAL ECONOMIC POLICY COOPERATION

Taxation, trade, consumer protection and competition are among the areas of economic policy that require new thinking in the digital age: they are the ‘guard rails’ of the digital economy. Increased cooperation could lead to effective national approaches and experience informing regional and global multilateral cooperation arrangements.

Currently, however, there is a lack of regional and global standards in these areas, and multilateral cooperation is generally not working well. This may inflict far higher costs than is widely recognised. To take one relatively simple example, regional and global standards in areas such as interoperability of mobile money systems and best practices for digital ID would have considerable benefits. To discourage misuse, such standards and practices would also need to include clear accountability.

International trade rules need to be updated for the digital age. Technologies and trade have changed dramatically since 1998, for example, when the World Trade Organisation (WTO) last brokered an agreement on e-commerce.79 In January 2019, 76 WTO member states announced the initiation of plurilateral negotiations on trade-related aspects of e-commerce.80 Any agreement will need to address concerns of a diverse range of countries, including lower-income countries in which the e-commerce sector is less developed.81

Some argue that restrictions on data flows should be treated like any other trade barrier and generally minimised.83 However, views differ sharply, and decisions on national legislation are complicated by concerns about privacy and security – discussed in the next chapter. Countries that require companies to store and process data within their national borders argue that it promotes local innovation and investment in technology infrastructure and makes it easier to tax global corporations.84 Others argue against such approaches on the basis that they are protectionist or represent an effort to obtain access to the data.

There is growing recognition that taxation is an area where digital technology has moved faster than policy frameworks. In particular, technology firms may operate business models – such as multi-sided platforms or “freemium” models – which offer free services to some individual users and earn revenue from other users, merchants or advertisers.85 A company may provide services to millions of people in a country without establishing a legal entity or paying tax there. This has become a source of growing popular resentment.86

International digital cooperation could assist countries to develop appropriate tax policies. The G20 and OECD’s Base Erosion and Profit Shifting project is currently seeking consensus on issues such as how a global company’s tax receipts should be allocated to different jurisdictions based on its business activities.87 An agreement in this area could offer countries a source of revenue that they could, for example, use to invest in human capital or lower the tax burden on small businesses.

Some countries are now taking unilateral action. Countries such as Italy, France and the United Kingdom (UK) have announced the intent to impose taxes on digital sales rather than profits, at least on an interim basis.88 Other countries, such as Thailand, have amended tax rules relating to offshore digital services.89 The lack of cooperation and coordination among different regulators is creating a patchwork of different national rules and regulations which makes trade and e-commerce more difficult. Ensuring that such emerging tax policies do not have unintended consequences on small enterprises or poor populations deserves special attention.

An international perspective is also needed to tackle concerns about competition, which have grown as large firms have established leading positions in many digital services. This is due in part to network effects: the more users a platform already has, the more attractive it becomes for new users and advertisers.

Recent discussions have proposed three main approaches.90 First, a relatively laissez-faire approach that favours self-regulation or minimal regulation. Proponents argue that government regulation is often poorly conceived and counterproductive, harming innovation and economic dynamism. Critics counter that an overly hands-off approach has led to a concentration of market power in large firms and abuses of privacy that have sparked public and government concern.

A second approach calls for more active state intervention to set rules for digital companies. Experience in industrial policy shows that such an approach can either help or hinder depending on many factors, including regulators’ willingness and ability to engage varied stakeholders in a smart discourse to balance competing interests effectively.91

A third approach suggests regulating digital businesses as public utilities, analogous to railroads or electricity companies. The analogy is not an exact one, however, as physical infrastructure is easier to segment and harder to replicate than digital infrastructure and lends itself more easily to hosting competition among service providers. There is also dispute about how contestable digital markets are – that is, how vulnerable the leading firms are to new competitors. Moreover, traditional competition law operates far more slowly than changes in technology.

 

Finding the right approach in these areas will require not only different countries, but also regulators in different government agencies, to work together. Models for how agencies can come together for peer-to-peer information sharing include the International Conference of Data Protection & Privacy Commissioners and the International Competition Network.92

Alongside existing models, new models of governance and cooperation may be needed. They will need to be multi-stakeholder, including the private sector, civil society and users. Their debates should be transparent and open to citizens, as modelled by Mexico’s National Institute for Transparency, Access to Information and Personal Data Protection.93

Where possible, new regulatory approaches should be tested on a small scale before being rolled out widely – through, for example, pilot zones, regulatory sandboxes or trial periods. In Recommendation 5B, we stress the overall need for a “systems” approach to cooperation and regulation that is multi-stakeholder, adaptive, agile and inclusive.

However, regulators need to have sufficient resources and expertise to engage in such an approach – and the Panel’s consultations highlighted concern that many regulators and legislators have insufficient understanding of complex digital issues to develop and implement policies, engage with companies developing technologies and explain issues to the public.94 This increases the risk of regulations having unintended consequences.

There are several existing examples of initiatives to develop the capacity and understanding of public officials, from countries such as Israel,95 Singapore96 and the United Arab Emirates (UAE).97 But much more could be done, and the Panel’s Recommendation 2 envisages “digital help desks” which would broaden opportunities for officials and regulators to develop the skills needed for the smart governance that will be required to create inclusive and positive outcomes for all.

3. Individuals, Societies and Digital Technologies

The ultimate purpose of digital technology should always be to improve human welfare. Beyond the socio-economic aspects discussed in the previous chapter, digital technologies have proved that they can connect individuals across cultural and geographic barriers, increasing understanding and potentially helping societies to become more peaceful and cohesive.

However, this is only part of the story. There are also many examples of digital technologies being used to violate rights, undermine privacy, polarise societies and incite violence.

The questions raised are new, complex and pressing. What are the responsibilities of social media companies, governments and individual users? Who is accountable when data can move across the world in an instant? How can varied stakeholders, in nations with diverse cultural and historical traditions, cooperate to ensure that digital technologies do not weaken human rights but strengthen them? 

3.1 HUMAN RIGHTS AND HUMAN AGENCY

Many of the most important documents that codify human rights were written before the age of digital interdependence. They include the Universal Declaration of Human Rights; the International Covenant on Economic, Social and Cultural Rights and the International Covenant on Civil and Political Rights; the Convention on the Elimination of All Forms of Discrimination against Women; and the Convention on the Rights of the Child.

The rights these treaties and conventions codify apply in full in the digital age – and often with fresh urgency.

Digital technologies are widely used to advocate for, defend and exercise human rights – but also to violate them. Social media, for example, has provided powerful new ways to exercise the rights to free expression and association, and to document rights violations. It is also used to violate rights by spreading lies that incite hatred and foment violence, often at terrible speed and with the cloak of anonymity.

The most outrageous cases make the headlines. The live streaming of mass shootings in New Zealand.98 Incitement of violence against an ethnic minority in Myanmar.99 The #gamergate scandal, in which women working in video games were threatened with rape.100 The suicides of a British teenager who had viewed self-harm content on social media101 and an Indian man bullied after posting videos of himself dressed as a woman.102

But these are manifestations of a problem that runs wide and deep: one survey of UK adult internet users found that 40 percent of 16-24 year-olds have reported some form of harmful online content, with examples ranging from racism to harassment and child abuse.103 Children are at particular risk: almost a third of under-18s report having recently been exposed to “violent or hateful contact or behaviour online”.104 Elderly people are also more prone to online fraud and misinformation.

Governments have increasingly sought to cut off social media in febrile situations – such as after a terrorist attack – when the risks of rapidly spreading misinformation are especially high. But denying access to the internet can also be part of a sustained government policy that itself violates citizens’ rights, including by depriving people of access to information. Across the globe, governments directed 188 separate internet shutdowns in 2018, up from 108 in 2017.105

PROTECTING HUMAN RIGHTS IN THE DIGITAL AGE

Universal human rights apply equally online as offline – freedom of expression and assembly, for example, are no less important in cyberspace than in the town square. That said, in many cases it is far from obvious how human rights laws and treaties drafted in a pre-digital era should be applied in the digital age.

There is an urgent need to examine how time-honoured human rights frameworks and conventions – and the obligations that flow from those commitments – can guide actions and policies relating to digital cooperation and digital technology. The Panel’s Recommendation 3A urges the UN Secretary-General to begin a process that invites views from all stakeholders on how human rights can be meaningfully applied to ensure that no gaps in protection are caused by new and emerging digital technologies.

Such a process could draw inspiration from many recent national and global efforts to apply human rights for the digital age.106 Illustrative examples include:

  • India’s Supreme Court has issued a judgement defining what the right to privacy means in the digital context.107
  • Nigeria’s draft Digital Rights and Freedom Bill tries to apply international human rights law to national digital realities.108
  • The Global Compact and UNICEF have developed guidance on how businesses should approach children’s rights in the digital age.109
  • UNESCO has used its Rights, Openness, Access and Multi-stakeholder governance (ROAM) framework to discuss AI’s implications for rights including freedom of expression, privacy, equality and participation in public life.110
  • The Council of Europe has developed recommendations and guidelines, and the European Court of Human Rights has produced case law, interpreting the European Convention on Human Rights in the digital realm.111

We must collectively ensure that advances in technology are not used to erode human rights or avoid accountability. Human rights defenders should not be targeted for their use of digital media.112 International mechanisms for human rights reporting by states should better incorporate the digital dimension.

In the digital age, the role of the private sector in human rights is becoming increasingly pronounced. As digital technologies and digital services reach scale so quickly, decisions taken by private companies are increasingly affecting millions of people across national borders.

The roles of government and business are described in the 2011 UN Guiding Principles on Business and Human Rights. Though not binding, they were unanimously endorsed by the Human Rights Council and the UN General Assembly. They affirm that while states have the duty to protect rights and provide remedies, businesses also have a responsibility to respect human rights, evaluate risk and assess the human rights impact of their actions.113

There is now a critical need for clearer guidance about what should be expected on human rights from private companies as they develop and deploy digital technologies. The need is especially pressing for social media companies, which is why our Recommendation 3B calls for them to put in place procedures, staff and better ways of working with civil society and human rights defenders to prevent or quickly redress violations.

We heard from one interviewee that companies can struggle to understand local context quickly enough to respond effectively in fast-developing conflict situations and may welcome UN or other expert insight in helping them assess concerns being raised by local actors. One potential venue for information sharing is the UN Forum on Business and Human Rights, through which the Office of the High Commissioner for Human Rights in Geneva hosts regular discussions among the private sector and civil society.114

Civil society organisations would like to go beyond information sharing and use such forums to identify patterns of violations and hold the private sector to account.115 Governments also are becoming less willing to accept a hands-off regulatory approach: in the UK, for example, legislators are exploring how existing legal principles such as “duty of care” could be applied to social media firms.116

As any new technology is developed, we should ask how it might inadvertently create new ways of violating rights – especially of people who are already often marginalised or discriminated against. Women, for example, experience higher levels of online harassment than men.117 The development of personal care robots is raising questions about the rights of elderly people to dignity, privacy and agency.118

The rights of children need especially acute attention. Children go online at ever younger ages, and under-18s make up one-third of all internet users.119 They are most vulnerable to online bullying and sexual exploitation. Digital technologies should promote the best interests of children and respect their agency to articulate their needs, in accordance with the Convention on the Rights of the Child.

Online services and apps used by children should be subject to strict design and data consent standards. Notable examples include the American Children’s Online Privacy Protection Rule of 2013 and the draft Age Appropriate Design Code announced by the UK Information Commissioner in 2019, which defines standards for apps, games and many other digital services even if they are not intended for children.120

HUMAN DIGNITY, AGENCY AND CHOICE

We are delegating more and more decisions to intelligent systems, from how to get to work to what to eat for dinner.121 This can improve our lives, by freeing up time for activities we find more important. But it is also forcing us to rethink our understandings of human dignity and agency, as algorithms are increasingly sophisticated at manipulating our choices – for example, to keep our attention glued to a screen.122

It is also becoming apparent that ‘intelligent’ systems can reinforce discrimination. Many algorithms have been shown to reflect the biases of their creators.123 This is just one reason why employment in the technology sector needs to be more diverse – as noted in Recommendation 1C, which calls for improving gender equality.124 Gaps in the data on which algorithms are trained can likewise automate existing patterns of discrimination, as machine learning systems are only as good as the data that is fed to them.

Often the discrimination is too subtle to notice, but the real-life consequences can be profound when AI systems are used to make decisions such as who is eligible for home loans or public services such as health care.125 The harm caused can be complicated to redress.126 A growing number of initiatives, such as the Institute of Electrical and Electronics Engineers (IEEE)’s Global Initiative on Ethics of Autonomous and Intelligent Systems, are seeking to define how developers of artificial intelligence should address these and similar problems.127

Other initiatives are looking at questions of human responsibility and legal accountability – a complex and rapidly-changing area.128 Legal systems assume that decisions can be traced back to people. Autonomous intelligent systems raise the danger that humans could evade responsibility for decisions made or actions taken by technology they designed, trained, adapted or deployed.129 In any given case, legal liability might ultimately rest with the people who developed the technology, the people who chose the data on which to train the technology, and/or the people who chose to deploy the technology in a given situation.

These questions come into sharpest focus with lethal autonomous weapons systems – machines that can autonomously select targets and kill. UN Secretary-General António Guterres has called for a ban on machines with the power and discretion to take lives without human involvement, a position which this Panel supports.130

The Panel supports, as stated in Recommendation 3C, the emerging global consensus that autonomous intelligent systems be designed so that their decisions can be explained, and humans remain accountable. These systems demand the highest standards of ethics and engineering. They should be used with extreme caution to make decisions affecting people’s social or economic opportunities or rights, and individuals should have meaningful opportunity to appeal. Life and death decisions should not be delegated to machines.

THE RIGHT TO PRIVACY

The right to privacy131 has become particularly contentious as digital technologies have given governments and private companies vast new possibilities for surveillance, tracking and monitoring, some of which are invasive of privacy.132 As with so many areas of digital technology, there needs to be a society-wide conversation, based on informed consent, about the boundaries and norms for such uses of digital technology and AI. Surveillance, tracking or monitoring by governments or businesses should not violate international human rights law.
 

It is helpful to articulate what we mean by “privacy” and “security”. We define “privacy” as being about an individual’s right to decide who is allowed to see and use their personal information. We define “security” as being about protecting data, on servers and in communication via digital networks.

Notions and expectations of privacy also differ across cultures and societies. How should an individual’s right to privacy be balanced against the interest of businesses in accessing data to improve services and government interest in accessing data for legitimate public purposes related to law enforcement and national security?133

Societies around the world debate these questions heatedly when hard cases come to light, such as Apple’s 2016 refusal of the United States Federal Bureau of Investigation (FBI)’s request to assist in unlocking an iPhone of the suspect in a shooting case.134 Different governments are taking different approaches: some are forcing technology companies to provide technical means of access, sometimes referred to as “backdoors”, so the state can access personal data.135 

Complications arise when data is located in another country: in ‎‎2013, Microsoft refused an FBI request to provide a suspect’s ‎emails that were stored on a server in Ireland. The United States ‎of America (USA) has since passed a law obliging American ‎companies to comply with warrants to provide data of American ‎citizens even if it is stored abroad.136 It enables other ‎governments to separately negotiate agreements to access their ‎citizens’ data stored by American companies in the USA. ‎

There currently seems to be little alternative to handling cross-‎border law enforcement requests through a complex and slow-‎moving patchwork of bilateral agreements – the attitudes of ‎people and governments around the world differ widely, and the ‎decision-making role of global technology companies is ‎evolving. Nonetheless, it is possible that regional and ‎multilateral arrangements could develop over time. ‎

For individuals, what companies can do with their personal data ‎is not just a question of legality but practical understanding – to ‎manage permissions for every single organisation we interact ‎with would be incredibly time consuming and confusing. How to ‎give people greater meaningful control over their personal data ‎is an important question for digital cooperation. ‎

Alongside the right to privacy is the important question of who ‎realises the economic value that can be derived from personal ‎data. Consumers typically have little awareness of how their ‎personal information is sold or otherwise used to generate ‎economic benefit. ‎

There are emerging ideas to make data transactions more explicit and share the value extracted from personal data with the individuals who provide it. These could include business models which give users greater privacy by default: promising examples include the web browser Brave and the search engine DuckDuckGo.137 They could include new legal structures: the UK138 and India139 are among countries exploring the idea of a third-party ‘data fiduciary’ whom users can authorise to manage their personal data on their behalf.

3.2 TRUST AND SOCIAL COHESION

The world is suffering from a “trust deficit disorder”, in the words of the UN Secretary-General addressing the UN General Assembly in 2018.140 Trust among nations and in multilateral processes has weakened as states focus more on strategic competition than common interests and behave more aggressively. Building trust, and underpinning it with clear and agreed standards, is central to the success of digital cooperation.

Digital technologies have enabled some new interactions that promote trust, notably by verifying people’s identities and allowing others to rate them.141 Although not reliable in all instances, such systems have enabled many entrepreneurs on e-commerce platforms to win the trust of consumers, and given many people on sharing platforms the confidence to invite strangers into their cars or homes.

In other ways, digital technologies are eroding trust. Lies can now spread more easily, including through algorithms which generate and promote misinformation, sowing discord and undermining confidence in political processes.142 The use of artificial intelligence to produce “deep fakes” – audio and visual content that convincingly mimics real humans – further complicates the task of telling truth from misinformation.143

Violations of privacy and security are undermining people’s trust in governments and companies. Trust between states is challenged by new ways to conduct espionage, manipulate public opinion and infiltrate critical infrastructure. While academia has traditionally nurtured international cooperation in artificial intelligence, governments are incentivised to secrecy by awareness that future breakthroughs could dramatically shift the balance of power.144

The trust deficit might in part be tackled by new technologies, such as training algorithms to identify and take down misinformation. But such solutions will pose their own issues: could we trust the accuracy and impartiality of the algorithms? Ultimately, trust needs to be built through clear standards and agreements based on mutual self-interest and values and with wide participation among all stakeholders, and mechanisms to impose costs for violations.

 

How can trust be promoted in the digital age?

The problem of trust came up repeatedly in written contributions to the Panel. Microsoft’s contribution stressed that an atmosphere of trust incentivises the invention of inclusive new technologies. As Latin American human rights group Derechos Digitales put it, “all participants in processes of digital cooperation must be able to share and work together freely, confident in the reliability and honesty of their counterparts”. But how can trust be promoted? We received a large number of ideas:
  • Articulating values and principles that govern technology development and use.
  • Being transparent about decision-making that impacts other stakeholders, known vulnerabilities in software, and data breaches.
  • Governments inviting participation from companies and civil society in discussions on regulation.
  • Making real and visible efforts to obtain consent and protect data, including “security-by-design” and “privacy-by-design” initiatives.149
  • Accepting oversight from a trusted third party: for the media, this could be an organisation that fact-checks sources; for technology companies, this could be external audits of design, deployment and internal audit processes; for governments, this could be reviews by human rights forums.
  • Understanding the incentive structures that erode trust, and finding ways to change them: for example, requiring or pressuring social media firms to refuse to run adverts which contain disinformation, de-monetise content that contains disinformation, and clearly label sponsors of political adverts.150

Finally, digital cooperation itself can be a source of trust. In the Cold War, small pools of shared interest – non-proliferation or regional stability – allowed competitors to work together and paved the way for transparency and confidence-building measures that helped build a modicum of trust.151 Analogously, getting multiple stakeholders into a habit of cooperating on issues such as standard-setting and interoperability, addressing risks and social harm and collaborative application of digital technologies to achieve the SDGs, could allow trust to be built up gradually.

All citizens can play a role in building societal resilience against ‎the misuse of digital technology. We all need to deepen our ‎understanding of the political, social, cultural and economic ‎impacts of digital technologies and what it means to use them ‎responsibly. We encourage nations to consider how educational ‎systems can train students to thoughtfully consider the sources ‎and credibility of information. ‎

There are many encouraging instances of digital cooperation being used to build individual capacities that will collectively make it harder for irresponsible use of digital technologies to erode societal trust.145 Examples drawn to the Panel’s attention by written submissions and interviews include:

  • The 5Rights Foundation and British Telecom developed an initiative to help children understand how the apps and games they use make money, including techniques to keep their attention for longer.146
  • The Cisco Networking Academy and United Nations Volunteers are training youth in Asia and Latin America to explore how digital technologies can enable them to become agents of social change in their communities.147
  • The Digital Empowerment Foundation is working in India with WhatsApp and community leaders to stop the spread of misinformation on social media.148

3.3 SECURITY

Global security and stability are increasingly dependent on digital security and stability. The scope of threats is growing. Cyber capabilities are developing, becoming more targeted, more impactful on physical systems and more insidious at undermining societal trust.

“Cyber attacks” and “massive data fraud and threat” have ranked for two years in a row among the top five global risks listed by the World Economic Forum (WEF).152 More than 80% of the experts consulted in the WEF’s latest annual survey expected the risks of “cyber-attacks: theft of data/money” and “cyber-attacks: disruption of operations and infrastructure” to increase yearly.153

Three recent examples illustrate the concern. In 2016, hackers stole $81 million from the Bangladesh Central Bank by manipulating the SWIFT global payments network.154 In 2017, malware called “NotPetya” caused widespread havoc – shipping firm Maersk alone lost an estimated $250 million.155 In 2018, by one estimate, cybercriminals stole $1.5 trillion – an amount comparable to the national income of Spain.156

Accurate figures are hard to come by as victims may prefer to keep quiet. But often it is only publicity about a major incident that prompts the necessary investments in security. Short-term incentives generally prioritise launching new products over making systems more robust.157

The range of targets for cyber-attacks is increasing quickly. New internet users typically have low awareness of digital hygiene.158 Already over half of attacks are directed at “things” on the Internet of Things, which connects everything from smart TVs to baby monitors to thermostats.159 Fast 5G networks will further integrate the internet with physical infrastructure, likely creating new vulnerabilities.160

The potential for cyber-attacks to take down critical infrastructure has been clear since Stuxnet was found to have penetrated an Iranian nuclear facility in 2010.161 More recently concerns have widened to the potential risks and impact of misinformation campaigns and online efforts by foreign governments to influence democratic elections, including the 2016 Brexit vote and the American presidential election.162
 

Other existing initiatives on digital security

The Paris Call for Trust and Security in Cyberspace is a multi-stakeholder initiative launched in November 2018 and joined by 65 countries, 334 companies – including Microsoft, Facebook, Google and IBM – and 138 universities and non-profit organisations. It calls for measures including coordinated disclosure of technical vulnerabilities. Many leading technology powers – such as the USA, Russia, China, Israel and India – have not signed up.173

The Global Commission on Stability in Cyberspace, an independent multi-stakeholder platform, is developing proposals for norms and policies to enhance international security and stability in cyberspace. The commission has introduced a series of norms, including calls for agreement not to attack critical infrastructure and non-interference in elections, and is currently discussing accountability and the future of cybersecurity.

The Global Conference on Cyberspace, also known as the ‘London Process’, is a series of ad hoc multi-stakeholder conferences held so far in London (2011), Budapest (2012), Seoul (2013), The Hague (2015) and New Delhi (2017). The Global Forum on Cyber Expertise, established after the 2015 Conference, is a platform for identifying best practices and providing support to states, the private sector and organisations in developing cybersecurity frameworks, policies and skills.

The Geneva Dialogue on Responsible Behaviour in Cyberspace provides another forum for multi-stakeholder consultation.

The Cybersecurity Tech Accord and the Charter of Trust are examples of industry-led voluntary initiatives to identify guiding principles for trust and security, strengthen security of supply chains and improve training of employees in cybersecurity.174

Compared to physical attacks, it can be much harder to prove from which jurisdiction a cyber-attack originated. This makes it difficult to attribute responsibility or use mechanisms to cooperate on law enforcement.163

Perceptions of digital vulnerability and unfair cyber advantage are contributing to trade, investment and strategic tensions.164 Numerous countries have set up cyber commands within their militaries.165 Nearly 60 states are known to be pursuing offensive capabilities.166 This increases the risks for all as cyber weapons, once released, can be used to attack others – including the original developer of the weapon.167

As artificial intelligence advances, the tactics and tools of cyber-attacks will become more sophisticated and difficult to predict – including more able to pursue highly customised objectives, and to adapt in real time.168

Many governments and companies are aware of the need to strengthen digital cooperation by agreeing on and implementing international norms for responsible behaviour, and important progress has been made especially in meetings of groups of governmental experts at the UN.169

The UN Groups of Governmental Experts (GGE) on Developments in the Field of Information and Telecommunications in the Context of International Security have been set up by resolutions of the UN General Assembly at regular intervals since 1998. Decisions by the GGE are made on the basis of consensus, including the decision on the final report.170 The 2013 GGE on Developments in the Field of Information and Telecommunications in the Context of International Security agreed in its report that international law applies to cyberspace (see text box).171 This view was reaffirmed by the subsequent 2015 GGE, which also proposed eleven voluntary and non-binding norms for states.172 The UN General Assembly welcomed the 2015 report and called on member states to be guided by it in their use of information and communications technologies. This marks an important step forward in building cooperation and agreement in this increasingly salient arena.

DIGITAL COOPERATION ON CYBERSECURITY

The pace of cyber-attacks is quickening. Currently fragmented efforts need rapidly to coalesce into a comprehensive set of common principles to align action and facilitate cooperation that raises the costs for malicious actors.175

Private sector involvement is especially important to evolving a common approach to tracing cyber-attacks: assessing evidence, context, attenuating circumstances and damage. We are encouraged that the 2019 UN GGE176 and the new Open-Ended Working Group (OEWG),177 which deal with the behaviour of states and international law, while primarily forums for inter-governmental consultations, do provide for consultations with stakeholders other than governments, mainly regional organisations.

In our Recommendation 4, we call for a multi-stakeholder Global Commitment on Digital Trust and Security to bolster these existing efforts. It could provide support in the implementation of agreed norms, rules and principles of responsible behaviour and present a shared vision on digital trust and security. It could also propose priorities for further action on capacity development for governments and other stakeholders and international cooperation.

The Global Commitment should coordinate with ongoing and emerging efforts to implement norms in practice by assisting victims of cyber-attacks and assessing impact. It may not yet be feasible to envisage a single global forum to house such capabilities, but there would be value in strengthening cooperation among existing initiatives.

Another priority should be to deepen cooperation and information sharing among the experts who comprise national governments’ Computer Emergency Response Teams (CERTs). Examples to build on here include the Oman-ITU Arab Regional Cybersecurity Centre for 22 Arab League countries,178 the EU’s Computer Security Incident Response Team (CSIRT)s Network,179 and Israel’s Cyber Net, in which public and private teams work together. Collaborative platforms hosted by neutral third parties such as the Forum of Incident Response and Security Teams (FIRST) can help build trust and the exchange of best practices and tools. 

Digital cooperation among the private sector, governments and international organisations should seek to improve transparency and quality in the development of software, components and devices.180 While many best practices and standards exist, they often address only narrow parts of a vast and diverse universe that ranges from talking toys to industrial control systems.181 Gaps exist in awareness and application. Beyond encouraging a broader focus on security among developers, digital cooperation should address the critical need to train more experts specifically in cybersecurity:182 by one estimate, the shortfall will be 3.5 million by 2021.183

4. Mechanisms for Global Digital Cooperation

No single approach to digital cooperation can address the diverse spectrum of issues raised in this report – and as technologies evolve, so will the issues, and the most effective ways to cooperate. We should approach digital cooperation using all available tools, making dynamic choices about the best approach based on specific circumstances. In some cases, cooperation may be initiated and led by the private sector or civil society, and in some cases by governments or international organisations.184

Most current mechanisms of digital cooperation are primarily local, national or regional. However, digital interdependence also necessitates that we strengthen global digital cooperation mechanisms to address challenges and provide opportunities for all.

This chapter identifies gaps and challenges in current arrangements for global digital cooperation and summarises the functions any future cooperation architecture could perform and what principles could underpin them. It then outlines three possible options for digital cooperation architectures and concludes with a discussion of the role the United Nations can play. There was not unanimity of opinion among the Panel members about the shape, function and operations of these different models. Instead, they are presented as useful alternatives to explore in the spirit of digital cooperation and as an input for the broad consultations we call for in Recommendation 5A.

Ultimately, the success of any proposed mechanisms and architecture will depend on the spirit in which they are developed and implemented. All governments, the private sector and civil society organisations need to recognise how much they stand to gain from a spirit of collaboration to drive progress toward the achievement of the SDGs and to raise the costs of using digital technologies irresponsibly. The alternative is further erosion of the trust and stability we need to build an inclusive and prosperous digital future.

4.1 CHALLENGES AND GAPS

The international community is not starting from scratch. It can build on established mechanisms for digital cooperation involving governments, technical bodies, civil society and other organisations. Some are based in national and international law,185 others in “soft law” – norms, guidelines, codes of conduct and other self-regulatory measures adopted by business and tech communities.186 Some are loosely organised, others highly institutionalised.187 Some focus on setting agendas and standards, others on monitoring and coordination.188 Many could evolve to become better fit for purpose.

The need for better digital cooperation lies not so much in managing the technical nuts and bolts of how technologies function – mechanisms here are generally well-established – as in addressing the unprecedented economic, societal and ethical challenges they cause. How to tell, in context, when conversations on social media cross the line into inciting violence? How to limit the use of cyber weapons possessed not only by states but by non-state actors and individuals?189 How to adapt trade systems designed for a different era to newly emerging forms of online commerce?

The 2003 and 2005 World Summit on the Information Society (WSIS) established the Internet Governance Forum (IGF) as a platform for multi-stakeholder dialogue.190 Global, national and regional IGF meetings have contributed to many important digital debates. But the IGF, in its current form, has limitations in addressing challenges that are now emerging from new digital technologies.

The need for strengthened cooperation mechanisms has been raised many times in recent years by broad initiatives – such as the NetMundial Conference,191 the Global Commission on Internet Governance192 and Web Foundation’s Contract for the Web193 – and more narrowly focused efforts such as the Broadband Commission, the Alliance for Affordable Internet, the Internet & Jurisdiction Policy Network, the Global Commission on the Stability of Cyberspace, the Charter of Trust, Smart Africa, and the International Panel on AI recently announced by Canada and France.194

In our consultations, we heard a great deal of dissatisfaction with existing digital cooperation arrangements: a desire for more tangible outcomes, more active participation by governments and the private sector, more inclusive processes and better follow-up. Overall, systems need to become more holistic, multi-disciplinary, multi-stakeholder, agile and able to convert rhetoric into practice. We have identified six main gaps:

First, despite their growing impact on society, digital technology and digital cooperation issues remain relatively low on many national, regional and global political agendas. Only recently have forums such as the G20 started regularly to address the digital economy.195 In 2018, the UN Secretary-General for the first time delivered an opening statement in person at the IGF in Paris.196

Second, digital cooperation arrangements such as technical bodies and standard-setting organisations are often not inclusive enough of small and developing countries, indigenous communities, women, young and elderly people and those with disabilities. Even if they are invited to the table, such groups may lack the capacity to participate effectively and meaningfully.197

Third, there is considerable overlap among the large number of mechanisms covering digital policy issues. As a result, the digital cooperation architecture has become highly complex but not necessarily effective. There is no simple entry point. This makes it especially hard for small enterprises, marginalised groups, developing countries and other stakeholders with limited budgets and expertise to make their voices heard.198

Fourth, digital technologies increasingly cut across areas in which policies are shaped by separate institutions. For example, one body may look at data issues from the perspective of standardisation, while another considers trade, and still another regulates to protect human rights.199 Many international organisations are trying to adjust their traditional policy work to reflect the realities of the digital transformation, but do not yet have enough expertise and experience to have well-defined roles in addressing new digital issues. At a minimum there needs to be better communication across different bodies to shape awareness. Ideally, effective cooperation should create synergies.

Fifth, there is a lack of reliable data, metrics and evidence on which to base practical policy interventions. For example, the annual cost of cybercrime to the global economy is variously estimated at anything from $600 billion200 to $6 trillion.201 Estimates of the value of the AI market in 2025 range from $60 billion202 to $17 trillion.203 The problem is most acute in developing countries, where resources to collect evidence are scarce and data collection is generally uneven. Establishing a knowledge repository on digital policy, with definitions of terms and concepts, would also increase clarity in policy discussions and support consistency of measurement of digital inclusion, as we have noted in our Recommendation 1D.

Sixth, lack of trust among governments, civil society and the private sector – and sometimes a lack of humility and understanding of different perspectives – can make it more difficult to establish the collaborative multi-stakeholder approach needed to develop effective cooperation mechanisms.

Inter-governmental work must be balanced with work involving broader stakeholders. Multi-stakeholder and multilateral approaches can and do co-exist. The challenge is to evolve ways of using each to reinforce the effectiveness of the other.

VALUES AND PRINCIPLES

As noted in the discussion of values in Chapter 1, we believe global digital cooperation should be: inclusive; respectful; human-centred; conducive to human flourishing; transparent; collaborative; accessible; sustainable and harmonious. Shared values become even more important during periods of rapid change, limited information and unpredictability, as with current discussions of cooperation relating to artificial intelligence.

It would be useful for the private sector, communities and governments to conduct digital cooperation initiatives by explicitly defining the values and principles that guide them. The aim is to align stakeholders around a common vision, maximise the beneficial impacts and minimise the risk of misuse and unintended consequences.

Alongside these shared values, we believe it is useful to highlight operational principles as a reference point for the future evolution of digital cooperation mechanisms. The principles we propose for global digital cooperation mechanisms include that they should: be easy to engage in, open and transparent; inclusive and accountable to all stakeholders; consult and debate as locally as possible; encourage innovation of both technologies and better mechanisms for cooperating; and, seek to maximise the global public interest. These are set forth in more detail in Annex VI, based on the experience of internet governance and technical coordination bodies – such as the WSIS process, UNESCO and the NetMundial conference.204

Defining values and principles is only the first step: we must operationalise them in practice in the design and development of digital technology and digital cooperation mechanisms. Where the reach of hard governance is limited or ambiguous – for example, at the stage of innovation or when the long-term impact of technologies is hard to predict – values-based cooperation approaches can play a vital role.

We should look for opportunities to operationalise values and principles at each step in the design and development of new technologies, as well as new policy practices. For example, educational institutions could encourage software developers, business executives and engineers to integrate values and principles in their work and use professional codes of conduct akin to the medical profession’s Hippocratic Oath. Businesses can integrate values into workflows, use values-based measures to assess risk and institute a suitable incentive structure for staff to follow shared values. Self-assessments and third-party audits can also help institutionalise a business culture based on shared values.

4.2 THREE POSSIBLE ARCHITECTURES FOR GLOBAL DIGITAL COOPERATION

The Panel had many discussions about possible practical next steps to improve the architecture of global digital cooperation and the merits of proposing new mechanisms or updating existing ones. Some suggested that many cooperation challenges could be best addressed by strengthening implementation capacities of current agencies and mandates.

While no single vision emerged, there was broad agreement that improved cooperation is needed, that such cooperation will need to take multiple diverse forms, and that governments, the private sector and civil society will need to find new ways to work together to steer an effective path between extremes of over-regulation and complete laissez-faire. Based on our consultations, the Panel felt that presenting options for digital cooperation architectures would best contribute to the discourse on global digital cooperation.

Annex VI sets out functions that a digital cooperation architecture could be designed to improve. These include generating political will, ensuring the active and meaningful participation of all stakeholders, monitoring developments and identifying trends, creating shared understanding and purpose, preventing and resolving disputes, building consensus and following up on agreements.

Three possible models that could address some of these functions are proposed below. The first enhances and extends the multi-stakeholder IGF. The second is a distributed architecture which builds on existing mechanisms. The third envisions a ‘commons’ approach with loose coordination by the UN. All have benefits and drawbacks. They are put forward here to provide concrete starting points for the further discussion and broad consultation which we recommend the UN Secretary-General initiate in our Recommendation 5A.

 

A note on inclusive representation

All three models highlighted below would need to take special steps to ensure that they are broadly representative and develop specific mechanisms to ensure equitable participation of developing countries, women and other traditionally marginalised groups who have often been denied a voice.

“INTERNET GOVERNANCE FORUM PLUS”205

The proposed Internet Governance Forum Plus, or IGF Plus, would build on the existing IGF, which was established by the World Summit on the Information Society (Tunis, 2005). The IGF is currently the main global space convened by the UN for addressing internet governance and digital policy issues. The IGF Plus concept would provide additional multi-stakeholder and multilateral legitimacy by being open to all stakeholders and by being institutionally anchored in the UN system.

The IGF Plus would aim to build on the IGF’s strengths, including well-developed infrastructure and procedures, acceptance in stakeholder communities, gender balance in IGF bodies and activities, and a network of 114 national, regional and youth IGFs206. It would add important capacity strengthening and other support activities.

The IGF Plus model aims to address the IGF’s current shortcomings. For example, the lack of actionable outcomes can be addressed by working on policies and norms of direct interest to stakeholder communities. The limited participation of government and business representatives, especially from small and developing countries, can be addressed by introducing discussion tracks in which governments, the private sector and civil society address their specific concerns.

The IGF Plus would comprise an Advisory Group, a Cooperation Accelerator, a Policy Incubator, and an Observatory and Help Desk.

The Advisory Group, based on the IGF’s current Multi-stakeholder Advisory Group, would be responsible for preparing annual meetings, and identifying focus policy issues each year. This would not exclude coverage of other issues but ensure a critical mass of discussion on the selected issues. The Advisory Group could identify moments when emerging discussions in other forums need to be connected, and issues that are not covered by existing organisations or mechanisms.

Building on the current practices of the IGF, the Advisory Group could consist of members appointed for three years by the UN Secretary-General on the advice of member states and stakeholder groups, ensuring gender, age, stakeholder and geographical balance.

The Cooperation Accelerator would accelerate issue-centred cooperation across a wide range of institutions, organisations and processes; identify points of convergence among existing IGF coalitions, and issues around which new coalitions need to be established; convene stakeholder-specific coalitions to address the concerns of groups such as governments, businesses, civil society, parliamentarians, elderly people, young people, philanthropy, the media, and women; and facilitate convergences among debates in major digital and policy events at the UN and beyond.

The Cooperation Accelerator could consist of members selected for their multi-disciplinary experience and expertise. Membership would include civil society, businesses and governments and representation from major digital events such as the Web Summit, Mobile World Congress, Lift:Lab, Shift, LaWeb, and Telecom World.

The Policy Incubator would incubate policies and norms for public discussion and adoption. In response to requests to look at a perceived regulatory gap, it would examine if existing norms and regulations could fill the gap and, if not, form a policy group consisting of interested stakeholders to make proposals to governments and other decision-making bodies. It would monitor policies and norms through feedback from the bodies that adopt and implement them.207

The Policy Incubator could provide the currently missing link between dialogue platforms identifying regulatory gaps and existing decision-making bodies by maintaining momentum in discussions without making legally binding decisions. It should have a flexible and dynamic composition involving all stakeholders concerned by a specific policy issue.

The Observatory and Help Desk would direct requests for help on digital policy (such as dealing with crisis situations, drafting legislation, or advising on policy) to appropriate entities, including the Help Desks described in Recommendation 2; coordinate capacity development activities provided by other organisations; collect and share best practices; and provide an overview of digital policy issues, including monitoring trends, identifying emerging issues and providing data on digital policy.

The IGF Trust Fund would be a dedicated fund for the IGF Plus. All stakeholders – including governments, international organisations, businesses and the tech sector – would be encouraged to contribute. The IGF Plus Secretariat should be linked to the Office of the United Nations Secretary-General to reflect its interdisciplinary and system-wide approach.

“DISTRIBUTED CO-GOVERNANCE ARCHITECTURE”

The proposed distributed co-governance architecture (COGOV) would build on existing mechanisms while filling gaps with new mechanisms to achieve a distributed, yet cohesive digital cooperation architecture covering all stages from norm design to implementation and potential enforcement of such norms by the appropriate authorities.

COGOV relies on the self-forming ‘horizontal’ network approach used by the Internet Engineering Task Force, the Internet Corporation for Assigned Names and Numbers (ICANN), the World Wide Web Consortium, the Regional Internet Registries, the IEEE and others to host networks to design norms and policies. This proposal would extend this agile network approach to issues affecting the broader digital economy and society.

Given the wide range of issues which the COGOV architecture could encompass, it will be imperative to ensure there is broad representation beyond the relatively homogenous expert communities which predominate for some of the technical issues discussed above.

The COGOV architecture decouples the design of digital norms from their implementation and enforcement. It seeks to rapidly produce shared digital cooperation solutions, including norms, and publish them for stakeholders to consider and potentially adopt. These norms would be voluntary solutions rather than legal instruments. In themselves, the COGOV networks would not have governing authority or enforcement powers. However, the norms could be taken up by government agencies as useful blueprints to establish policies, regulations or laws.

The COGOV architecture could consist of three functional elements: a) Digital Cooperation Networks; b) Network Support Platforms; and c) a Network of Networks.

a) Digital Cooperation Networks. These networks would be issue-specific horizontal collaboration groups, involving stakeholders from relevant vertical sectors and institutions. They could be formed freely by stakeholders in a bottom-up way, self-governed, and share the same goal of cooperation – including potentially the design of digital norms. They could be created or supported by one or more governments and/ or intergovernmental organisations with the same concerns. Their functions would include developing shared understandings and goals for a specific digital issue, strengthening cooperation, designing or updating digital norms, providing norm implementation roadmaps and developing capacity to adopt policies and norms.
Participation in digital cooperation networks should be open for all relevant and concerned stakeholders, including governments, intergovernmental institutions, the private sector, civil society, academia and the technical community. Special efforts would need to be made to include and support representatives from developing countries and traditionally marginalised groups. The digital cooperation networks may be stand-alone voluntary networks or hosted by the network support platforms described below. 

b) Network Support Platforms. These platforms could host and enable the dynamic formation and functioning of multiple digital cooperation networks. While the digital cooperation networks would operate in defined and limited timeframes, the network support platforms are proposed as stable long-term elements of the architecture, supporting the digital cooperation networks and enabling them to evolve as necessary to update their cooperation and relevant digital norms.

The network support platforms should not interfere in the work product or composition of the self-governed and stakeholder-initiated digital cooperation networks; they should simply support the networks to operate efficiently. The platforms would help the networks to identify emerging issues, secure the commitment of relevant participants, provide necessary resources and facilities, and promote their outcomes.

c) Network of Networks. The network of networks would loosely coordinate and support activities across all digital cooperation networks and network support platforms. The role of the network of networks is to ensure integrity and enable coherent outcomes that account for the complex inter-dependencies across digital policy issues.

The network of networks would consist of: 1) a support function, which would organise an annual forum, a ‘research cooperative’ and a ‘norm exchange’; and 2) a voluntary peer coordination network, which would bring issues to the attention of the annual forum and follow up on its recommendations by promoting action from specific stakeholders to form digital cooperation networks.

The network of networks should avoid a controlling top-down form of administration: it is simply there to loosely coordinate the activities across the decentralized COGOV architecture; its decisions would not be binding.

Once norms are available, governing authorities may choose to establish enforcement mechanisms and may choose to use these norms as policy input or blueprints. The following table summarises the mechanisms across the norm design, implementation, and enforcement stages:

Norm Design
• Identify digital governance issues
• Form digital cooperation networks
• Support networks through digital cooperation platforms
Norm Implementation
• Develop norm design and adoption capacity
• Provide a ‘norm exchange’ to connect communities
• Offer implementation incentives
Norm Enforcement
• Develop norms into laws/regulations
• Adjudicate/resolve disputes and conflicts
• Establish clear guard rails for digital technologies

“DIGITAL COMMONS ARCHITECTURE”

In areas such as space, climate change and the sea, the international community has entered into treaties and developed principles, norms and functional cooperation to designate certain spaces as international ‘commons’ and then govern ongoing practice and dialogue.208 For instance, the “common heritage” principle, introduced by the United Nations Convention on the Law of the Sea, imposes a duty to protect resources for the good of future generations.209

While norm-making and guidance in digital technologies will pose different challenges, some aspects of the digital realm, such as common internet protocols, already share characteristics with ‘commons’ requiring responsible and global stewardship. ‘Digital commons’ have also been mentioned recently in the context of data and AI developments.210

The proposed “Digital Commons Architecture” would aim to synergise efforts by governments, civil society and businesses to ensure that digital technologies promote the SDGs and to address risks of social harm. It would comprise multi-stakeholder tracks to create dialogue around emerging issues and communicate use cases and problems to be solved to stakeholders, and an annual meeting to act as a clearing house.

Each track could be owned by a lead organisation – a UN agency, an industry or academic consortium or a multi-stakeholder forum, with the choice of participants governed by guiding principles of the kind listed in this report to ensure inclusiveness and broad representation. Light coordination of the tracks, and servicing of the annual meeting where their reports are considered, could be ensured by a small secretariat housed within the UN.

Analogous to processes such as the International Competition Network, the Digital Commons Architecture tracks would have flexible, project-oriented and results-based working groups. They would enable learning on governance and related capacity development to be driven by practice. Annual meetings could aggregate lessons for use in soft law or more binding approaches in the appropriate forums. This could rapidly build a repository of norms and governance practices to guide stakeholders in their respective roles and responsibilities.

The Digital Commons Architecture tracks could focus on issues agreed by the participants. For example, they might initially wish to address issues emerging from the preceding chapters, such as using data in support of the SDGs, using AI to improve agriculture and health, or developing a global values/ethics certification process for new technology.

Multi-stakeholder collaboration around these issues could pave the way for wider cooperation. For example, realising the potential of AI to provide insights to a global health challenge might require the pooling of reliable data, clear privacy measures, a common data architecture and interoperable standards. Successful outcomes could then be progressively extended to other areas. An additional benefit would be to promote transparency and build confidence.

The annual meeting would not make rules, but provide guidance to stakeholders, which they can use in the appropriate forums. The meeting would discuss the output of the various tracks as well as implementation of the governance guidance produced by these tracks through a ‘soft’ review of reports by stakeholders.

The Digital Commons Architecture might not specify technical solutions, but instead propose technical models, and standards of accountability and trustworthiness, which could be applied across the globe. It could also facilitate a discussion of lessons from around the globe on implementation of existing norms in specific areas.

The annual meeting could build on and connect discussions taking place in other fora and could in turn feed its results into discussions taking place in other fora. This would reduce the current burden of multiplicity of forums by clarifying who is doing what, eliminating potential overlap, and identifying partnership opportunities.

The Digital Commons Architecture could be funded through voluntary contributions. Along the lines of the International Chamber of Commerce, membership fees could be considered for private sector participation; these could be waived for certain categories such as small businesses or civil society participants.211 A dedicated trust fund could assist with civil society and least developed country participation.


The three potential models share common elements, such as multi-stakeholder participation, dedicated trust funds to enhance inclusivity, reducing policy inflation by consolidating discussions across fora, and a light coordination and convening role for the UN. The values in Chapter 1 and the principles and functions in Annex VI provide shared design elements that further emphasise inclusivity and multi-stakeholder participation.

Equally, there are differences in emphasis and approach. The COGOV, for example, foresees a larger role for new networks of experts and multi-stakeholder governance; the Digital Commons Architecture presumes more of a focus on iterative learning of governance through practice in both multilateral and multi-stakeholder tracks; and the IGF Plus adds functionalities to an existing multi-stakeholder forum with a UN mandate.

The common design elements across the models could be flexibly brought together once the broad thrust of a new digital cooperation architecture has been defined. As suggested in Recommendation 5A, a common starting point could be a Global Commitment for Digital Cooperation based on shared values and principles. 

4.3 THE ROLE OF THE UN

The UN’s three foundational pillars – peace and security, human rights and development – position it well to help spotlight issues emerging in the digital age and advocate on behalf of humanity’s best interests. In our consultations, we heard that despite its well-known weaknesses, the UN retains a unique role and convening power to bring stakeholders together to create the norms and frameworks and assist in developing the capacity we need to ensure a safe and equitable digital future for all people.

Digital technologies are increasingly impacting the work of the UN in three ways: changing the political, social and economic environment in the ways this report has discussed; providing new tools for its core mandates; and creating new policy issues.

UN entities have begun to embrace the digital transformation and are revamping programmes and launching initiatives to apply digital technology to further their missions. Some UN agencies – such as UNICEF, UNESCO, the World Food Programme (WFP) and the United Nations Development Programme (UNDP) – have made a priority of exploring how the digital transformation can provide them with new approaches to achieve their mandates. The Task Force on Digital Financing of the SDGs, for example, will explore how digital technologies can be leveraged to finance the SDGs.212

Digital issues often do not fit neatly within the traditional mandates of UN agencies, and some agencies have sought to expand their mandates, causing overlaps and friction. This duplication also causes confusion for external partners and stakeholders, who find it difficult to discern among the many fora, events and initiatives hosted by various parts of the UN on science, technology and innovation issues and policy setting. Some UN entities have responded to converging mandates by launching cross-cutting initiatives. For example, in 2010 the ITU and UNESCO established the Broadband Commission for Sustainable Development; in 2016 the ITU, UN Women, the International Trade Centre (ITC), the GSM Association (GSMA), UNESCO and the United Nations University set up the EQUALS partnership to tackle the digital gender gap.

UN entities have also tended to approach digital issues in their own way, often without sharing information, at times duplicating each other’s work, and without reflecting on whether the systems they are building might scale to other UN entities. UN agencies can do much more to pool their human and computing capacities and develop shared tools and common standards – for example, through joint procurement of cloud computing to reduce prices and increase interoperability, and by promoting open and interoperable standards for data produced and used by the UN.

The UN has begun to engage the private sector and tech community much more directly. For example, Tech Against Terrorism, a public/private partnership launched in April 2017 by the Counter-Terrorism Committee Executive Directorate, aims to support the technology industry to develop more effective and responsible approaches to tackling terrorists’ use of the internet, while respecting human rights. However, working with stakeholders such as the private sector and civil society is still not part of the DNA of many UN agencies. More can be done to partner with other stakeholders effectively and consistently.
 

How can the UN add value in the digital transformation?

As a convener – The AI for Global Good Summit, the Broadband Commission for Sustainable Development, ITU’s Global Symposium for Regulators, the WSIS Forum, the Multi-stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals (STI Forum).

Providing a space for debating values and norms – the IGF, the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, Special Rapporteurs on the Right to Privacy and on the promotion and protection of the Right to Freedom of Opinion and Expression, UNESCO’s Artificial Intelligence with Human Values for Sustainable Development initiative, UNICEF’s efforts around children’s online safety.

Standard setting – ITU’s Telecommunication Standardization Sector, the UN Statistical Commission and its Global Working Group on Big Data for Official Statistics, WHO guidelines on digital health interventions, the Humanitarian Data Exchange – an open platform and standard for sharing data across crises and organisations.

Multi-stakeholder or bilateral initiatives on specific issues – EQUALS: The Global Partnership for Gender Equality in the Digital Age, the Emergency Telecommunications Cluster hosted by WFP, the UN Global Compact’s Breakthrough Innovation for the SDGs Action Platform, the Famine Action Mechanism hosted by the World Bank and the UN in partnership with industry.

Developing the capacity of member states – UNDP’s Accelerator Labs, the Technology Facilitation Mechanism, UN Global Pulse Labs, the United Nations Conference on Trade and Development’s trainings, the Digital Blue Helmets initiative, the UN Office on Drugs and Crime’s Global Programme on Cybercrime.

Ranking, mapping and measuring – the annual E-Government Survey produced by the United Nations Department of Economic and Social Affairs, the United Nations Institute for Disarmament Research’s Cyber Policy Portal, an online reference tool that maps the cybersecurity and cybersecurity-related policy landscape, ITU’s Measuring the Information Society report and Global Cybersecurity Index.

Arbitration and dispute-resolution – The World Intellectual Property Organization’s Internet Domain Name Process, the United Nations Commission on International Trade Law.
 

 

Created by the innovation units of several UN agencies in 2015, the UN Innovation Network is working on sharing best practices and recommending harmonisation of policies which may help reduce fragmentation across the UN system. The UN’s highest-level coordination body, the Chief Executives Board for Coordination, is trying to encourage more system-wide coordination through initiatives such as the UN Data Innovation Lab and UN data privacy principles. The High-level Committee on Programmes could also have a role to enable more knowledge sharing, efficiencies of scale and scaling up of successful practices and initiatives across the UN system.

The development of the UN Secretary-General’s Strategy on New Technologies, issued in September 2018, has helped identify points of overlap and convergence, and UN agencies meet regularly to track progress. The strategy notes that the Secretary-General may consider appointing a “Tech Envoy” following the work of this Panel.

The UN can play a key role in enhancing digital cooperation by developing greater organisational and human capacity on digital governance issues and improving its ability to respond to member states’ need for policy advice and capacity development. 
 


 

5. Recommendations

The preceding chapters of this report have shown that our ‎rapidly changing and interdependent digital world urgently ‎needs improved digital cooperation founded on common ‎human values. Based on our analysis and consultations with ‎diverse stakeholders, and noting that not all Panel members ‎were supportive of all recommendations, we make the following ‎recommendations: ‎

AN INCLUSIVE DIGITAL ECONOMY ‎AND SOCIETY

‎1A: We recommend that by 2030, every adult should have ‎affordable access to digital networks, as well as digitally-‎enabled financial and health services, as a means to make a ‎substantial contribution to achieving the SDGs. Provision of ‎these services should guard against abuse by building on ‎emerging principles and best practices, one example of ‎which is providing the ability to opt in and opt out, and by ‎encouraging informed public discourse. ‎

1B: We recommend that a broad, multi-stakeholder alliance, ‎involving the UN, create a platform for sharing digital public ‎goods, engaging talent and pooling data sets, in a manner ‎that respects privacy, in areas related to attaining the SDGs. ‎

1C: We call on the private sector, civil society, national ‎governments, multilateral banks and the UN to adopt specific ‎policies to support full digital inclusion and digital equality for ‎women and traditionally marginalised groups. International ‎organisations such as the World Bank and the UN should ‎strengthen research and promote action on barriers women ‎and marginalised groups face to digital inclusion and digital ‎equality. ‎

1D: We believe that a set of metrics for digital inclusiveness ‎should be urgently agreed, measured worldwide and detailed ‎with sex disaggregated data in the annual reports of ‎institutions such as the UN, the International Monetary Fund, ‎the World Bank, other multilateral development banks and the ‎OECD. From this, strategies and plans of action could be ‎developed. ‎

In this report we have emphasised that the role of digital technologies in achieving the Sustainable Development Goals goes far beyond simply promoting greater access to the internet. With the right blend of policy, investment in infrastructure and human capacity, and cooperation among stakeholders, they can revolutionise fields as diverse as health and education, governance, economic empowerment and enterprise, agriculture and environmental sustainability.

The specific decisions needed to promote inclusivity and ‎minimise risks will depend on local and national conditions. ‎They should consider four main factors. ‎

First, the broader national policy and regulatory frameworks ‎should make it easy to create, run and grow small businesses. ‎These frameworks should ensure that digital service providers – ‎including e-commerce and inclusive finance platforms – support ‎the growth of local enterprises. This requires enabling policies ‎on investment and innovation, and structural policies to ensure ‎fair competition, privacy rights, consumer protection and a ‎sustainable tax base. Efforts to agree regional or global ‎standards in these areas are welcome. ‎

Second, investments should be made in both human capacity ‎‎(see Recommendation 2 below) and physical infrastructure. ‎Creating the foundation of universal, affordable access to ‎electricity and the internet will often require innovative ‎approaches, such as community groups operating rural ‎networks, or incentives such as public sector support. ‎

Third, targeted measures should address the barriers faced by ‎women, indigenous people, rural populations and others who ‎are marginalised by factors such as a lack of legal identity, low ‎literacy rates, social norms that prevent them from fully ‎participating in civic and economic life, and discriminatory land ‎ownership, tenure and inheritance practices. ‎

Fourth, respect for human rights – including privacy – is ‎fundamental. Panel members had divergent views on digital ID ‎systems in particular: they have immense potential to improve ‎delivery of social services, especially for people who currently ‎lack legal identity, but they are also vulnerable to abuse. As ‎digital ID becomes more prevalent, we must emphasise ‎principles for its fair and effective use. ‎

Achieving this ambition will require multi-stakeholder alliances ‎involving governments, private sector, international ‎organisations, citizen groups and philanthropy to build new ‎models of collaboration around “digital public goods” and data ‎sets that can be pooled for the common good. SDG-related ‎areas include health, energy, agriculture, clean water, oceans ‎and climate change. These alliances could establish minimum ‎criteria for classifying technologies and content as “digital public ‎goods” and connect with relevant communities of practice that ‎can provide guidance and support for investment, ‎implementation and capacity development. ‎

We are concerned that women face particular challenges in ‎meaningfully accessing the internet, inclusive mobile financial ‎services and online commerce, and controlling their own digital ‎IDs and health records. Policies should include targeted capacity ‎development for female entrepreneurs and policy makers. We ‎call on the technology sector to make more sustained and ‎serious efforts to address the gap in female technology ‎employees and management, include women’s voices when ‎determining online terms and conditions, and act to prevent ‎online harassment and promotion of domestic abuse, building ‎upon the work of existing initiatives such as the High-level Panel ‎on Women’s Economic Empowerment. ‎
 
While some preliminary work is underway, there is currently no ‎agreed set of clear metrics or standards for the inclusiveness of ‎digital technologies and cooperation. While any metrics will ‎evolve over time, we call for research and multi-stakeholder ‎consultation to establish a basis of shared global understanding ‎as promptly as possible. We encourage the UN, international ‎development agencies and multilateral banks such as the Asian ‎Development Bank, the New Development Bank and the World ‎Bank to drive this process by incorporating digital inclusion as a ‎key metric in approving and evaluating projects. Facets of digital ‎inclusion which may be considered include gender, financial ‎services, health, government services, national digital economy ‎policies, use of online e-commerce platforms and mobile ‎device penetration. ‎

HUMAN AND INSTITUTIONAL ‎CAPACITY ‎

2: We recommend the establishment of regional and global ‎digital help desks to help governments, civil society and the ‎private sector to understand digital issues and develop ‎capacity to steer cooperation related to social and economic ‎impacts of digital technologies. ‎

Many countries urgently need to make critical choices about the ‎complex issues discussed in this report. In what types of ‎infrastructure should they invest? What types of training do their ‎populations require to compete in the global digital economy? ‎How can those whose livelihoods are disrupted by technological ‎change be protected? How can technology be used to deliver ‎social services and improve governance? How can regulation ‎be appropriately balanced to encourage innovation while ‎protecting human rights? ‎

Policy decisions will have profound impact, but many of the ‎decision-makers lack sufficient understanding of digital ‎technologies and their implications. Capacity development for ‎government officials and regulators could help to harness ‎technology for inclusive economic development to achieve the ‎SDGs. Priorities could include diagnostics on digital capacities ‎and how they interact with society and the economy, and ‎identifying skills workers will need. Capacity development ‎initiatives with the private sector would also develop the capacity ‎of officials and regulators to engage with the private sector so ‎they can understand the operations of the digital economy and ‎respond in an agile way to emerging issues (see ‎Recommendation 5B). ‎

For decisions to be well informed and inclusive, all stakeholders ‎and the public need also to better understand the benefits and ‎risks of digital technologies. Decisions around technology ‎should be underpinned by a broad social dialogue on its costs, ‎benefits and norms. We encourage capacity development ‎programs for governments, civil society organisations, the ‎private sector – including small- and medium-sized enterprises ‎and start-ups – consumers, educators, women and youth. ‎Existing capacity development initiatives by civil society, ‎academia and technical and international organisations could ‎benefit from the promotion of best practices. ‎

A regional approach is recommended to develop capacity, to ‎enable differing local contexts to be addressed. Regional help ‎desks could be led by organisations such as the African Union ‎or the Association of Southeast Asian Nations, in collaboration ‎with UN Regional Commissions. The regional help desks would: ‎conduct research and promote best practice in digital ‎cooperation; provide capacity development training and ‎recommend open-source or licensed products and platforms; ‎and support requests for advice from governments, local private ‎sector (particularly small and medium enterprises) and civil ‎society in their regions. Staff would have regional expertise, and ‎coordinate closely with the private sector and civil society. ‎

A global help desk to coordinate the work of regional help desks ‎could form part of the new digital cooperation architecture we ‎recommend exploring in Recommendation 5A. ‎

HUMAN RIGHTS AND HUMAN ‎AGENCY

3A: Given that human rights apply fully in the digital world, we ‎urge the UN Secretary-General to institute an agencies-wide ‎review of how existing international human rights accords ‎and standards apply to new and emerging digital ‎technologies. Civil society, governments, the private sector ‎and the public should be invited to submit their views on how ‎to apply existing human rights instruments in the digital age in ‎a proactive and transparent process. ‎

3B: In the face of growing threats to human rights and safety, ‎including those of children, we call on social media ‎enterprises to work with governments, international and local ‎civil society organisations and human rights experts around ‎the world to fully understand and respond to concerns about ‎existing or potential human rights violations. ‎

3C: We believe that autonomous intelligent systems should ‎be designed in ways that enable their decisions to be ‎explained and humans to be accountable for their use. Audits ‎and certification schemes should monitor compliance of AI ‎systems with engineering and ethical standards, which ‎should be developed using multi-stakeholder and multilateral ‎approaches. Life and death decisions should not be ‎delegated to machines. We call for enhanced digital ‎cooperation with multiple stakeholders to think through the ‎design and application of these standards and principles ‎such as transparency and non-bias in autonomous intelligent ‎systems in different social settings. ‎

As discussed in Chapter 3, while human rights apply online as ‎well as offline, technology presents challenges that were not ‎foreseen when many foundational human rights accords were ‎created. National laws and regulations must prevent advances ‎in technology being used to erode human rights or avoid ‎accountability. We need to cooperate to ensure that digital ‎technologies advance the inherent dignity and equal and ‎inalienable rights of every human. ‎

Applying human rights in the digital age requires better ‎coordination and communication between governments, ‎technology companies, civil society and other stakeholders. ‎Companies have often reacted slowly and inadequately to ‎learning that their technologies are being deployed in ways that ‎undermine human rights. We need more forward-looking efforts ‎to identify and mitigate risks in advance: companies should ‎consult with governments, civil society and academia to assess ‎the potential human rights impact of the digital technologies ‎they are developing. From risk assessment to ongoing due ‎diligence and responsiveness to sudden events, it should be ‎clarified what society can reasonably expect from each ‎stakeholder, including technology firms. ‎

In some areas there is consensus that much more needs to be ‎done – notably, companies providing social media services ‎need to do more to prevent the dissemination of hatred and ‎incitement of violence, and companies providing online services ‎and apps used by children need to do more to ensure ‎appropriate design and meaningful data consent. ‎

Consensus is also emerging that more needs to be done to ‎safeguard the human right to privacy: individuals often have ‎little or no meaningful ‎
understanding of the implications of providing their personal ‎data in return for digital services. We believe companies, ‎governments and civil society should agree to clear and ‎transparent standards that will enable greater interoperability of ‎data in ways that protect privacy while enabling data to flow for ‎commercial, research and government purposes, and ‎supporting innovation to achieve the SDGs. Such standards ‎should prevent data collection going beyond intended use, limit ‎re-identification of individuals via datasets, and give individuals ‎meaningful control over how their personal data is shared. ‎

We also emphasise our belief that autonomous intelligent ‎systems should be designed in ways that enable their decisions ‎to be explained and humans to be held to account for their use. ‎Audits and certification schemes should monitor compliance of ‎AI systems with engineering and ethical standards. Humans ‎should never delegate life and death decisions to machines. ‎

TRUST, SECURITY AND STABILITY

4. We recommend the development of a Global Commitment ‎on Digital Trust and Security to shape a shared vision, identify ‎attributes of digital stability, elucidate and strengthen the ‎implementation of norms for responsible uses of technology, ‎and propose priorities for action. ‎

As the digital economy increasingly merges with the physical ‎world and deploys autonomous intelligent systems, it depends ‎ever more on trust and the stability of the digital environment. ‎Trust is built through agreed standards, shared values and best ‎practices. Stability implies a digital environment that is peaceful, ‎secure, open and cooperative. More effective action is needed ‎to prevent trust and stability being eroded by the proliferation of ‎irresponsible use of cyber capabilities. ‎

The Global Commitment on Digital Trust and Security could ‎build on and create momentum behind the voluntary norms ‎agreed in the report of the 2015 GGE, and complement relevant ‎global processes. It could address areas such as ways to ‎strengthen implementation of agreed norms; developing ‎societal capacity for cybersecurity and resilience against ‎misinformation; encouraging companies to strengthen ‎authentication practices, adhere to stricter software ‎development norms and be more transparent in the use of ‎software and components; and improving the digital hygiene of ‎new users coming online. ‎

GLOBAL DIGITAL COOPERATION ‎

5A: We recommend that, as a matter of urgency, the UN Secretary-General facilitate an agile and open consultation process to develop updated mechanisms for global digital cooperation, with the options discussed in Chapter 4 as a starting point. We suggest an initial goal of marking the UN's 75th anniversary in 2020 with a “Global Commitment for Digital Cooperation” to enshrine shared values, principles, understandings and objectives for an improved global digital cooperation architecture. As part of this process, we understand that the UN Secretary-General may appoint a Technology Envoy.

‎5B: We support a multi-stakeholder “systems” approach for ‎cooperation and regulation that is adaptive, agile, inclusive ‎and fit for purpose for the fast-changing digital age. ‎

Enhancing digital cooperation will require both reinvigorating existing multilateral partnerships and potentially creating new mechanisms that involve stakeholders from business, academia, civil society and technical organisations. We should approach questions of governance based on their specific circumstances, choosing among all available tools.

Where possible we can make existing inter-governmental ‎forums and mechanisms fit for the digital age rather than rush to ‎create new mechanisms, though this may involve difficult ‎judgement calls: for example, while the WTO remains a major ‎forum to address issues raised by the rapid growth in cross-‎border e-commerce, it is now over two decades since it was last ‎able to broker an agreement on the subject. ‎

Given the speed of change, soft governance mechanisms – ‎values and principles, standards and certification processes – ‎should not wait for agreement on binding solutions. Soft ‎governance mechanisms are also best suited to the multi-‎stakeholder approach demanded by the digital age: a fact-‎based, participative process of deliberation and design, ‎including governments, private sector, civil society, diverse ‎users and policy-makers. ‎

The aim of the holistic “systems” approach we recommend is to bring together government bodies such as competition authorities and consumer protection agencies with the private sector, citizens and civil society to enable them to be more agile in responding to issues and evaluating trade-offs as they emerge. Any new governance approaches in digital cooperation should also, wherever possible, look for ways – such as pilot zones, regulatory sandboxes or trial periods – to test efficacy and develop necessary procedures and technology before being more widely applied.213

We envisage that the process of developing a “Global ‎Commitment for Digital Cooperation” would be inspired by the ‎‎“World We Want” process, which helped formulate the SDGs. ‎Participants would include governments, the private sector from ‎technology and other industries, SMEs and entrepreneurs, civil ‎society, international organisations including standards and ‎professional organisations, academic scholars and other ‎experts, and government representatives from varied ‎departments at regional, national, municipal and community ‎levels. Multi-stakeholder consultation in each member state and ‎region would allow ideas to bubble up from the bottom. ‎

The consultations on an updated global digital cooperation ‎architecture could define upfront the criteria to be met by the ‎governance mechanisms to be proposed, such as funding ‎models, modes of operation and means for serving the ‎functions explored in this report. ‎

More broadly, if appointed, a UN Tech Envoy could identify ‎over-the-horizon concerns that need improved cooperation or ‎governance; provide light-touch coordination of multi-‎stakeholder actors to address shared concerns; reinforce ‎principles and norms developed in forums with relevant ‎mandates; and work with UN member states, civil society and ‎businesses to support compliance with agreed norms. ‎

The Envoy’s mandate could also include coordinating the digital technology-related efforts of UN entities; improving communication and collaboration among technology experts within the UN; and advising the UN Secretary-General on new technology issues. Finally, the Envoy could promote partnerships to build and maintain international digital common resources that could be used to help achieve the SDGs.
 


We believe in a future which is inclusive and empowering; a ‎future in which digital technologies are used to reduce ‎inequalities, bring people together, enhance international peace ‎and security and promote economic opportunity and ‎environmental sustainability. ‎
 
Our recommendations toward that future will require sustained commitment to fundamental human values. They will require ‎leadership and political will, clarity about roles and responsibilities, shared meanings to ease communication, inclusive partnerships ‎with capacity development, aligned incentives, greater coherence of currently fragmented efforts, and building a climate of trust. ‎

We hope this report has shown why individuals, civil society, the private sector and governments urgently need to strengthen ‎cooperation to build that better future.‎
 


 

ANNEXES

I. TERMS OF REFERENCE OF THE PANEL

  1. The High-Level Panel on Digital Cooperation convened by ‎the UN Secretary-General will advance proposals to strengthen ‎cooperation in the digital space among Governments, the ‎private sector, civil society, international organisations, the ‎technical and academic communities and all other relevant ‎stakeholders. The Panel’s report and its recommendations will ‎provide a high-level independent contribution to the broader ‎public debate on digital cooperation frameworks and support ‎Member States in their consultations on these issues. ‎
     
  2. The Panel will consist of 20 eminent leaders from ‎Governments, private sector, academia, the technical ‎community, and civil society led by two co-chairs. Its ‎composition will be balanced in terms of gender, age, ‎geographic representation, and area of expertise. The Panel ‎members will serve in their personal capacity. ‎
     
  3. The Panel shall meet in person at least once. Additional ‎interactions shall be organised for the Panel as a whole by ‎electronic means or through ad hoc group consultations. The ‎Panel will engage and consult widely with governments, private ‎sector, academia, technical community, civil society, and inter-‎governmental organisations across the world. It shall be agile ‎and innovative in interacting with existing processes and ‎platforms as well as in harnessing inputs from diverse ‎stakeholders. ‎
     
  4. In its report to the Secretary-General, the Panel shall identify ‎good practices and opportunities, gaps and challenges in digital ‎cooperation. It shall also outline major trends in the ‎development and deployment of emerging digital technologies, ‎business models, and policies and the possibilities and ‎challenges they generate for digital cooperation.
  5. In particular, the report shall: ‎
    - Raise awareness among policy makers and the general public about the transformative impact of digital technologies across society and the economy;
    - Suggest ways to bridge disciplines on digital cooperation by identifying policy, research and information gaps as well as ways to improve interdisciplinary thinking and cross-domain action on digital technologies;
    - Present recommendations for effective, inclusive, accountable systems of digital cooperation among all relevant actors in the digital space.
     
  6. ‎The recommendations in the report shall seek to maximise ‎the potential of digital technologies to contribute inter alia to the ‎achievement of the 2030 Agenda for Sustainable Development ‎and to support progress across a range of themes, including ‎digital empowerment, inclusive finance, employment, ‎entrepreneurship, trade and cross border data flows. ‎
     
  7. They shall also contribute to raising individual and systemic capacities to maximise the benefits of emerging digital technologies; to facilitating the participation of all stakeholder groups, especially youth and women, in the digital sphere; and to enhancing implementation of existing digital policies as well as norms.
     
  8. The Panel shall avoid duplication with existing forums for ‎digital cooperation. It shall fully respect current UN structures as ‎well as national, technical community and industry prerogatives ‎in the development and governance of digital technologies.
  9. The Panel will complete its deliberations and submit its final ‎report, including actionable recommendations, within a nine-‎month period. ‎
     
  10. The deliberations of the Panel will be supported by a small ‎secretariat and funded by donor resources. The Secretariat shall ‎seek to leverage existing platforms and partners, including UN ‎agencies, working in the related domains. ‎

II. PANEL MEMBERS ‎

Co-Chairs ‎
‎• Melinda Gates (USA), Co-Chair of the Bill & Melinda Gates Foundation ‎
‎• Jack Ma (China), Executive Chairman, Alibaba Group ‎

Members
• Mohammed Abdullah Al Gergawi (UAE), Minister of Cabinet Affairs and the Future, UAE
• Yuichiro Anzai (Japan), Senior Advisor and Director of Center for Science Information Analysis, Japan Society for the Promotion of Science
• Nikolai Astrup (Norway), Former Minister of International Development, now Minister of Digitalisation, Norway
• Vinton Cerf (USA), Vice President and Chief Internet Evangelist, Google
• Fadi Chehadé (USA), Chairman, Chehadé & Company
• Sophie Soowon Eom (Republic of Korea), Founder of Adriel AI and Solidware
• Isabel Guerrero Pulgar (Chile), Executive Director, IMAGO Global Grassroots and Lecturer, Harvard Kennedy School
• Marina Kaljurand (Estonia), Chair of the Global Commission on the Stability of Cyberspace
• Bogolo Kenewendo (Botswana), Minister of Investment, Trade and Industry, Botswana
• Marina Kolesnik (Russian Federation), senior executive, entrepreneur and WEF Young Global Leader
• Doris Leuthard (Switzerland), former President and Federal Councillor of the Swiss Confederation, Switzerland
• Cathy Mulligan (United Kingdom), Visiting Researcher, Imperial College London and Chief Technology Officer of GovTech Labs at University College London
• Akaliza Keza Ntwari (Rwanda), ICT advocate and entrepreneur
• Edson Prestes (Brazil), Professor, Institute of Informatics, Federal University of Rio Grande do Sul
• Kira Radinsky (Israel), Director of Data Science, eBay
• Nanjira Sambuli (Kenya), Senior Policy Manager, World Wide Web Foundation
• Dhananjayan Sriskandarajah (Australia), Chief Executive, Oxfam GB
• Jean Tirole (France), Chairman of the Toulouse School of Economics and the Institute for Advanced Study in Toulouse

Ex officio
• Amandeep Singh Gill (India), Executive Director, Secretariat of the High-level Panel on Digital Cooperation
• Jovan Kurbalija (Serbia), Executive Director, Secretariat of the High-level Panel on Digital Cooperation

III. PANEL SECRETARIAT AND SUPPORT TEAMS ‎

Panel Secretariat ‎
‎• Isabel de Sola, Senior Adviser, Engagement ‎
‎• Amandeep Singh Gill, Executive Director ‎
‎• Jovan Kurbalija, Executive Director ‎
‎• Ananita Maitra, Project Officer, Policy and Engagement ‎
• Chengetai Masango, Senior Adviser (on loan from the IGF Secretariat, July-October 2018)
‎• Lisa McMonagle, Intern ‎
‎• Madeline McSherry, Project Officer, Engagement ‎
‎• Claire Messina, Deputy Executive Director ‎
‎• AJung Moon, Senior Adviser, Research & Industry ‎
‎• Athira Murali, Intern ‎
‎• Anoush Rima Tatevossian, Senior Communications Officer ‎
‎• Talea von Lupin, Intern ‎
‎• Andrew Wright, Writer ‎

Sherpas and Support Teams ‎
‎• Co-Chair Melinda Gates: Gargee Ghosh, John Norris ‎
‎• Co-Chair Jack Ma: James Song, Jason Pau, Sami Farhad, Yuan Ren

IV. DONORS ‎

The Panel gratefully acknowledges the financial and in-kind contributions of the following governments and partners, without whom it ‎would not have been able to carry out its responsibilities: ‎
Robert Bosch Stiftung ‎
Government of the People’s Republic of China ‎
Government of Denmark ‎
Government of Finland ‎
Ford Foundation ‎
Global Challenges Foundation ‎
IGF Secretariat ‎
Government of Israel ‎
Government of Norway ‎
Government of Qatar ‎
Government of Switzerland ‎
Government of the United Arab Emirates ‎
UN Foundation ‎

V. THE PANEL’S ENGAGEMENT ‎

As per its terms of reference, the Panel engaged widely with ‎governments, private sector, academia, the technical ‎community, civil society, and inter-governmental organisations ‎across the world. The aims of its engagement strategy were to ‎provide stakeholders with an opportunity to contribute ‎meaningfully to the reflection process of the Panel; catalyse ‎multi-stakeholder and interdisciplinary cooperation on digital ‎issues; and co-create the report’s recommendations with ‎stakeholders, with a view to building buy-in for their ‎implementation. ‎

The engagement strategy was guided by three main tenets:
• Breadth and inclusivity: The Panel aimed to consult as broadly as possible across regions, demographics, topics, sectors and disciplines. The process strove to be as inclusive as possible of diverse groupings.
• Depth: The Panel worked with experts and conducted ‘deep dives’ on specific focus areas through virtual or in-person consultations as well as bilateral interviews.
• Interdisciplinarity: Many digital challenges are currently addressed in policy or agency silos; to promote more holistic approaches, the Panel’s activities invited interdisciplinary and multisectoral perspectives to the table.

The Panel was conscious of the importance of avoiding duplication of efforts and ‘consultation fatigue’ amongst digital stakeholders. Building on existing networks and policy forums, engagement activities took place as close as possible to stakeholders on the ground. The Panel also consciously drew on the lessons of previous commissions and existing working groups, while harnessing opportunities to connect the issues in new ways.

ACTIVITIES
Conducting a global consultation in the span of a few months would not have been possible without the immense support of dozens of organisations and governments worldwide that lent their resources and networks to the Panel.

Engagement proceeded in two phases. In the ‘listening’ phase, in the autumn of 2018, the Panel actively collected stakeholders’ concerns and ideas on digital cooperation; this feedback informed the Panel’s scoping of its work and formed the basis of the nine “enablers of digital cooperation” articulated midway through the Panel process. In the spring of 2019, the focus shifted to ‘road-testing’ the Panel’s emerging recommendations: stakeholders from across sectors were invited to comment on and critique the draft recommendations with a view to improving them.

Overall, the Panel and its Secretariat carried out 125 ‎engagement activities; these included participating in 44 digital ‎policy events and organising 10 thematic workshops (on ‎subjects such as values and principles, digital trust and security, ‎data, digital health), 28 briefings to various stakeholder ‎communities, 11 visits to digital hubs and capitals, 22 virtual ‎meetings with subject-matter experts, and 10 townhall meetings ‎open to the public. In addition, the Panel held a large number of ‎bilateral meetings with a variety of stakeholders. ‎

A virtual window for consultation was opened via the Panel’s ‎website. In October 2018, an open Call for Contributions was ‎launched; by January 2019, when the call closed, 167 ‎stakeholders had sent written submissions. Additionally, an ‎informal public opinion survey was set up to capture the views ‎of stakeholders on the digital issues of greatest concern. ‎

In total, the Panel and its Secretariat engaged with over 4,000 ‎individuals representing 104 states, 80 international ‎organisations, 203 private sector companies, 125 civil society ‎organisations, 33 technical organisations, and 188 think tanks ‎and academic institutions. ‎

Our analysis of approximately 1,200 core participants in our engagement process finds that 40% were women; 3% were aged under 30; and the regional breakdown was 20% North America, 19% Europe, 13% Sub-Saharan Africa, 8% Latin America and the Caribbean, 7% South and Central Asia, 7% Southeast and East Asia, and 4% Middle East (the rest had a global remit).

These results show that we did not wholly avoid a skew towards ‎male and Western voices, though they compare favourably with ‎many such exercises in the technology sector. They indicate the ‎continuing need for digital cooperation mechanisms to make ‎specific efforts to ensure inclusivity, and highlight in particular ‎the challenge of bringing the “digital native” youth generation ‎into digital policymaking. ‎

PARTNERS ‎
The Panel would like to thank the following partners for their ‎generous assistance and support to its engagement process: ‎

Access Now ‎
African Union Commission ‎
Alibaba Group ‎
APEC China Business Council (ACBC) ‎
Ministry of Foreign Affairs and Worship of Argentina ‎
Asia Pacific Network Information Centre (APNIC) ‎
Association for Progressive Communication (APC) ‎
Government of Benin ‎
Botnar Foundation ‎
Business Council for the United Nations ‎
Consulate General of Canada in San Francisco ‎
CERN ‎
China Chamber of International Commerce (CCOIC) ‎
Data2x ‎
Digital Empowerment Foundation ‎
Digital Impact Alliance (DIAL) ‎
Diplo Foundation ‎
Delegation of the European Union to the United Nations and ‎Other International Organisations in Geneva ‎
Direction interministérielle du numérique et du système ‎d’information et de communication de l’Etat, France ‎
Freedom Online Coalition ‎
Gateway House ‎
Geneva Internet Platform ‎
Global Commission on the Stability of Cyberspace
Global Partners Digital ‎
Global Partnership on Sustainable Development Data ‎
Global Tech Panel ‎
GSM Association (GSMA) ‎
Hangzhou Normal University ‎
Impact Hub Basel ‎
Infosys ‎
International Chamber of Commerce (ICC)‎
International Telecommunication Union (ITU)
Internet Corporation for Assigned Names and Numbers (ICANN) ‎
iSPIRT ‎
JD.com ‎
JSC National ICT Holding Zerde ‎
Government of Kazakhstan ‎
King’s College London ‎
Lee Kuan Yew School of Public Policy
New America Foundation ‎
Nokia ‎
Observer Research Foundation ‎
Office of Denmark’s Technology Ambassador ‎
Omidyar Foundation ‎
Organisation for Economic Co-operation and Development (OECD)
Organisation Internationale de la Francophonie (OIF) ‎
Schwarzman Scholars, Tsinghua University ‎
Ministry of Foreign Affairs of Singapore ‎
Stanford University ‎
Tata Consultancy Services, Mumbai ‎
United Nations Conference on Trade and Development ‎‎(UNCTAD) ‎
United Nations Economic Commission for Latin America and ‎the Caribbean (ECLAC) ‎
United Nations Educational, Scientific and Cultural Organization ‎‎(UNESCO) ‎
United Nations Children’s Fund (UNICEF) ‎
United Nations Global Pulse ‎
United Nations Institute for Disarmament Research (UNIDIR) ‎
United Nations Office at Geneva ‎
United Nations University ‎
University of California, Berkeley ‎
University of Geneva ‎
Verizon Wireless ‎
Web Summit ‎
Western Balkans Digital Summit ‎
Wonder Ventures ‎
World Bank ‎
World Economic Forum ‎
World Economic Forum Center for the Fourth Industrial ‎Revolution, San Francisco ‎
World Government Summit, Dubai ‎
World Intellectual Property Organization (WIPO) ‎
World Internet Conference ‎
World Summit AI

VI. PRINCIPLES AND FUNCTIONS OF DIGITAL COOPERATION ‎

In the course of our outreach, many stakeholders suggested ‎principles to which digital cooperation mechanisms should ‎adhere and functions they should seek to serve. Drawing also ‎on work of previous initiatives in these areas, this annex ‎summarises the principles and functions we suggest are most ‎important to guide the future evolution of digital cooperation. ‎

KEY PRINCIPLES OF DIGITAL COOPERATION
• Consensus-oriented: Decisions should be made in ways that seek consensus among public, private and civic stakeholders.
• Polycentric: Decision-making should be highly distributed and loosely yet efficiently coordinated across specialised centres.
• Customised: There is generally no “one size fits all” solution; different communities can implement norms in their own way, according to circumstances.
• Subsidiarity: Decisions should be made as locally as possible, closest to where the issues and problems are.
• Accessible: It should be as easy as possible to engage in digital cooperation mechanisms and policy discussions.
• Inclusive: Decisions should be inclusive and democratic, representing diverse interests and accountable to all stakeholders.
• Agile: Digital cooperation should be dynamic, iterative and responsive to fast-emerging policy issues.
• Clarity in roles and responsibility: Clear roles and shared language should reduce confusion and support common understanding about the responsibilities of actors involved in digital cooperation (governments, private sector, civil society, international organisations and academia).
• Accountable: There should be measurable outcomes, accountability and means of redress.
• Resilient: Power distribution should be balanced across sectors, without centralised top-down control.
• Open: Processes should be transparent, with minimum barriers to entry.
• Innovative: It should always be possible to innovate new ways of cooperating, in a bottom-up way, which is also the best way to include diverse perspectives.
• Tech-neutral: Decisions should not lock in specific technologies but allow for innovation of better and context-appropriate alternatives.
• Equitable outcomes: Digital cooperation should maximise the global public interest (internationally) and be anchored in broad public benefit (nationally).

KEY FUNCTIONS OF DIGITAL COOPERATION
• Leadership – generating political will among leaders from government, business, and society, and providing an authoritative response to digital policy challenges.
‎• Deliberation – providing a platform for regular, comprehensive and impactful deliberations on digital issues with the active and effective participation of all affected stakeholders.
• Ensuring inclusivity – ensuring active and meaningful participation of all stakeholders, for example by linking with existing and future bottom-up networks and initiatives.214
• Evidence and data – monitoring developments and identifying trends to inform decisions, including by analysing existing data sources.
• Norms and policy making – building consensus among diverse stakeholders, respecting the roles of states and international organisations in enacting and enforcing laws.
• Implementation – following up on policy discussions and agreements.
• Coordination – creating shared understanding and purpose across bodies in different policy areas and at different levels (local, national, regional, global), ensuring synchronisation of efforts, interoperability and policy coherence, and the possibility of voluntary coordination between interested stakeholder groups.
• Partnerships – catalysing partnerships around specific issues by providing opportunities to network and collaborate.
• Support and capacity development – strengthening capacity development, monitoring digital developments, identifying trends, informing policy actors and the public of emerging risks and opportunities, and providing data for evidence-based decision making – allowing traditionally marginalised persons or other less-resourced stakeholders to actively participate in the system.
• Conflict resolution and crisis management – developing the skills, knowledge and tools to prevent and resolve disputes and connect stakeholders with assistance in a crisis.

NOTES

‎1 See Annex I for the Panel’s terms of reference. ‎
‎2 United Nations Commission on Science and Technology for Development, Mapping of international Internet ‎public policy issues, 17 April 2015, ‎
E/CN.16/2015/CRP.2, available at https://unctad.org/meetings/en/SessionalDocuments/ecn162015crp2_en.pdf
‎3 GIP Digital Watch Observatory, May 2019, available at https://dig.watch
‎4 AI Impacts, “Trends in the cost of computing”, 10 March 2015, available at https://aiimpacts.org/trends-in-the-cost-‎of-computing/ ‎
‎5 Internet World Stats, “World Internet users and population statistics”, March 2019, available at ‎https://www.internetworldstats.com/stats.htm; and IoT
Analytics, “State of the IoT 2018: Number of IoT devices now at 7B – Market accelerating”, August 2018, available ‎at https://iot-analytics.com/state-of-the-iot-update-q1-q2-2018-number-of-i...
‎6 The World Bank, Global Findex Database 2017, April 2018, available at https://globalfindex.worldbank.org
‎7 Council on Foreign Relations, “Hate Speech on Social Media: Global Comparisons”, 11 April 2019, available at ‎https://www.cfr.org/backgrounder/ hate-speech-social-media-global-comparisons; United Nations General ‎Assembly, resolution on the right to privacy in the digital age (A/RES/73/179), December 2018, available at ‎https://www.un.org/en/ga/search/view_doc.asp?symbol=A/RES/73/179; FireEye, M-Trends 2019 (Annual Threat ‎Report), 2019, available at https://content.fireeye.com/m-trends; Freedom House, “Freedom on the Net 2018: ‎The rise of digital authoritarianism”, October 2018, ‎
available at https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digita...
‎8 Internet World Stats, “World Internet users and population statistics”, March 2019, available at ‎https://www.internetworldstats.com/stats.htm
‎9 The International Telecommunication Union (ITU) is one of the many entities that recognise the multiple ‎dimensions of the digital divide and work toward ‎
facilitating digital inclusion of marginalised groups. More details at ITU, Digital Inclusion, available at ‎https://www.itu.int/en/ITU-D/Digital-Inclusion/ Pages/default.aspx ‎
‎10 Our public call for contributions received a number of suggestions on values, available at ‎www.digitalcooperation.org/responses. We also engaged a ‎
diverse set of stakeholders and experts to elicit relevant values and how they could be embedded in policy ‎approaches and cooperation architectures. ‎
Our engagement built on a recent surge of interest in values and ethics in the digital context: see Future of Life ‎Institute, Asilomar Principles, 2017, ‎
available at https://futureoflife.org/ai-principles/; WEF White Paper on Values, Ethics and Innovation, August 2018, ‎available at http://www3.weforum. org/docs/WEF_WP_Values_Ethics_Innovation_2018.pdf ; Montreal ‎Declaration for a responsible development of AI, ‎
‎2018, available at https://www.montrealdeclaration-responsibleai.com/the-declaration; the World Wide Web ‎Foundation’s Contract for the Web, ‎
available at https://contractfortheweb.org; the EU High-Level Expert Group on Artificial Intelligence’s Ethics ‎Guidelines for Trustworthy Artificial ‎
Intelligence, 2019, available at https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top
11 WEF report “Our Shared Digital Future: Building an Inclusive, Trustworthy and Sustainable Digital Society”, December 2018, available at http://www3.weforum.org/docs/WEF_Our_Shared_Digital_Future_Report_2018.pdf
‎12 For an introduction to the underlying technology trends and impact on the economy, see “Vectors of Digital ‎Transformation”, OECD Digital Economy
Papers: January 2019, No. 273. ‎
‎13 World Bank, World Development Report 2016: Digital Dividends, “How the Internet Promotes Development”, ‎‎2016. ‎
‎14 Financial inclusion is defined as the ability to “access and use a range of appropriate and responsibly provided ‎financial services offered in a well-‎
regulated environment.” (UNCDF, Financial Inclusion, available at https://www.uncdf.org/financial-inclusion) ‎
15 World Bank, The Global Findex Database 2017: Measuring Financial Inclusion and the Fintech Revolution, 2017, available at https://globalfindex.worldbank.org/
‎16 Mobile money serves as a tool for financial inclusion, allowing those without traditional bank accounts to ‎participate in the economy on a greater level ‎
‎(McKinsey, “Mobile money in emerging markets: The business case for financial inclusion”, March 2018). ‎
‎17 Women Deliver, “If We Want to Go Far, We Must Go Together”, 21 January 2019, available at ‎https://womendeliver.org/2019/if-you-want-to-go-far-you-must-go-together/
‎18 Financial Stability Board, “FinTech and market structure in financial services: Market developments and potential ‎financial stability implications”, ‎
‎14 February 2019, available at https://www.fsb.org/wp-content/uploads/P140219.pdf
‎19 The Economist, “Financial inclusion in the rich world”, 4 May 2018, available at ‎https://www.economist.com/special-report/2018/05/04/financial-inclusion-...
‎20 M-Pesa is a mobile money service that allows users to transfer cash using their mobile phone numbers without ‎the need for a bank account. It serves ‎
over 17 million Kenyans and offers loan and savings products as well. See The Economist, “Why does Kenya lead ‎the world in mobile money?”. 02 March ‎
‎2015, available at https://www.economist.com/the-economist-explains/2015/03/02/why-does-ken...‎world-in-mobile-money. ‎
21 Ming Zeng, “Smart Business: What Alibaba’s Success Reveals about the Future of Strategy”, Harvard Business Review 2018, pp 58-59.
‎22 Harvard Business School, “Replicating MPESA: Lessons from Vodafone (Safaricom) on why mobile money fails ‎to gain traction in other markets”, 20 ‎
November 2016, available at https://rctom.hbs.org/submission/replicating-mpesa-lessons-from-‎vodafonesafaricom-on-why-mobile-money-fails-to-gain-traction-in-other-markets/ ‎
‎23 Accion, “The game-changing innovation that could bring financial services to millions in India”, 30 October 2017, ‎available at https://www.accion.org/ the-game-changing-innovation-that-could-bring-financial-services-to-‎millions-in-india ‎
‎24 GSM Association, State of the Industry Report on Mobile Money 2018, available at https://www.gsma.com/r/wp-‎content/uploads/2019/05/GSMA-State-of-the-Industry-Report-on-Mobile-Money-2018.pdf ‎
‎25 World Bank, Global ID4D Dataset, 2017, and World Bank, ID4D-Findex survey. ‎
26 McKinsey Global Institute (MGI), “Digital Identity: A Key to Inclusive Growth”, January 2019. The report focuses on 7 diverse economies: Brazil, China, Ethiopia, India, Nigeria, the United Kingdom, and the United States.
‎27 See for example Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the ‎Poor (St. Martin's Press, 2018), excerpt available at https://us.macmillan.com/excerpt?isbn=9781250074317
‎28 ID4D, available at http://id4d.worldbank.org/
‎29 MOSIP, available at https://www.mosip.io/
‎30 Luohan Academy, “Digital Technology and Inclusive Growth”, 2019, available at ‎https://gw.alipayobjects.com/os/antfincdn/DbLN6yXw6H/Luohan_ Academy-‎Report_2019_Executive_Summary.pdf ‎
‎31 World Bank: “E-commerce Participation and Household Income Growth in Taobao Villages”, April 2019, ‎available at http://documents.worldbank.org/ curated/en/839451555093213522/pdf/E-Commerce-Participation-‎and-Household-Income-Growth-in-Taobao-Villages.pdf; World Bank, ‎
‎“E-commerce for poverty alleviation in rural China: from grassroots development to public-private partnerships”, 19 ‎March 2019, available at http:// beta-blogs.worldbank.org/eastasiapacific/e-commerce-poverty-alleviation-rural-‎china-grassroots-development-public-private-partnerships; World Development Report 2016, “E-commerce with ‎Chinese characteristics: inclusion, efficiency and innovation in Taobao villages”. ‎
‎32 United Nations Conference on Trade and Development, Information Economy Report 2015, Unlocking the ‎Potential of E-Commerce for Developing ‎
Countries, 2015, available at https://unctad.org/en/PublicationsLibrary/ier2015_en.pdf
‎33 “Riding the Big Data Wave in 2017”, Medium, 17 April 2017, available at ‎https://medium.com/@Byte_Academy/due-to-an-exponential-increase-in-data-in-the-21st-century-a-new-term-‎big-data-was-coined-few-8f02a5973023 ‎
‎34 United Nations, The Sustainable Development Goals Report: 2018. ‎
‎35 World Health Organization, “Civil registration: why counting births and deaths is important”, 30 May 2014, ‎available at https://www.who.int/news-room/ fact-sheets/detail/civil-registration-why-counting-births-and-deaths-‎is-important ‎
‎36 The World Bank, PovcalNet, available at http://iresearch.worldbank.org/PovcalNet/povOnDemand.aspx
‎37 This definition is substantially drawn from Recital 26 of the GDPR which defines anonymized data as “data ‎rendered anonymous in such a way that the ‎
data subject is not or no longer identifiable.” ‎
‎38 United States Agency for International Development, “Fighting Ebola with Information”, available at ‎http://www.digitaldevelopment.org/fighting-ebola-
information ‎
‎39 World Health Organization, Global Strategy on Digital Health, 26 March 2019, available at ‎https://extranet.who.int/dataform/upload/surveys/183439/ ‎files/Draft%20Global%20Strategy%20on%20Digital%20Health.pdf ‎
40 CGIAR Platform for Big Data in Agriculture, available at https://bigdata.cgiar.org/
‎41 Jason Plautz, “Cheap, Portable Sensors are Democratizing Air-Quality Data”, Wired, 7 November 2018, available ‎at https://www.wired.com/story/cheap-portable-sensors-are-democratizing-air...
‎42 For more information on global digital public goods, see: https://digitalpublicgoods.net/public-goods/
‎43 Paul Krugman and Robin Wells, Microeconomics (Worth Publishers, New York, NY, 2013). ‎
‎44 See “About India Stack”, available at: https://indiastack.org/about/
45 Pathways for Prosperity Commission, 2018, available at https://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx
46 WIRED, “Global Internet Access Is Even Worse Than Dire Reports Suggest”, 23 October 2018, available at https://www.wired.com/story/global-internet-access-dire-reports/
‎47 The index measures 84 countries from 2018-2019. The Economist, The Inclusive Internet Index 2019, available ‎at https://theinclusiveinternet.eiu.com/
‎48 Ibid. ‎
‎49 In India nearly two-thirds of urban areas have connectivity, compared to just over a fifth of rural regions. See The ‎Internet and Mobile Association of India (IAMAI), Mobile Internet Report, 2017. ‎
‎50 World Economic Forum, “Delivering Digital Infrastructure Advancing the Internet Economy”, April 2014, available ‎at http://www3.weforum.org/docs/ WEF_TC_DeliveringDigitalInfrastructure_InternetEconomy_Report_2014.pdf ‎
‎51 The Alliance for Affordable Internet, available at https://a4ai.org/
‎52 Broadband Commission for Sustainable Development, available at ‎https://www.broadbandcommission.org/Pages/default.aspx
‎53 UNICEF, "Project Connect, in Partnership with UNICEF’s Office of Innovation, Launches First of Its Kind, ‎Interactive Map Visualizing the Digital Divide in Education", 02 November 2017, available at ‎http://unicefstories.org/2017/11/02/schoolmappingprojectconnect/
‎54 World Bank, “Connecting for Inclusion: Broadband Access for All”, available at ‎http://www.worldbank.org/en/topic/digitaldevelopment/brief/connecting-fo...
‎55 IEEE Spectrum, "How Project Loon Built the Navigation System That Kept Its Balloons Over Puerto Rico", 8 March ‎‎2018, available at https://spectrum. ieee.org/tech-talk/telecom/internet/how-project-loon-built-the-navigation-‎system-that-kept-its-balloons-over-puerto-rico ‎
‎56 Reuters, "Amazon plans to launch over 3,000 satellites to offer broadband internet", 04 April 2019, available at ‎https://www.reuters.com/article/ us-amazon-com-broadband/amazon-plans-to-launch-over-3000-satellites-to-‎offer-broadband-internet-idUSKCN1RG1YW; Reuters, "U.S. regulator approves SpaceX plan for broadband ‎satellite services", 29 March 2018, available at https://www.reuters.com/article/us-spacex-fcc/u-s-regulator-‎approves-spacex-plan-for-broadband-satellite-services-idUSKBN1H537E
‎57 The Jakarta Post, "Govt to expand broadband connectivity as internet use grows", 20 February 2018, available at ‎https://www.thejakartapost.com/ news/2018/02/20/govt-to-expand-broadband-connectivity-as-internet-use-‎grows.html ‎
‎58 ITU, “Universal Service Fund and Digital Inclusion for All Study”, June 2013, available at ‎https://www.itu.int/en/ITU-D/Conferences/GSR/Documents/ ITU%20USF%20Final%20Report.pdf ‎
‎59 One example of building internet access around community needs, in this case health, is a collaboration ‎between the Basic Internet Foundation and health centres in Tanzania; see Vision 2030, available at ‎https://www.vision2030.no/index.php/en/visjon2030-projects/non-discrimin.... The ‎Panel has been informed that a ‘common bid’ for connectivity is being prepared by ITU, UNICEF and the World ‎Bank. ‎
‎60 BBC, Video: Internet access in Africa - Are mesh networks the future?, 28 March 2019, available at ‎https://www.bbc.co.uk/news/world-africa-47723967. There is another example from rural England of the power ‎of a cooperative approach: farmers waived right of way charges and volunteered to help dig up trenches for ‎fibre optic cable in exchange for shares in the network. See ISPreview, “B4RN Set to Hit 5000 Rural UK FTTH ‎Broadband Connections Target”, 11 September 2018, available at ‎https://www.ispreview.co.uk/index.php/2018/09/b4rn-set-to-hit-5000-rural.... ‎html ‎
‎61 Alliance for Affordable Internet, available at https://a4ai.org/rethinking-affordable-access/
‎62 Written contribution, Centre for Socio-Economic Development. This does not take away from the tremendous ‎role that digital technologies have played in improving the lives of people with disabilities. ‎
63 UNESCO, “Multilingualism in Cyberspace: Indigenous Languages for Empowerment”, 27-28 November 2015, available at http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/Events/multilingualism_in_cyberspace_concept_paper_en.pdf; Brookings Institution, “Rural and urban America divided by broadband access”, 18 July 2016, available at https://www.brookings.edu/blog/techtank/2016/07/18/rural-and-urban-ameri...
‎64 ITU Facts and Figures 2017, available at https://www.itu.int/en/ITU-‎D/Statistics/Documents/facts/ICTFactsFigures2017.pdf ‎
‎65 Pathways for Prosperity Commission, Digital Lives: Meaningful Connections for the Next 3 Billion, 2018, ‎available at https://pathwayscommission.bsg. ox.ac.uk/sites/default/files/2018-11/digital_lives_report.pdf ‎
66 Recognising the importance of marketing in addressing socio-cultural issues, the Unstereotype Alliance, an initiative convened by UN Women, unites leaders from across business, technology and creative industries to use marketing-based techniques to combat gender stereotypes. Available at http://www.unstereotypealliance.org/en/about
‎67 The OECD and WTO-led inter-agency Task Force on International Trade Statistics is one example of work being ‎undertaken by the OECD and others to update traditional metrics of macroeconomic change and trade flows ‎‎(OECD, Toward a Framework for Measuring the Digital Economy, 19-21 September 2018). The G20 Toolkit for ‎Measuring the Digital Economy identifies methodologies to measure the digital economy as well as gaps and ‎challenges surrounding measurement; (G20 Digital Economy Ministerial Declaration, available at ‎http://www.g20.utoronto.ca/2018/2018-08-24-digital. html#annex3). The ITU’s ICT Development Index (IDI) ‎measures the level and evolution over time of ICT developments across developed and developing countries ‎‎(available at https://www.itu.int/en/ITU-D/Statistics/Pages/publications/mis/methodolo...). The EIU Index ‎covers 100 countries as of 2019 using benchmarks of national digital inclusion across readiness, relevance, ‎affordability, and availability (Ibid). ‎
‎68 The World Bank, World Development Report: The Changing Nature of Work, 2019. ‎
‎69 Thereza Balliester and Adam Elsheikhi, “The Future of Work: A Literature Review”, March 2018, available at ‎https://www.ilo.org/wcmsp5/groups/public/- --dgreports/---inst/documents/publication/wcms_625866.pdf ‎
‎70 Carl Benedikt Frey and Michael A. Osborne, The Future Of Employment: How Susceptible Are Jobs To ‎Computerisation? (Oxford Martin School, 2013), available at ‎https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Emplo...
‎71 Towards data science, “Humanities Graduates Should Consider Data Science”, 31 August 2017, available at ‎https://towardsdatascience.com/ humanities-graduates-should-consider-data-science-d9fc78735b0c ‎
‎72 Tim Noonan, Director, International Trade Union Confederation, interview, 25 January 2019. ‎
‎73 CNBC, “The future of work won't be about college degrees, it will be about job skills”, 31 October 2018, available ‎at https://www.cnbc.com/2018/10/31/ the-future-of-work-wont-be-about-degrees-it-will-be-about-skills.html ‎
‎74 The Guardian, “All flexibility, no security: why conservative think tanks are wrong on the gig economy”, 23 ‎January 2019, available on https://www. theguardian.com/business/grogonomics/2019/jan/24/all-flexibility-no-‎security-why-conservative-thinktanks-are-wrong-on-the-gig-economy ‎
‎75 International Labour Organization, “Helping the gig economy work better for gig workers”, available at ‎https://www.ilo.org/washington/WCMS_642303/ lang--en/index.htm ‎
‎76 Klaus Schoemann, “Digital Technology to Support the Trade Union Movement”, Open Journal of Social Sciences, ‎Vol. 06 No. 01 (2018), available at https://file.scirp.org/Html/5-1761684_81823.htm
‎77 WIPO, “The informal economy in developing nations: a hidden engine of growth”, June 2017, available at ‎https://www.wipo.int/wipo_magazine/ en/2017/03/article_0006.html ‎
‎78 OECD, “Tax and Digitalisation”, March 2019, available at www.oecd.org/going-digital/tax-and-digitalisation.pdf
‎79 South Center, “The WTO’s Discussions on Electronic Commerce”, January 2017, available at ‎https://www.southcentre.int/wp-content/uploads/2017/01/ AN_TDP_2017_2_The-WTO%E2%80%99s-‎Discussions-on-Electronic-Commerce_EN.pdf ‎
‎80 European Commission, "76 WTO partners launch talks on e-commerce", 25 January 2019, available at ‎http://trade.ec.europa.eu/doclib/press/index. cfm?id=1974 ‎
‎81 UNCTAD, Trade and Development Report 2018: Power, Platforms and the Free Trade Delusion, Chapter III. ‎
‎82 OECD, “Vectors of Digital Transformation” (OECD Publishing, Paris, 22 January 2019). ‎
‎83 Michael Mandel, Data, Trade and Growth, Progressive Policy Institute, April 2014, available at ‎https://www.progressivepolicy.org/wp-content/ uploads/2014/04/2014.04-Mandel_Data-Trade-and-Growth.pdf ‎
‎84 Parminder Jeet Singh, “Digital Industrialisation in Developing Countries”, paper for the Commonwealth ‎Secretariat, 2018. ‎
‎85 OECD, “Tax and Digitalisation”, March 2019, available at www.oecd.org/going-digital/tax-and-digitalisation.pdf
86 OECD/G20 Base Erosion and Profit Shifting Project, Tax Challenges Arising from Digitalisation, Interim Report 2018, available at: https://read.oecd-ilibrary.org/taxation/tax-challenges-arising-from-digitalisation-interim-report_9789264293083-en#page3; Esquire, “Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking Machine”, available at https://www.esquire.com/news-politics/a15895746/bust-big-tech-silicon-va...
‎87 OECD, Base Erosion and Profit Shifting, available at https://www.oecd.org/tax/beps/
‎88 Bloomberg Tax, “What’s Next for Countries Going it Alone on Digital Taxes”, 21 March, 2019, available at ‎https://news.bloombergtax.com/daily-tax-report-international/whats-next-...‎tax ‎
‎89 KPMG, “Taxation of Digital Assets: New Laws Issued”, 15 May 2018, available at ‎https://home.kpmg/xx/en/home/insights/2018/05/tax-news-flash-issue-380.html
‎90 Jean Tirole, “Regulating Disrupters”, Project Syndicate, 9 January 2019, available at www.project-‎syndicate.org/onpoint/regulating-the-disrupters-by-jean-tirole-2019-01?barrier=accesspaylog. ‎
‎91 For more on these processes, see Jean Tirole, Economics for the Common Good (Princeton University Press, ‎‎2016). ‎
‎92 Since 1979, the International Conference of Data Protection & Privacy Commissioners (ICDPPC) has provided a ‎forum for connecting the efforts of 122 data protection and privacy authorities from across the globe; and since ‎‎2001, the International Competition Network (ICN) has provided a specialised yet informal venue for ‎maintaining regular dialogue across the global antitrust community to build procedural and substantive ‎convergence and address practical competition concerns for the benefit of consumers and economies. ‎
‎93 The National Institute for Transparency, Access to Information and Personal Data Protection (INAI) is an ‎autonomous constitutional body responsible for upholding the right to access to public information. It is also in ‎charge of upholding the right to protection of personal data held by the public and the private sectors. See ‎http://www.networkforintegrity.org/continents/america/instituto-nacional...‎informacion-y-proteccion-de-datos-personales-inai/ ‎
‎94 OECD, “Strengthening digital government”, OECD Going Digital Policy Note, OECD Paris, March 2019, available ‎at www.oecd.org/going-digital/ strengthening-digital-government.pdf ‎
‎95 See Creators, available at https://www.creatorspad.com/pages/govtech-program
‎96 Infocomm Media Development Corporation, available at https://www.imda.gov.sg/imtalent/training-and-courses
‎97 Minister Omar Al Olama, Remarks at the World Government Summit, 10 February 2019. ‎
‎98 The Verge, “The mass shooting in New Zealand was designed to spread on social media”, 15 March 2019, ‎available on https://www.theverge. com/2019/3/15/18266859/new-zealand-shooting-video-social-media-‎manipulation ‎
‎99 Myanmar went from minimal connectivity in 2013 to virtually half the population in 2016 owning smartphones. ‎Facebook became the dominant communications platform almost by accident. See Reuters, “Why Facebook is ‎losing the war on hate speech in Myanmar”, 15 August 2018, available at ‎https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/
‎100 National Public Radio, "#Gamergate Controversy Fuels Debate On Women And Video Games", 24 September ‎‎2014, available at https://www.npr.org/ sections/alltechconsidered/2014/09/24/349835297/-gamergate-‎controversy-fuels-debate-on-women-and-video-games ‎
‎101 The Guardian, “Instagram bans 'graphic' self-harm images after Molly Russell's death”, 7 February 2019, ‎available at https://www.theguardian.com/ technology/2019/feb/07/instagram-bans-graphic-self-harm-images-‎after-molly-russells-death ‎
‎102 Hindustan Times, “24-yr-old commits suicide after being bullied for dressing up as a woman”, 19 October 2019, ‎available at ‎
https://www.hindustantimes.com/india-news/24-yr-old-commits-suicide-afte...‎woman/story- 8PlWvf0fMwcd72A5Tp8tBI.html ‎
‎103 Ofcom and UK Information Commissioner’s Office, “Internet users’ experience of harm online: summary of ‎survey research”, July 2018, available at https://www.ofcom.org.uk/__data/assets/pdf_file/0018/120852/Internet-‎harm-research-2018-report.pdf ‎
‎104 NSPCC, “Net Aware report 2017: ‘Freedom to express myself safely’”, 04 September 2018, available at ‎https://learning.nspcc.org.uk/research-resources/2017/net-aware-report-2...‎safely/ ‎
‎105 India alone had over 100 incidents in 2018. See Freedom House, “Freedom on the Net 2018”, October 2018, ‎available at https://freedomhouse.org/ sites/default/files/FOTN_2018_Final%20Booklet_11_1_2018.pdf ‎
‎106 United Nations, Office of the High Commissioner for Human Rights, “Human Rights Appeal 2019”, 17 January ‎‎2019, available at https://www.ohchr.org/ Documents/Publications/AnnualAppeal2019.pdf ‎
‎107 Electronic Frontier Foundation, “India's Supreme Court Upholds Right to Privacy as a Fundamental Right”, 27 ‎August 2017, available at https://www.eff. org/deeplinks/2017/08/indias-supreme-court-upholds-right-privacy-‎fundamental-right-and-its-about-time ‎
‎108 Written contribution, the Paradigm Initiative. The bill has not received presidential assent. ‎
‎109 United Nations Children’s Fund, United Nations Global Compact, Save the Children, “Children’s Rights and ‎Business Principles”, 03 March 2012, available at ‎https://www.unglobalcompact.org/docs/issues_doc/human_rights/CRBP/Childr...
‎110 United Nations Educational, Scientific and Cultural Organization, “Steering AI and Advanced ICTs for ‎
Knowledge Societies”, available at https://en.unesco.org/system/files/unesco-‎steering_ai_for_knowledge_societies.pdf ‎
111 Council of Europe, Freedom of Expression, Standard Setting, available at https://www.coe.int/en/web/freedom-expression/internet-standard-setting; and European Court of Human Rights decisions, for example, in the case of Ahmet Yildirim v. Turkey, available at https://hudoc.echr.coe.int/eng#{%22itemid%22:[%22001-115705%22]}
‎112 IFEX, “Saudi Arabia arrests at least 13 more human rights defenders”, 14 April 2019. ‎
‎113 UN Global Compact “Guiding Principles for Business and Human Rights: Implementing the United Nations ‎‎“Protect, Respect and Remedy” Framework”, 2011, available at https://www.unglobalcompact.org/library/2
‎114 The Business & Human Rights Resource Centre, available at https://www.business-humanrights.org/
‎115 A Corporate Accountability Index is published annually by Ranking Digital Rights. Available at ‎https://rankingdigitalrights.org/
‎116 Carnegie UK Trust, “Reducing harm in social media through a duty of care”, 08 May 2018, available at ‎https://www.carnegieuktrust.org.uk/blog/ reducing-harm-social-media-duty-care/ ‎
‎117 Pew Research Trust, “Online Harassment 2017”, 11 July 2017, available at ‎https://www.pewinternet.org/2017/07/11/online-harassment-2017/
‎118 Amanda and Noel Sharkey, “Granny and the robots: ethical issues in robot care for the elderly”, University of ‎Sheffield, 03 July 2010. ‎
‎119 United Nations Children’s Fund, “One in Three: Internet Governance and Children’s Rights”, discussion paper, ‎‎2016. ‎
‎120 U.S. Government Publishing Office, “Electronic Code of Federal Regulation”, 26 April 2019, available at ‎https://www.ecfr.gov/cgi-bin/text-idx?SID=49 ‎‎39e77c77a1a1a08c1cbf905fc4b409&node=16%3A1.0.1.3.36&rgn=div5; UK Information Commissioner’s ‎Office, “Age appropriate design: a code of practice for online services”, 15 April 2019, available at ‎https://ico.org.uk/media/about-the-ico/consultations/2614762/age-appropr...‎consultation.pdf ‎
‎121 Elon University, “Survey X: Artificial Intelligence and the Future of Humans”, 2018, available at ‎
http://www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Futur...
‎122 Pedro Domingos, The Master Algorithm: How the quest for the ultimate learning machine will remake our world ‎‎(Basic Books, 2015). ‎
123 Cathy O’Neil, Weapons of Math Destruction (The Crown Publishing Group, 2016); Digital Society, “Human rights in the robot age - Challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality”, 11 October 2017; Safiya Umoja Noble, “Algorithms of Oppression – How Search Engines Reinforce Racism”, 08 January 2018, available at https://nyupress.org/9781479837243/algorithms-of-oppression/
‎124 Investors and founders are finally waking up to the gender problem in tech after high-profile scandals and ‎walkouts by employees at companies such as Google. See Aliya Ram, “Tech investors put #MeToo clauses in ‎deals”, Financial Times, 22 March 2019. ‎
‎125 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin's ‎Press, 2018); excerpt available at https:// us.macmillan.com/excerpt?isbn=9781250074317 ‎
‎126 Harvard Law Today, “Algorithms and their unintended consequences for the poor”, 07 November 2018, ‎available at https://today.law.harvard.edu/ algorithms-and-their-unintended-consequences-for-the-poor/? ‎fbclid=IwAR2yLUMpEYj8YKhvZDQktU0LNHNDateRtqVBgZHW45uHMEYubyQr36h08H8 ‎
‎127 Institute of Electrical and Electronics Engineers, “Ethically Aligned Design: A Vision for Prioritizing Human Well-‎being with Autonomous and Intelligent Systems”, available at https://standards.ieee.org/content/dam/ieee-‎standards/standards/web/documents/other/ead_v2.pdf ‎
‎128 One important discussion is on the applicability of International Humanitarian Law and accountability thereunder ‎for the use of military systems that might deploy AI. See Convention on Prohibitions or Restrictions on the Use of ‎Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate ‎Effects, “Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the ‎Area of Lethal Autonomous Weapons Systems”, 23 October 2018, available at ‎https://undocs.org/en/CCW/GGE.1/2018/3
‎129 Wendell Wallach, An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics ‎‎(Association for the Advancement of Artificial Intelligence, 2018). ‎
‎130 António Guterres, United Nations Secretary-General, remarks at the Web Summit, Lisbon, 05 November 2018, ‎available at https://www.un.org/sg/en/ content/sg/speeches/2018-11-05/remarks-web-summit; “Autonomous ‎weapons that kill must be banned, insists UN chief”, March 29, 2019, available at ‎https://news.un.org/en/story/2019/03/1035381
‎131 Provisions similar to the U.S. Fourth Amendment exist in several Constitutions and the 1980 OECD Guidelines ‎codified 8 principles that have influenced privacy regulations since then. These were updated in 2013 as ‎Guidelines on the Protection of Privacy and Trans-border Flows of Personal Data and are available at ‎https://www.oecd.org/sti/ieconomy/oecd_privacy_framework.pdf
132 David Dodwell, “The integration of mass surveillance and new digital technologies is unnerving”, The South China Morning Post, 17 February 2018, available at https://www.scmp.com/comment/insight-opinion/article/2133617/integration-mass-surveillance-and-new-digital-technologies; United Nations General Assembly, Summary of the Human Rights Council panel discussion on the right to privacy in the digital age, 19 December 2014.
‎133 The 2016 Privacy Shield framework (earlier Safe Harbor), which governs personal data flows between the U.S. ‎and the E.U. and Switzerland based on self-certification by companies, is an example of the former; available at ‎https://www.privacyshield.gov/welcome
134 National Public Radio, “A year after San Bernardino and Apple-FBI, where are we on encryption?”, 3 December 2016, available at https://www.npr.org/sections/alltechconsidered/2016/12/03/504130977/a-year-after-san-bernardino-and-apple-fbi-where-are-we-on-encryption?t=1532518316108
‎135 For example, Australia’s Telecommunications and Other Legislation Amendment (Assistance and Access) Act ‎‎2018, available at https://www.legislation. gov.au/Details/C2018A00148 ‎
‎136 U.S. Clarifying Lawful Overseas Use of Data Act (CLOUD Act, H.R. 4943). ‎
‎137 Brave automatically blocks ad trackers (ad trackers collect data from users’ online behaviours for the purpose of ‎boosting the effectiveness of ads and marketing campaigns). DuckDuckGo does not track user search ‎behaviours. ‎
‎138 UK Open Data Institute, “UK’s first ‘data trust’ pilots to be led by the ODI in partnership with central and local ‎government”, 20 November 2018, available at https://theodi.org/article/uks-first-data-trust-pilots-to-be-led-by-the-‎odi-in-partnership-with-central-and-local-government/ ‎
‎139 India Stack, “About Data Empowerment and Protection Architecture”, available at https://indiastack.org/depa/
‎140 United Nations Secretary-General, Address to the General Assembly, 25 September 2018, available at ‎https://www.un.org/sg/en/content/sg/ speeches/2018-09-25/address-73rd-general-assembly ‎
‎141 Mareike Möhlmann and Andrea Geissinger, Trust in the Sharing Economy: Platform-Mediated Peer Trust ‎‎( Cambridge University Press, July 2018), available at ‎https://www.researchgate.net/publication/326346569_Trust_in_the_Sharing_...‎Mediated_Peer_Trust ‎
‎142 European Political Strategy Centre, “Report from the High Level-Hearing: Preserving Democracy in the Digital ‎Age”, 22 February 2018, available at https://ec.europa.eu/epsc/sites/epsc/files/epsc_-_report_-‎‎_hearing_on_preserving_democracy_in_the_digital_age.pdf ‎
143 The Guardian, “You thought fake news was bad? Deep fakes are where truth goes to die”, 12 November 2018, available at: https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
144 Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (Houghton Mifflin Harcourt, 2018), available at https://aisuperpowers.com/
‎145 Here capacity is understood as “the ability of people, organizations, systems of organizations, and society as a ‎whole to define and solve problems, make informed choices, order their priorities, plan their futures, and to ‎implement programmes and projects to sustain them.” See Swiss Agency of Development and Cooperation, ‎‎“Glossary Knowledge Management and Capacity Development”, available at https://bit.ly/2FwORDl
‎146 5Rights Foundation, “5Rights Partner with BT to Co-Create with Children on Digital Literacy”, 2017, available at ‎https://5rightsfoundation.com/in-action/5rights-partner-with-bt-to-co-cr...
‎147 United Nations Volunteers, “Shape the Future of Volunteering: Online Conversations”, 25 April 2019, available at ‎https://www.unv.org/planofaction/ dialogues ‎
‎148 The Times of India, “Fake news: WhatsApp, DEF host training for community leaders in Jaipur”, 19 November ‎‎2018. ‎
‎149 The “security by design” approach is described in the 2015 White Paper of Amazon Web Services, available at ‎https://www.logicworks.com/wp-content/ uploads/2017/01/Intro_to_Security_by_Design.pdf; the EU General ‎Data Protection Regulation (GDPR) contains the “privacy by design” principle; some examples of what it means ‎in practice are available at https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-bus...‎organisations/obligations/what-does-data-protection-design-and-default-mean_en. ‎
‎150 European Commission, Final Report of the High Level Expert Group on Fake News and Online Disinformation, ‎‎12 March 2018, available at https:// ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-‎group-fake-news-and-online-disinformation; Facebook, “Protecting Elections in the EU”, 28 March 2019. ‎
‎151 See for example Joseph Nye, “Nuclear Learning and U.S.-Soviet security regimes”, International Organization, ‎‎41, 3, (1987), p. 371-402; Emmanuel Adler, “The emergence of cooperation: national epistemic communities ‎and the international evolution of the idea of nuclear arms control”, International Organization, 46, (1992), p. ‎‎101-145; Clifton Parker, “Cooperation of U.S., Russian Scientists Helped Avoid Nuclear Catastrophe at Cold ‎War’s End, CISAC Scholar Says”, June 28, 2016, available at https://cisac.fsi.stanford.edu/news/cooperation-us-‎russian-scientists-helped-avoid-nuclear-catastrophe-cold-war%E2%80%99s-end-says-cisac ‎
‎152 World Economic Forum, The Global Risks Report 2019, 15 January 2019, available at ‎https://www.weforum.org/reports/the-global-risks-report-2019
‎153 World Economic Forum Global Risks Perception Survey 2018-2019. ‎
‎154 WIRED, “That Insane, $81m Bangladesh Bank Heist? Here's What We Know”, 17 May 2016, available at ‎https://www.wired.com/2016/05/insane-81m-bangladesh-bank-heist-heres-know/
‎155 CBS, “What can we learn from the ‘most devastating’ cyberattack in history?”, 22 August 2018, available at ‎https://www.cbsnews.com/news/lessons-to-learn-from-devastating-notpetya-...
‎156 Bromium, Inc., “Hyper-Connected Web Of Profit Emerges, As Global Cybercriminal Revenues Hit $1.5 Trillion ‎Annually”, 20 August 2018, available at https://www.bromium.com/press-release/hyper-connected-web-of-profit-‎emerges-as-global-cybercriminal-revenues-hit-1-5-trillion-annually/ ‎
‎157 Business Insider, “Travis Kalanick lasted in his role for 6.5 years — five times longer than the average Uber ‎employee”, 20 August 2017, available at https://www.businessinsider.com/employee-retention-rate-top-tech-‎companies-2017-8 ‎
‎158 Symantec, Internet Security Threat Report, April 2016, available at https://www.nu.nl/files/nutech/Rapport-‎Symantec2016.pdf ‎
‎159 Europol’s Internet Organised Crime Threat Assessment (IOCTA) 2018 has a summary of the evolving threat ‎environment; Japan’s National Institute of Information and Communications Technology (NICT) estimates on ‎the basis of scans of the darknet that 54% of the attacks it detected in 2017 targeted IoT devices: see NICT, “The ‎‎‘NOTICE’ Project to Survey IoT Devices and to Alert Users”, 1 February 2019. ‎
‎160 IOT Analytics, “State of the IoT 2018, Number of IoT devices now at 7B – Market accelerating”, 08 August 2018, ‎available at https://iot-analytics.com/ state-of-the-iot-update-q1-q2-2018-number-of-iot-devices-now-7b/ ‎
‎161 CBS, “Stuxnet: Computer Worm Opens Era of Warfare”, 04 June 2012, available at ‎https://www.cbsnews.com/news/stuxnet-computer-worm-opens-new-era-of-warf...
‎162 CNN, “US announces new set of Russia Sanctions”, 20 December 2018, available at ‎https://edition.cnn.com/2018/12/19/politics/us-treasury-russia/index.html; The New York Times, “Signs of ‎Russian Meddling in Brexit Referendum”, 15 November 2017, available at https://www.nytimes. ‎com/2017/11/15/world/europe/russia-brexit-twitter-facebook.html ‎
‎163 Gail Kent, Stanford Law School Center for Internet and Society, “The Mutual Legal Assistance Problem ‎Explained”, 23 February 2015, available at http:// cyberlaw.stanford.edu/blog/2015/02/mutual-legal-assistance-‎problem-explained ‎
164 Bloomberg, “Huawei Reveals the Real Trade War with China”, 6 December 2018; Associated Press, “German leader Angela Merkel testifies on alleged U.S. surveillance revealed by Snowden”, 16 February 2017, and “Costs of Snowden leak still mounting 5 years later”, 4 June 2018.
‎165 TechRepublic, “Governments and nation states are now officially training for cyberwarfare: An inside look”, 1 ‎September 2016, available at https://www. techrepublic.com/article/governments-and-nation-states-are-now-‎officially-training-for-cyberwarfare-an-inside-look/ ‎
‎166 The Wall Street Journal, “Cyberwar Ignites a New Arms Race”, 11 October 2015; The Wall Street Journal, ‎‎“Cataloging the World’s Cyberforces”, 11 October 2015. ‎
‎167 The Register, “Everything you need to know about the Petya, er, NotPetya nasty trashing PCs worldwide”, 28 ‎June 2017. ‎
‎168 IBM researchers have shown it is possible to conceal known malware in video-conferencing software and ‎trigger it when it sees a specific individual, available at https://securityintelligence.com/deeplocker-how-ai-can-‎power-a-stealthy-new-breed-of-malware/ ‎
‎169 Russia placed information security on the agenda of the UN in 1998. Since then several Groups of ‎Governmental Experts have studied ICT security and three of them have adopted reports by consensus. See ‎https://www.un.org/disarmament/ict-security/ and https://www.diplomatie.gouv.fr/IMG/pdf/ ‎paris_call_cyber_cle443433-1.pdf ‎
‎170 They are composed on the basis of equitable geographical distribution, and each has included the five ‎permanent members of the UN Security Council. ‎
‎171 UN GGE report of 2013 (A/68/98), paragraph 19, available at https://undocs.org/A/68/98; reconfirmed by the UN ‎GGE report of 2015 (A/70/174), ‎
paragraph 24, available at https://undocs.org/A/70/174
‎172 United Nations General Assembly, Group of Governmental Experts on Developments in the Field of Information ‎and Telecommunications in the Context of International Security, report A/70/174, page 13, 22 July 2015, ‎available at http://undocs.org/A/70/174
‎173 Government of France, “Cybersecurity: Paris Call of 12 November 2018 for Trust and Security in Cyberspace”, ‎available at https://www.diplomatie.gouv. fr/en/french-foreign-policy/digital-diplomacy/france-and-cyber-‎security/article/cybersecurity-paris-call-of-12-november-2018-for-trust-and-security-in ‎
‎174 Cybersecurity Tech Accord, available at https://cybertechaccord.org; Siemens, Charter of Trust, available at ‎https://www.siemens.com/press/pool/de/ feature/2018/corporate/2018-02-cybersecurity/charter-of-trust-e.pdf ‎
‎175 The case has been made strongly in recent studies such as Samir Saran (ed.), Our Common Digital Future ‎‎(GCCS and ORF, 2017), available at https:// www.orfonline.org/research/our-common-digital-future-gccs-2017/
‎176 United Nations General Assembly, “Advancing responsible State behaviour in cyberspace in the context of ‎international security”, 18 October 2018, available at https://undocs.org/A/C.1/73/L.37
177 United Nations General Assembly, “Developments in the field of information and telecommunications in the context of international security”, 29 October 2018, available at https://undocs.org/A/C.1/73/L.27/Rev.1
‎178 Oman ITU-Arab Regional Cybersecurity Centre, available at https://www.itu.int/en/ITU-‎D/Cybersecurity/Pages/Global-Partners/oman-itu-arab-regional-cybersecurity-centre.aspx ‎
‎179 CSIRTs Network, available at https://www.enisa.europa.eu/topics/csirts-in-europe/csirts-network
‎180 Cathy Mulligan, “A Call to (Software) Arms”, LinkedIn, 30 March 2019. ‎
181 International Organization for Standardization, ISO/IEC 27034, 2011; SAFECode, Fundamental Practices for Secure Software Development, March 2018, available at https://safecode.org/wp-content/uploads/2018/03/SAFECode_Fundamental_Practices_for_Secure_Software_Development_March_2018.pdf; SAFECode, Managing Security Risks Inherent in the Use of Third-Party Components, 2017, available at https://www.safecode.org/wp-content/uploads/2017/05/SAFECode_TPC_Whitepaper.pdf; SAFECode, Tactical Threat Modeling, 2017, available at https://www.safecode.org/wp-content/uploads/2017/05/SAFECode_TM_Whitepaper.pdf; and Microsoft, Security Development Lifecycle, available at https://www.microsoft.com/en-us/securityengineering/sdl.
‎182 The Global Cybersecurity Capacity Centre at Oxford University has created a repository of existing efforts in ‎partnership with the GFCE: the Cybersecurity Capacity Portal, available at ‎https://www.sbs.ox.ac.uk/cybersecurity-capacity/explore/gfce. The report “Cybersecurity Competence Building ‎Trends” provides examples of public-private partnerships in OECD countries: see Diplo, Cybersecurity ‎Competence Building Trends, 2016. ‎
‎183 Cybersecurity Ventures, “Cybersecurity Jobs Report 2018-2021”, 31 May 2017, available at ‎https://cybersecurityventures.com/jobs/. The Delhi Communiqué on a GFCE Global Agenda for Cyber Capacity ‎Building provides a framework for such efforts: see GFCE, Delhi Communiqué, 2017, available at ‎https://www.thegfce.com/delhi-communique
‎184 OECD, “Unlocking the potential of e-commerce”, OECD Going Digital Policy Note, OECD, Paris, 2019, available ‎at www.oecd.org/going-digital/unlocking-the-potential-of-e-commerce.pdf. Page 2 notes that “SMEs could also ‎benefit from multistakeholder initiatives such as the Electronic World Trade Platform, which aims to foster a ‎more effective policy environment for online trading”. ‎
185 In the areas of cybersecurity and cybercrime, for example, national laws and regional and international conventions create frameworks for digital cooperation in addressing cyber-risks. One example is the Council of Europe Cybercrime Convention, available at https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/185
186 Content policy is one area where there are many examples of “soft law” instruments, such as the “Code of conduct on countering illegal hate speech online” (agreed in 2016 by the European Commission and major internet companies; available at https://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=54300), the “Manila Principles on Internet Intermediaries” (developed in 2015 by the Electronic Frontier Foundation and other civil society groups and endorsed by many entities, available at https://www.manilaprinciples.org), and the “Guidelines for industry on child protection online” (initially developed in 2015 through a consultative process led by the International Telecommunication Union and UNICEF, available at https://www.unicef.org/csr/files/COP_Guidelines_English.pdf).
187 The Internet Governance Forum can be seen as a loosely organised framework for digital cooperation (more details at https://www.intgovforum.org/multilingual/tags/about), while the Internet Corporation for Assigned Names and Numbers (with its multiple advisory committees and supporting organisations) can be seen as a more institutionalised framework (more details at https://www.icann.org/resources/pages/groups-2012-02-06-en).
‎188 The Internet Engineering Task Force, for example, develops technical standards for the internet (more details at ‎https://www.ietf.org/standards/), while the European Commission’s High Level Group on Internet Governance ‎has the role of facilitating coordination among EU member states on internet governance issues (more details ‎at http://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.grou...). ‎
‎189 See Anderson, C., Cyber Security and the Need for International Governance (Southern University Law Center ‎‎24 April 2016). ‎
‎190 Paragraph 72 of the WSIS Agenda lists this and other functions of the IGF. Available at ‎https://www.itu.int/net/wsis/docs2/tunis/off/6rev1.html
‎191 NETmundial, “NETmundial Multistakeholder Statement”, April 2014, available at ‎http://netmundial.br/netmundial-multistakeholder-statement/
‎192 Global Commission on Internet Governance, “One Internet”, June 2016, available at ‎https://www.cigionline.org/publications/one-internet
‎193 World Wide Web Foundation, “Contract for the Web”, available at https://contractfortheweb.org
194 Government of France, “France and Canada Create new Expert International Panel on Artificial Intelligence”, 7 December 2018, available at https://www.gouvernement.fr/en/france-and-canada-create-new-expert-international-panel-on-artificial-intelligence
195 In 2016, at the G20 Summit in Hangzhou, the G20 leaders adopted a “G20 Digital Economy Development and Cooperation Initiative”, available at https://www.mofa.go.jp/files/000185874.pdf. Annual G20 Digital Economy Ministerial Meetings have been held since 2017.
196 UN Secretary-General António Guterres, Address to the Internet Governance Forum 2018, available at http://www.intgovforum.org/multilingual/content/igf-2018-address-to-the-internet-governance-forum-by-un-sg-antónio-guterres
‎197 Many documents and publications released over the past decade underline the need for better inclusion of ‎underrepresented communities in internet governance and digital policy processes. Examples include the ‎report of the Working Group on Improvements to the Internet Governance Forum, 2012, available at ‎https://unctad.org/meetings/en/SessionalDocuments/a67d65_en.pdf, and the NetMundial Multistakeholder ‎Statement, 2014, available at http://netmundial.br/wp-content/uploads/2014/04/NETmundial-Multistakehol...‎Document.pdf. ICANN has also recognised the need for better inclusion of under-represented communities and ‎is working on addressing this through initiatives such as its Fellowship Program (more details at ‎https://www.icann.org/fellowshipprogram). ‎
‎198 According to the updated estimate of the 2014 UNCTAD study, there are more than 680 digital cooperation ‎mechanisms developed and used by governments, businesses, technical and international organisations. See ‎United Nations Commission Mapping of International Internet Public Policy Issues, E/CN.16/2015/CRP.2, 17 ‎April 2015, available at https://unctad.org/meetings/en/SessionalDocuments/ecn162015crp2_en.pdf
‎199 One recent example is the impact of the introduction of the GDPR on ICANN’s policies concerning the collection ‎and publication of domain name registration data. When the GDPR requested that data on EU registrants be ‎made private, ICANN was unprepared to adapt its so-called WHOIS policies to the new EU regulation. A ‎coordination mechanism for interdisciplinary policy approaches could have helped ICANN be better prepared ‎for the GDPR. ‎
200 CSIS, “Economic Impact of Cybercrime – No Slowing Down”, February 2018, available at https://csis-prod.s3.amazonaws.com/s3fs-public/publication/economic-impact-cybercrime.pdf
‎201 Cybersecurity Ventures, 2017 Cybercrime Report, 2017. ‎
‎202 Digital Full Potential, “Artificial Intelligence Market Size Projected to Be $60 Billion by 2025”. ‎
‎203 MarketWatch, “The Global Artificial Intelligence (AI) Market by Technology and Industry Vertical - A $169.4 Billion ‎Opportunity by 2025 - ResearchAndMarkets.com”, 24 August 2018. ‎
‎204 WSIS, available at https://www.itu.int/net/wsis/; UNESCO ROAM Principles, available at ‎https://en.unesco.org/internetuniversality/indicators; NETmundial, available at https://netmundial.org/
205 The IGF Plus proposal builds on previous policy and academic discussions on strengthening the Internet Governance Forum, including: Report of the Working Group on Improvements to the Internet Governance Forum, 2012, available at https://unctad.org/meetings/en/SessionalDocuments/a67d65_en.pdf; Milton Mueller and Ben Wagner, “Finding a Formula for Brazil: Representation and Legitimacy in Internet Governance,” Internet Policy Observatory, February 2014, available at https://global.asc.upenn.edu/app/uploads/2014/09/Finding-a-Formula-for-Brazil-Representation-and-Legitimacy-in-Internet-Governance.pdf; IGF Retreat Proceedings: Advancing the 10-Year Mandate of the Internet Governance Forum, July 2016, New York, available at https://www.intgovforum.org/multilingual/content/igf-retreat-documents; Wolfgang Kleinwächter, "The Start of a New Beginning: The Internet Governance Forum on Its Road to 2025", CircleID, 3 April 2016, available at http://www.circleid.com/posts/20160403_start_of_a_new_beginning_the_internet_governance_forum/; Raúl Echeberría, "Let’s Reform the IGF to Ensure Its Healthy Future", Internet Society blog, 17 March 2018, available at https://www.internetsociety.org/blog/2018/03/lets-reform-igf-ensure-heal...; the WSIS Tunis Agenda for the Information Society, Tunis, United Nations, available at https://www.itu.int/net/wsis/docs2/tunis/off/6rev1.html. In addition to the IGF, the other outcomes of the WSIS process are action line follow-ups (WSIS Forum), system-wide follow-up (UN CSTD), and enhanced cooperation.
‎206 As of 31 May 2019 there were 82 national, 17 regional and 16 youth Internet Governance Forums. ‎
‎207 This approach was developed by the World Bank and 4IRC. In Singapore, the Technology Office of the Prime ‎Minister developed mechanisms that enable continuity, dialogue, feedback loops, and agility in decision-‎making, particularly in relation to experimentation or piloting of new technologies. ‎
‎208 On the applicability of the concept of global public good to the internet please refer to ‎https://www.diplomacy.edu/calendar/internet-global-public-resource
209 Malta proposed that the UN consider the internet as a common heritage of mankind. See Statement by Dr. Alex Sceberras Trigona, Special Envoy of the Prime Minister of Malta, World Summit on Information Society Review Process, New York, 15 November 2015, available at https://www.academia.edu/19974250/Protecting_the_Internet_as_Common_Heri...
210 The data commons idea has emerged over the past year at the ITU’s AI for Good Summit and the World Government Summit. See https://news.itu.int/roadmap-zero-to-ai-and-data-commons/ and http://the-levant.com/uae-world-leader-ai-global-data-commons-roundtable
‎211 Members of the International Chamber of Commerce pay an annual membership fee, set either by ICC national ‎committees (where they exist) or by the ICC itself (for direct members). More details at ‎https://iccwbo.org/become-a-member/joining-icc-direct-member/
‎212 United Nations Secretary-General’s Task Force on Digital Financing of the SDGs, available at ‎https://digitalfinancingtaskforce.org/
213 One of the first regulatory sandboxes was launched in 2015 in the UK; at the beginning of 2018, there were more than 20 jurisdictions actively implementing or exploring the concept. See Briefing by UN Secretary-General’s Special Advocate for Inclusive Finance, available at https://www.unsgsa.org/files/1915/3141/8033/Sandbox.pdf
‎214 We understand ‘inclusion’ to be more than simple participation of a few ‘missing actors’ in digital events. Meaningful ‎representation requires bottom-up capacity development, preparatory discussions and inter-ministerial coordination at the ‎national level.‎

Dynamic Coalitions

Glossary of Platform Law and Policy Terms

Developed by the IGF Coalition on Platform Responsibility

Status: DRAFT (updated after the 2020 IGF session of the Coalition)

 

 

This document is a DRAFT for comments, prepared by the “Glossary Working Group” of the IGF Coalition on Platform Responsibility, a multistakeholder group under the auspices of the UN Internet Governance Forum. The elaboration process is documented at this link. A pdf version of the Glossary can be downloaded here.

The members of the working group are (in alphabetical order): Luca Belli, Vittorio Bertola, Yasmin Curzi de Mendonça, Giovanni De Gregorio, Rossana Ducato, Luã Fergus Oliveira da Cruz, Catalina Goanta, Tamara Gojkovic, Terri Harel, Cynthia Khoo, Stefan Kulk, Paddy Leerssen, Laila Neves Lorenzon, Chris Marsden, Enguerrand Marique, Michael Oghia, Milica Pesic, Courtney Radsch, Roxana Radu, Konstantinos Stylianou, Rolf H. Weber, Chris Wiersma, Richard Wingfield, Monika Zalnieriute and Nicolo Zingales.


Guidelines on how to comment on the DRAFT Glossary document using this platform

 

Please comment by 31 January 2021. Please leave your comments stating the number and title of the section and the number of the paragraph to which they refer.

Example 1.

If you wish to comment on the first line of the “Introduction” section, your comment should read as follows:

Comment on Introduction Section, paragraph 1

The first line should consider bla bla bla. I suggest reformulating as follows: bla bla bla

 

Example 2.

If you wish to comment on the last line of section 37 “Governance”, your comment should read as follows:

Comment on section 37 “Governance”, paragraph 5

The last line does not include the perspective of bla bla bla. Please see this article analyzing bla bla bla

 

IMPORTANT: Please feel free to download a pdf version of the entire DRAFT Glossary on the webpage of the IGF Coalition on Platform Responsibility, on the IGF website, and share your comments using the mailing list of the Coalition or send them via email to luca.belli[at]fgv.br and nicolo.zingales[at]fgv

Introduction

Why a Glossary of Key Terms on Platform Law and Policy?

Luca Belli and Nicolo Zingales

 

  1. At the 2019 Internet Governance Forum (IGF), during the customary stocktaking meeting of the Coalition on Platform Responsibility1 (hereinafter “the Coalition”) held after the annual session, the main suggestion emerging from participants as a next step in the Coalition’s work was the elaboration of a Glossary of Platform Law and Policy Terms, so as to provide a common language for academics, regulators and policy-makers when discussing issues of platform responsibility.

 

  2. The Coalition relies on the spontaneous contributions of its members; the Glossary initiative was therefore launched by issuing a request for suggested terms, in order to shape the Glossary structure based on the Coalition’s collective intuition of which terms may be most useful. Stakeholders agreed that the Glossary would be a “living document” that could be updated over time, while aiming at bringing together contributions from a heterogeneous range of disciplines and vocabularies. It was also agreed that the definitional efforts should recognize as much as possible the existence of competing/alternative views on the topic. For this reason, contributors were encouraged to conceive of definitions as a springboard for learning more about those views, through links and references to external sources. Links to external sources will be added in the version of the Glossary that will be uploaded to the Coalition’s website, after having received and incorporated any comments arising in the discussions at the IGF 2020.

 

  3. The following action plan was shared for feedback, and subsequently implemented between May and October 2020:

 

  1. reception of expressions of interest for the development of the Glossary and participation in the Coalition session
  2. consolidation of the proposed terms and circulation of a draft list of terms to be used to compose the Glossary
  3. reception of feedback on the draft list and suggestion of further terms
  4. creation of a multistakeholder working group dedicated to the elaboration of the glossary (the Glossary Working Group) including all the individuals who expressed interest in the initiative2
  5. elaboration of draft entries describing the proposed terms
  6. consolidation of the draft entries into a first draft version of the glossary and request for comment on the first draft
  7. consolidation of the updated version into a consolidated draft to be circulated at the IGF 2020 for further feedback from the IGF community
  8. discussion of the draft at the 2020 session of the Coalition, during the IGF, and elaboration of a strategic approach aimed at maximising the impact of the Glossary.
  4. While this may not be the first attempt to create a glossary of platform-related terms3, the above illustrates the uniqueness of the open and transparent bottom-up process that was followed to achieve these results, encapsulating at its core the IGF’s principles of multistakeholder collaboration. We hope that this provides a basis for much-needed mutual understanding and enables more meaningful and inclusive discussion among academics, policymakers, journalists, and other stakeholders with a keen interest in platform governance. To be continued!

 

About the IGF Coalition on Platform Responsibility

  1. The following paragraphs provide a background picture of the origins of the Platform Responsibility debate at the IGF and its progression to the current state.

 

  2. To start, it should be acknowledged that a core achievement of the Coalition, well beyond the IGF’s community of stakeholders, is to have coined and promoted the concept of “Platform Responsibility.”4 This concept aims, on the one hand, to highlight the impact that private ordering regimes designed and implemented by platforms have on individuals’ capability to enjoy their fundamental rights and, on the other hand, to interrogate the moral, social and human rights responsibilities5 that platforms bear when setting up such regimes. Indeed, the initial goal of this Coalition was to stimulate debate and participatory analysis on the meaning of platform providers’ responsible behaviour. From the early steps, it was clear to participants that the starting point should be an analysis of the application to digital platforms of the UN Guiding Principles on Business and Human Rights6, in particular their responsibility to respect Human Rights and to grant effective grievance mechanisms.7 To lay the foundations of such work, the participants in the inception meeting of the Coalition, held in 2014 at the IGF in Istanbul, suggested the development of a set of recommendations on core dimensions of platform responsibility.8

 

  3. The resulting Recommendations on Terms of Service and Human Rights9 (hereinafter “the Recommendations”), presented at the 2015 IGF, demonstrated that the cross-disciplinary effort facilitated by the Coalition could lead to concrete outcomes, providing a sound response to all those arguing that the IGF is a mere talking shop, unable to achieve tangible outcomes. The Recommendations provide concrete evidence that the IGF can elaborate solid outputs, in line with the IGF mandate, which prescribes that the Forum shall “find solutions to the issues arising from the use and misuse of the Internet” as well as “identify emerging issues […] and, where appropriate, make recommendations.”10

 

  4. Indeed, the Recommendations served as an inspiration for (and were annexed to) both the study on Terms of Service and Human Rights11, co-sponsored by the Council of Europe and FGV Law School, and the 2017 outcome of the Coalition, a volume entitled ‘Platform regulations: how platforms are regulated and how they regulate us’, featuring research by a wide range of stakeholders.12 It also bears noting that the “platform responsibility” approach and a considerable number of elements of the Recommendations can be found in the Council of Europe Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries.13 Fostering this kind of multi-stakeholder and cross-institutional discussion is a core component of the vision behind the creation of the Coalition: to critically analyse challenging questions and collaboratively develop potential solutions that, if deemed suitable and efficient, can inspire policymaking exercises.

 

  5. The Recommendations and the 2017 volume on Platform Regulations stressed the need to advance the Coalition’s work further with two different yet complementary initiatives. First, the elaboration of concrete suggestions on how to implement the right to due process with regard to the remedies provided by online platforms’ dispute resolution mechanisms. This goal was achieved by organising a year-long participatory process, leading to the Best Practices on Platforms’ Implementation of the Right to an Effective Remedy14. Second, the various debates, cooperative processes and research developed by the Coalition members highlighted the need for a deeper analysis, going beyond the notions of platform responsibility and platform regulation to examine the very values underlying the operation of digital platforms.

 

  6. Before reaching this latest phase of the Coalition’s work, we discussed the nuances of the “Platform Values” debate in a special issue of the Computer Law and Security Review dedicated to “Platform Values: Conflicting Rights, Artificial Intelligence and Tax Avoidance.”15 This volume aimed at promoting a discussion on the multiform notion of platform value(s); the term “value” was construed broadly to embrace a range of social, ethical and juridical values underpinning digital platforms, as well as the economic value that is generated and extracted within platform ecosystems.

 

  7. Digital platforms play a central role in the digital ecosystem, shaping the structure of online as well as offline activities. They have acquired a predominant role in digital policy circles and amongst Internet scholars, due to the enormous impact that their choices, activities and self-regulatory initiatives can have on the lives of several billion individuals. This impact is poised to increase over the coming years16, and for this reason we hope that this Glossary, as well as the previous work of the Coalition, will positively contribute to a better understanding of the operation of digital platforms and, consequently, to more accurate and convergent policy initiatives.

 

     Notes

1 For further information on the Coalition, please visit the dedicated section of the Internet Governance Forum website https://www.intgovforum.org/multilingual/content/dynamic-coalition-on-platform-responsibility-dcpr

2 The members of the working group are (in alphabetical order): Luca Belli, Vittorio Bertola, Yasmin Curzi de Mendonça, Giovanni De Gregorio, Rossana Ducato, Luã Fergus Oliveira da Cruz, Catalina Goanta, Tamara Gojkovic, Terri Harel, Cynthia Khoo, Stefan Kulk, Paddy Leerssen, Laila Neves Lorenzon, Chris Marsden, Enguerrand Marique, Michael Oghia, Milica Pesic, Courtney Radsch, Roxana Radu, Konstantinos Stylianou, Rolf H. Weber, Chris Wiersma, Monika Zalnieriute and Nicolo Zingales.

3 See for instance http://cyberlaw.stanford.edu/blog/2018/01/glossary-internet-content-blocking-tools

4 See L. Belli, P. De Filippi, N. Zingales, A New Dynamic Coalition on Platform Responsibility within the IGF. Medialaws, 11 June 2014, http://www.medialaws.eu/a-new-dynamic-coalition-on-platform-responsibility-within-the-igf/

5 See Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises, John Ruggie: Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, UN Human Rights Council Document A/HRC/17/31, 21 March 2011. www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf

6 Idem.

7 See L. Belli, P. De Filippi, N. Zingales (eds.), Recommendations on terms of service & human rights, Outcome Document n°1, 2015, tinyurl.com/toshr2015

8 See Zingales and Belli (2014).

9 See Belli, De Filippi and Zingales (2015).

10 See Tunis Agenda (2005) para. 72.k and 72.g.

11 See Venturini et al. (2016)

12 See Belli and Zingales (2017)

13 See http://bit.ly/CoEinternetintermediaries

14 The Best Practices can be also found on the IGF website https://www.intgovforum.org/multilingual/index.php?q=filedepot_download/4905/1550

15 A non-edited version of the Special Issue can be accessed at https://www.intgovforum.org/multilingual/index.php?q=filedepot_download/4905/1900

16 See, for instance, Crémer J., de Montjoye Y.A. and Schweitzer H. (2019). Competition Policy for the digital era. European Commission Directorate-General for Competition; Eyler-Driscoll S., Schechter A. and Patiño C. (2019). Digital Platforms and Concentration. ProMarket and Chicago Booth Stigler Center; BRICS Competition Law and Policy Centre (2019). Digital Era Competition Law: A BRICS Perspective. https://cyberbrics.info/digital-era-competition-brics-report/


References

 

Luca Belli and Nicolo Zingales (eds.), Platform regulations: how platforms are regulated and how they regulate us (FGV Direito Rio 2017) <https://bibliotecadigital.fgv.br/dspace/handle/10438/19402>

 

Luca Belli, Primavera De Filippi and Nicolo Zingales, A New Dynamic Coalition on Platform Responsibility within the IGF (Medialaws, 11 June 2014) <http://www.medialaws.eu/a-new-dynamic-coalition-on-platform-responsibility-within-the-igf/>

 

Luca Belli, Primavera De Filippi and Nicolo Zingales (eds.), Recommendations on terms of service & human rights. Outcome Document n°1 (Internet Governance Forum 2014) <https://www.intgovforum.org/cms/documents/igf-meeting/igf-2016/830-dcpr-2015-output-document-1/file>

 

BRICS Competition Law and Policy Centre, Digital Era Competition Law: A BRICS Perspective (2019) <https://cyberbrics.info/digital-era-competition-brics-report/>

 

Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer, Competition Policy for the digital era (European Commission Directorate-General for Competition 2019).

 

John Ruggie, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises (UN Human Rights Council Document A/HRC/17/31, 21 March 2011) <www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf>

 

Tunis Agenda for the Information Society (18 November 2005). WSIS-05/TUNIS/DOC/6(Rev. 1)-E. <https://www.itu.int/net/wsis/docs2/tunis/off/6rev1.html>

 

Jamila Venturini et al., Terms of service and human rights: an analysis of online platform contracts (Revan, in collaboration with the Council of Europe and FGV Direito Rio 2016) <https://bibliotecadigital.fgv.br/dspace/handle/10438/19402>

 

Nicolo Zingales and Luca Belli, Dynamic Coalition on Platform Responsibility: Report of the “inception” meeting at the 2014 IGF (Internet Governance Forum 2014) <https://www.intgovforum.org/multilingual/index.php?q=filedepot_download/4905/631>

List of proposed terms:

 

    1. (Digital) Access
    2. Accountability
    3. Aggregator
    4. Amplification
    5. Appeal
    6. Application Program Interface
    7. Arbitration
    8. Automated decision making
    9. Bot
    10. Child Pornography / Child Sexual Abuse Material
    11. Common Carrier
    12. Content creator/influencer
    13. Content
    14. Content/web monetization
    15. Coordinated flagging
    16. Coordinated Inauthentic Behavior
    17. Co-regulation
    18. Content Curation
    19. Dark patterns
    20. Data Portability
    21. Defamation
    22. Deindexing
    23. Demonetization
    24. Deplatforming
    25. Device Neutrality
    26. Digital Rights
    27. Disinformation
    28. Dispute resolution (online)
    29. Due diligence
    30. Duty of care
    31. End to End encryption
    32. Federated Service
    33. Fact-checking
    34. Flagging
    35. Filter
    36. (Digital) Gatekeeper
    37. Governance
    38. Harassment
    39. Harm (online harm)
    40. Hash/Hash database
    41. Hate Speech
    42. Human exploitation
    43. Human Review
    44. Incitement to violence
    45. Inclusive Journalism
    46. Information Fiduciary
    47. Infrastructure
    48. Intermediary Liability
    49. Internet Safety/Security
    50. Interoperability
    51. Liability
    52. Marketplace
    53. Media Pluralism
    54. Microtargeting
    55. Moderation
    56. Must-carry
    57. Non-discrimination
    58. Notice
    59. Notice-and-notice
    60. Notice-and-takedown
    61. Notice-and-staydown
    62. Nudging
    63. Online Advertising
    64. Open Identity
    65. Open Standard
    66. Optimization
    67. Platform
    68. Platform Governance
    69. Platform Neutrality
    70. Pornography
    71. Prioritization
    72. Proactive measures
    73. Recommender systems
    74. Red Flag Knowledge
    75. Regulation
    76. Remedy
    77. Repeat Infringer
    78. Responsibility
    79. Revenge pornography / Non-consensual intimate images
    80. Right to explanation
    81. Right to be forgotten
    82. Safe harbor
    83. Self injury
    84. Self-preferencing
    85. Self-regulation
    86. Shadowban / Shadow banning
    87. Sharing economy
    88. Social network
    89.  Spam
    90. Technological protection measures
    91. Terms of service
    92. Terrorist content
    93. Transparency
    94. User
    95. User-generated content
    96. Utility
    97. Violence
    98. User warning (of graphic content, etc.)
    99. Wilful Blindness

 

 

Guidelines provided to Authors

Definitions could vary in length and focus (and the more nuance the better), but a typical definition would be between 100 and 1000 words per concept. In addition to collating the definitions in the printed booklet we’ll publish for the IGF, we would aim this to be a “living document” where we can bring together contributions from a range of disciplines and vocabularies. So, by all means, do not hesitate to put your name if you are inclined to provide a definition for a particular term in the list, even if someone else has already indicated their availability to do so. We will then help coordinate to ensure that the inputs from multiple contributors are complementary, rather than duplicating efforts.

 

Timeline:

 

    • 1) by 15 April: we receive your expressions of interest for the development of the Glossary and participation in the session, so that we can submit a request for an IGF 2020 Session of the Coalition (the deadline is the 22nd of April)
    • 2) by 15 May: we consolidate your proposed terms and share a first draft of the list of terms that will compose the Glossary, to which you can add further terms or express your feedback until the 30th of May.
    • 3) by 30 June: those who have expressed interest regarding the elaboration of specific terms will add their proposed definitions (ideally between 100 and 1000 words) to the shared document
    • 4) by 30 July: we will allow all coalition members to share further updates and/or add alternative definitions to the list of proposed definitions
    • 5) by 30 August we will have a finalised version of the Glossary that we can proofread and send to our designer to print a booklet that will be circulated at the IGF with acknowledgment of contributors.


One intrinsic limitation of this exercise is that it will be extremely hard to make a comprehensive and detailed glossary, especially considering the limited time and resources that people are likely to be able to put into this commitment in these hectic times. We can try to assuage those concerns by:

 

    • (1) making sure the list of terms features some of the most pressing and debated issues in platform law and policy. It is probably a good choice not to get into the definition of specific types of illegal content, as that would be heavily dependent on the jurisdiction in question, but it may be remiss not to attempt a definition of the policy issues that are used to claim/justify some form of platform responsibility beyond what the applicable law requires. So, in this sense, defining disinformation, trolling and “coordinated inauthentic behaviour” (a concept that Facebook uses to prevent coordinated forms of “misuse” of its service in violation of community standards) would appear more useful than defining terms like hate speech, defamation, violent extremism, terrorism, bullying, revenge pornography.
    • (2) in the definitions, recognizing as much as possible the existence of competing/alternative views on the topic, possibly also providing links to sources where those views are more fully explained. While our goal is definitely to be schematic and clear, as you suggest, I’d like to think that this group can add value by referencing some of the cases/official documents of public authorities that address those concepts in more detail. In other words, we do not have to entirely sacrifice nuance for the sake of clarity and simplicity, if we can add links/references. Perhaps one challenge there is how to maintain as much as possible an impartial and objective perspective while also recognizing the diversity of approaches, but the two are not irreconcilable if we maintain a sufficiently abstract and high-level approach. After all, it is not uncommon for glossaries to have more than one entry for each concept.

 

1. (Digital) Access

1. In the words of Ribble (2011, 16), Digital Access is defined as "full electronic participation in society". In this context, the digital divide means inequality of access.

2. For Carpentier (2007), digital access includes:

  • (1) access to media technology
  • (2) access to skills to use the technology,
  • (3) access to content that is considered relevant,
  • and (4) access to the content producing organisation.

3. To the above, it seems essential to add access to applications and services of one’s choice as well as to the technology (both hardware and software) enabling the development of applications and services (Belli & De Filippi, 2015; A4AI, 2020). These factors should be taken up together for describing ‘meaningful connectivity’ (id.).

4. Access barriers related to basic connectivity (‘having an internet connection’) as well as (basic and advanced) digital skills are being progressively monitored nowadays, for example in the European Commission’s Digital Economy and Society Index (DESI). The latter element is especially emphasised in the DESI as a major factor for the improvement of human capital related to digital access. "Digital skills" include not only ‘basic usage skills’ but also ‘advanced skills and development’ that can be used for the development of new digital goods and services (DESI 2020, p. 51). The lack of relevant skills also limits awareness of potential benefits from digitisation. Recent COVID-19 confinement measures made these two challenges more visible (e.g. children not being able to connect to remote schooling due to the lack of connection to digital infrastructure, the hardware or digital skills). Besides academia, digital literacy has been debated in policy as well (see for example Council of Europe 2016, and the forthcoming new (updated) “Digital Education Action Plan” of the European Commission, which is announced for 2020).

5. One of the media technology access barriers is the unequal availability of (broadband) internet. By mid-2020, 59.6% of the world population used the internet (see Internet World Stats), with North America and Europe leading (over 85%). The access barriers to infrastructure are especially visible in rural and remote areas and in developing countries without high-speed networks. For example, according to the State of Broadband report (2019), poor connectivity was a major barrier to using the internet for 43.5% of respondents. Some scholars estimate that the bandwidth gap is still evolving and will be difficult to close (Hilbert, 2016).

6. While all the elements within the definitions introduced above are closely associated with the role of online platforms, the third and fourth elements, relating to "content", are especially emphasised in the platform governance communities. As a key term, digital access thus refers to issues of facilitating access to content and to content-producing organisations. This area of law and policy is linked to public policy opportunities for developing valuable protections of online experiences. For example, in UNESCO's "Internet Freedom" series, platforms are intermediaries that could foster freedoms online, based on several principles holding that "the Internet should be human rights-based, open, accessible for all and governed by multi-stakeholder participation" (R.O.A.M principles; UNESCO 2014). In this vein, the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (Edison Lanza 2016, 15) has repeated these principles and linked them to the Rights to Freedom of Thought and Expression, the Right to Access to Information and the Right to Privacy and Protection of Personal Data. Along the same lines, one can also mention the increasing number of court rulings holding that government shutdowns of the internet are illegal, such as the judgment of the Community Court of Justice of the Economic Community of West African States in Amnesty International Togo VS The Togolese Republic (ECOWAS Court of Justice, June 25, 2020; see also Pollicino, 2020). Similarly, the Council of Europe’s "Human Rights Guidelines for Internet Service Providers" (2008), which sought to raise awareness among those entities providing access, stressed "the importance of users’ safety and their right to privacy and freedom of expression and, in this connection, the importance for the providers to be aware of the human rights impact that their activities can have".

7. In July 2020, in a correspondence between Romano Prodi and David Sassoli, the President of the European Parliament supported the idea of establishing digital access as a human right (Sassoli 2020; Prodi 2020). Together with previous statements and the opinions of several courts and other institutions throughout the world (see Pollicino 2020), this points to a growing demand for establishing access in a way that grants internet users one or more separate fundamental (digital) rights.

 

References

 

Internet World Stats. Available at: https://www.internetworldstats.com/stats.htm

 

Alliance for Affordable Internet, ‘Meaningful Connectivity: A New Target To Raise the Bar for Internet Access’ (A4AI 2020) <https://a4ai.org/meaningful-connectivity/>

 

Luca Belli & Primavera De Filippi, Net Neutrality Compendium: Human Rights, Free Competition and the Future of the Internet (1st, Springer, 2015) <https://www.ohchr.org/Documents/Issues/Expression/Telecommunications/LucaBelli.pdf>

 

Nico Carpentier, ‘Participation, Access and Interaction: Changing Perspectives’ [2007] New Media Worlds: Challenges for Convergence 214, 228. Edited by V. Nightingale & T. Dwyer. OUP Australia & New Zealand.

 

Council of Europe, ‘Council of Europe Strategy for the Rights of the Child’ (Council of Europe 2016) <https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=090000168066cff8>

 

Council of Europe, in co-operation with the European Internet Services Providers Association (EuroISPA), ‘Human rights guidelines for Internet service providers’, Directorate General of Human Rights and Legal Affairs (Council of Europe 2008) <https://rm.coe.int/16805a39d5>

 

Digital Economy and Society Index (DESI) (DESI 2020) <https://ec.europa.eu/digital-single-market/en/desi>

 

European Commission, ‘Digital education action plan (updated)’ (European Commission 2020) <https://ec.europa.eu/education/education-in-the-eu/digital-education-action-plan_en>

 

Martin Hilbert, ‘The bad news is that the digital access divide is here to stay: Domestically installed bandwidths among 172 countries for 1986–2014’ [2016] Telecommunications Policy 567, 581. <http://doi.org/10.1016/j.telpol.2016.01.006> (also available at <https://escholarship.org/content/qt2jp4w5rq/qt2jp4w5rq.pdf>).

 

Edison Lanza/Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights, ‘Standards for a Free, Open and Inclusive Internet’ [2017] OEA/Ser.L/V/II, CIDH/RELE/INF.17/17. Available at <http://www.oas.org/en/iachr/expression/docs/publications/INTERNET_2016_ENG.pdf>.

 

Oreste Pollicino, ‘The Rights to Internet Access: Quid Iuris?’ [2020] in The Cambridge Handbook of New Human Rights: Recognition, Novelty, Rhetoric, edited by Andreas von Arnauld, Kerstin von der Decken and Mart Susi, 263-275. CUP. Available at SSRN: <https://ssrn.com/abstract=3397340>.

 

Romano Prodi, ‘La connessione sia un diritto umano (letter to Sassoli)’ [16 July 2020] La Repubblica. Available at <https://rep.repubblica.it/pwa/generale/2020/07/16/news/prodi_scrive_a_sassoli_la_connessione_sia_un_diritto_umano_-262146830/>.

Mike Ribble, ‘Digital Citizenship in Schools, Second edition’ [2011] Washington, DC: International Society for Technology in Education. (See also No 1 of the “Nine Elements of Digital Citizenship”, at <https://www.digitalcitizenship.net/nine-elements.html>, and: Council of Europe 2019 Digital Citizenship Education Handbook, available at <https://rm.coe.int/16809382f9>.)

David Sassoli, ‘Il diritto al web sia una battaglia europea (reply to Romano Prodi)’ [19 July 2020] La Repubblica. Available at <https://www.repubblica.it/politica/2020/07/19/news/sassoli_il_diritto_al_web_sia_una_battaglia_europea_-262314742/>.

UN Broadband Commission, ‘The State of Broadband 2019’ [2019]. Available at <https://broadbandcommission.org/publications/Pages/SOB-2019.aspx>.

UNESCO, ‘Fostering freedom online: the role of Internet intermediaries’ [2014] UNESCO Series on Internet Freedom. Available at <http://www.unesco.org/new/en/communication-and-information/resources/publications-and-communication-materials/publications/full-list/fostering-freedom-online-the-role-of-internet-intermediaries/>.

 

Case law:

Amnesty International Togo VS The Togolese Republic, Application No. ECW/CCJ/APP/61/18IN, ECOWAS Court of Justice (June 25, 2020). Available at <https://africanlii.org/node/7135>.

Websites: https://www.internetworldstats.com/stats.htm

 

2. Accountability

  1. Accountability refers, in the simplest conception of the term, to the condition of subjecting oneself to external oversight and control. This is a general concept with widely different implications depending on the context in which it is used (e.g. governmental organizations, private companies, and even computer algorithms). However, as can be evinced from this general definition, it typically includes a transparency component and a component of submission to external control, both of which can manifest in different forms depending on the content and the target of accountability. For instance, the control component of the accountability of an institution can be exercised through budgetary control and judicial review. Thus, when talking about accountability, it is crucial to understand who is accountable, for what, and to whom.
  2. Traditionally, accountability has been structured as a bidimensional principle. In well-functioning institutions, the executive is subjected to both vertical and horizontal accountability (O’Donnell, 1999). The former is imposed upon organisations by individuals (vertical) through their collective monitoring and actions, while the latter is imposed by public bodies that are specifically tasked to control and, when necessary, restrain the undue actions of a given institution. Transparency and freedom to access information are essential for both dimensions. Vertical accountability is maximised when individuals can act via civic organizations (“civil society”) or the media. Horizontal accountability is maximised when the public entities created to check potential abuses and inefficiencies are well resourced and can act independently.

 

  3. The primary consequence of accountability is responsiveness, meaning that the entity in question effectively responds to the demands of transparency and external control. This typically presupposes the existence of tools and procedures that allow the exercise of control and oversight. The form of accountability can be prescribed with some level of specificity by law, as is the case, for instance, in data protection law. The EU General Data Protection Regulation (GDPR) and the Brazilian General Data Protection Law, for instance, explicitly contain a principle of accountability. This principle requires that organisations put in place appropriate technical and organisational measures to be able to demonstrate compliance, such as: adequate documentation on what personal data are processed, how, for what purpose and for how long; documented processes and procedures aiming at tackling data protection issues at an early stage when building information systems or responding to a data breach; and the appointment of a Data Protection Officer.
  4. As far as platforms are concerned, the issue of accountability has acquired a particular connotation in the context of injunctions. This is because, under virtually every regime of intermediary liability, injunctions can be imposed against intermediaries regardless of the existence of a primary or even secondary duty to undertake a certain action. For instance, in Europe such injunctions are permitted by articles 12(3), 13(3) and 14(3) of the E-Commerce Directive, based on the rationale that every intermediary that gets entangled with harm can be required to provide assistance. In Germany, this rationale led to the doctrine of Störerhaftung, i.e. ‘disturber’ or ‘interferer’ liability, allowing injunctions against persons who causally contribute to an infringement in violation of a reasonable duty to review. Despite terminological differences, the essence remains the same: although there may be no liability for the damages caused by the infringing activity, failure to execute the injunctions will lead to a different kind of liability, potentially of a criminal nature, for contempt of court.

 

References

 

Martin Husovec, Injunctions Against Intermediaries in the European Union: Accountable But Not Liable? (1st, Cambridge University Press, 2017)

 

Guillermo O’Donnell, ‘Horizontal Accountability in New Democracies’ in Andreas Schedler, Larry Diamond and Marc F. Plattner (eds), The Self-Restraining State: Power and Accountability in New Democracies (1st, Lynne Rienner Publishers, London 1999).

3. Aggregator

Definition proposed after the second round of comments. To be included by November 2020.

4. Amplification

  1. In the context of platform governance, ‘amplification’ refers to actions (typically by platforms) that magnify the visibility or reach of certain content, viewpoints, or speakers. The phrase is most commonly used to refer to platform content recommendations and other algorithmic rankings, in cases where particular content is seen to be given an unfair or otherwise unwarranted ranking. Other instances of ‘amplification’ can include the use of bots or astroturfing to disseminate content, and the use of platform advertising services to similar ends. The concept has gained traction due to the growing attention to non-illegal forms of online harm, such as disinformation, where the speech as such is unlikely to be prohibited and removed, but its rapid spread is nonetheless seen as a source of concern and a target for regulation.
  2. Amplification is not a legal term, but it is increasingly used in associated policy debates. Perhaps most notably, the European Commission’s Communication “Tackling Online Disinformation: A European Approach” discusses amplification at length (European Commission, 2018). It identifies three different kinds of amplification: (1) “algorithm-based” amplification, which relates to recommender systems; (2) “advertising-driven” amplification, which relates to platform advertising services; and (3) “technology-enabled” amplification, which refers to the use of bots and fake accounts. In the Commission’s diagnosis of disinformation, the problem is thus not merely that disinformation is created and disseminated, but that it is amplified by various factors to reach a disproportionate audience.
  3. The UN Special Rapporteur on Freedom of Expression, David Kaye, displays a similar understanding of the term in an open letter to Mark Zuckerberg regarding the Facebook Oversight Board, dated 1 May 2019 (Kaye 2019a). He proposes that the Board should have access to information about “factors that may amplify the content at issue (e.g. recommendation algorithms, bot accounts, ad policies)”. In a note to the UN General Assembly, the Special Rapporteur also suggested that tools be developed to combat hate speech inter alia through “de-amplification” (Kaye 2019b).
  4. The recent White House Executive Order on Preventing Online Censorship does not address amplification at the same length, but does allege that online platforms have “amplified China’s propaganda” and offers this as a ground for further regulation, although it does not define amplification or further elaborate on the claim (US Executive Order on Preventing Online Censorship 2020).
  5. A key challenge in identifying “amplification” on online platforms is that it implies a baseline of non-amplified treatment, which may not be available. For recommender systems, it is often alleged that hate speech or disinformation are amplified by algorithms that prioritize attention and engagement, but this begs the question of what an appropriate (i.e. non-amplified) ranking for this content would be instead. For instance, a hateful website may be considered ‘amplified’ if it is ranked as the first result on Google Search, but what if it is the second? The tenth? The 100th? Thus, although claims of ‘amplification’ can seem objective and analytical, they may conceal an ultimately subjective and political assessment about the appropriate configuration of recommender systems, and about media diversity in general.
  6. A narrower conception of ‘amplification’ is possible in which it only singles out direct positive discrimination of content, such as Facebook’s prioritization of trusted news sources and YouTube’s prioritization of coronavirus victims. But this narrower conception does not correspond with current usage, as outlined above, as this tends to also include attention-optimizing systems that benefit hate speech and disinformation indirectly. In this light, amplification should be understood as a broad term that can refer to a wide range of factors in the online media environment which facilitate the spread of certain content, whether as an intentional design feature or as an unintentional by-product.

 

References

 

European Commission, ‘Communication on Tackling Online Disinformation: A European Approach’ (European Commission 2018)

 

David Kaye, ‘Mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ (OHCHR 2019) <https://www.ohchr.org/Documents/Issues/Opinion/Legislation/OL_OTH_01_05_19.pdf>

 

David Kaye, ‘UN General Assembly: Promotion and protection of the right to freedom of opinion and expression’ (OHCHR, 9 Oct 2019) <https://www.ohchr.org/Documents/Issues/Opinion/A_74_486.pdf>

 

The White House, ‘US Executive Order on Preventing Online Censorship’ (White House 2020) <https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/>

 

 

5. Appeal

  1. This entry: (I) provides an understanding of the notion of appeal in legal doctrine; (II) elucidates the basic functions of an appeal mechanism; (III) offers examples of when an appeal mechanism may be needed; iv) and highlights the recommendations put forward by the IGF Coalition on Platform Responsibility with regard to appeal mechanisms.
  1. The concept of appeal
  1. The concept of appeal is grounded on the necessity of correcting error, which may always occur when decisions are taken. In this perspective, appeal mechanisms that allow for error correction are an essential feature of due process and rule of law principles at the core of any well- functioning legal systems. An appeal is therefore a mechanism thanks to which defendants that deem to have been victim of a wrongful judgement may have these concerns addressed and, eventually, corrected. The fundamental goal of any appeal mechanisms is making sure that decisions are taken observing procedural fairness, correcting errors, including arbitrary or irrational applications of existing rules and procedures.
  1. Appeals are crucial for ensuring that justice is done in each case and, for this reason, they are included in modern human rights instruments. Indeed, modern legal systems provide appeal mechanisms for correcting errors as historical evidence demonstrate that errors about examining specific facts or about applying existing rules to frame those facts are expected to occur regularly.
  1. Appellate procedures vary substantially among legal systems and the scope of appellate review is generally limited to claims and defences addressed in the proceeding that is challenged (usually defined as “first-instance proceeding”). (ALI and UNIDROIT, 2006:27)
  1. There are three general standards of review by appellate bodies: questions about the application of substantial rules (so-called “questions of law”), questions regarding how the facts have been analysed ((so-called “questions of fact”), and matters of procedure or discretion.
  1. In general terms, appeals can:
  1. constitute a repeat exercise of the lower body's decisionmaking, both in terms of fact-finding and the application of the relevant law/rules to those facts;
  1. accepting the factual determinations of the lower court (particularly when the lower court heard evidence but the appellate court didn't) but reviewing whether the relevant laws/rules were applied correctly (or even, in common law countries at least, whether the relevant laws/rules need re- interpreting);
  1. reviewing the procedural aspects of the lower court/body (e.g. was the process flawed in some way by admitting irrelevant evidence, or ignoring relevant evidence).
  1. The so-called “de novo” review describes a review of a lower body (usually a court) by a superior body (i.e. an appellate court). De novo review is used in questions of how specific normative provisions were applied or interpreted. In this type of review, the appellate court can repeat in its entirety the fact-finding exercise of the lower body or court. De novo judicial review can reverse the decision that is challenged, and, for this reason, this type of review is qualified using the Latin expression “de novo” which means “over again” or “anew.” As the appellate body re-examines the issue from the beginning, this type of review is defined as “nondeferential review” because the decision is taken anew without deferring to the lower body’s decision.
  1. Review standards can focus on both questions of fact and questions of law. The former are based on a more deferential approach. This means that the appellate body will limit its analysis to the facts – for instance, re-evaluating whether the assessment of the evidence was “clearly erroneous” – and subsequently refer the case back to the body that took the contested decision for a new application of the rules in light of the factual scrutiny conducted by the appellate body.
  1. Lastly, the most deferential standard of review is the so-called “arbitrary and capricious” standard, under which the appellate body invalidates a previous decision only where it was made on unreasonable grounds or without any proper consideration of the circumstances.
  1. The function of appeal mechanisms
  1. The possibility of an error occurring is an unavoidable feature of any decision-making system. Appeals allow possible errors to be corrected, thus serving several functions. As tellingly explained by Marshall (2011), the primary function of the modern right of appeal is to protect against miscarriages of justice; indeed, appeals aim at mitigating the risks and consequences of wrongful decisions (or, even worse, wrongful convictions in criminal law). Wrongful decisions are always possible and arise either when a defendant – or anyone bringing a claim in civil proceedings – is wrongly judged, or when a defendant does not receive a fair trial.
  1. The core function of an appeal mechanism is therefore to provide redress from a wrongful judgement, which may result from a very wide spectrum of possible causes, including failure to assess evidence accurately; being misled or deceived by irrelevant or fabricated evidence; or failure to consider exculpatory evidence.
  1. A second core function of appeal mechanisms is to remedy the lack of a fair trial. Such a situation may occur when decisions are taken by applying existing (procedural) rules in an anomalous way, or when clear and foreseeable procedural rules are missing.
  1. Importantly, the existence of appeals mechanisms per se provides legitimacy to a system, while stimulating trust in such system. When efficient appeal mechanisms exist, all individuals and entities subject to a specific juridical system will know that rules are applied in a fair, transparent, and consistent fashion.
  1. When an appeal mechanism is needed
  1. An appeal mechanism is needed to challenge erroneous decisions on procedural or substantive grounds. Such situations may occur in the following circumstances:
    1. When the body that took the decision had no jurisdiction (or, more generally speaking, no competence) to take such a decision, or when its powers have been used improperly.
    2. When the procedure was applied unfairly.
    3. When the decision is not reasonable.
    4. When the decision is not proportional.
    5. When the decision is not compatible with Human Rights obligations.
    6. When the decision contradicts the legitimate expectations of an individual or entity subject to a given set of rules.
    7. When the decision does not provide sufficient reasons justifying why it was taken.
  1. Recommendations put forward by the IGF Coalition on Platform Responsibility

15. All platforms should offer their users the possibility to appeal any decisions concerning them. Appeal systems shall respect the core minimum of the right to be heard, including: (1) a form of process, which is made available to users in clear, explicit and easily comprehensible terms, mandating respect for the guarantees of independence and impartiality; (2) the right to receive notice of the allegations and the basic evidence in support, and to comment upon them; and (3) the right to a reasoned decision. A minimal illustrative sketch of these guarantees follows below.
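
The following sketch is purely hypothetical: it models, in a few lines of Python, how the three guarantees above (notice of the allegations, the opportunity to be heard, a reasoned and independent decision) could be recorded as distinct, mandatory steps in an appeal. The names used (Appeal, review_appeal) are assumptions for illustration only and do not describe any platform's actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, minimal model of a platform appeal reflecting the three
# guarantees above: notice of the allegations, the right to be heard,
# and a reasoned decision taken by an independent reviewer.

@dataclass
class Appeal:
    user_id: str
    contested_decision: str                 # e.g. "post removed under policy X"
    notice_of_allegations: str              # reasons and basic evidence disclosed to the user
    original_decision_maker: str
    user_comments: Optional[str] = None     # the user's response (right to be heard)
    reviewer_id: Optional[str] = None
    reasoned_decision: Optional[str] = None

def review_appeal(appeal: Appeal, reviewer_id: str, outcome: str, reasons: str) -> Appeal:
    """Close an appeal only if the minimum guarantees have been respected."""
    if reviewer_id == appeal.original_decision_maker:
        raise ValueError("Reviewer must be independent of the original decision-maker.")
    if not appeal.user_comments:
        raise ValueError("The user must have had the opportunity to comment.")
    if not reasons:
        raise ValueError("The outcome must be accompanied by reasons.")
    appeal.reviewer_id = reviewer_id
    appeal.reasoned_decision = f"{outcome}: {reasons}"
    return appeal
```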

 

References

 

ALI (The American Law Institute) and UNIDROIT, UNIDROIT Principles of Transnational Civil Procedure (1st edn, 2006)

 

Peter D. Marshall, 'A Comparative Analysis of the Right to Appeal' [Fall 2011] Duke Journal of Comparative & International Law, Volume 22, Number 1.

 

 

6. Application Program Interface

  1. An Application Program Interface (also known as API, or “middleware”) is any well-defined interface which identifies the service that one component, module or application provides to other software elements (de Souza et al, 2004). APIs can be grouped into two types: those which are more intensively computational, based on an execution engine, and those which are declarative, based on presentation engines. In Oracle v Google (N.D. Cal. June 20, 2012, ECF No. 1211), on the scope of copyright protection for interface specifications, the US District Court distinguished three categories of information provided by these interfaces, namely (a) declaration or method header lines; (b) the method and class names; and (c) the grouping pattern of methods (a minimal sketch illustrating these elements follows this entry). In essence, these interfaces contain the information and instructions which enable third-party applications to run atop existing computer programs without a loss of functionality.
  2. In the context of platform regulation, APIs are being advanced as a possible solution to give more control to individuals, both over how their personal data is collected and used (see My Data Declaration, 2017) and over how the platform moderates their feed (Keller, 2019). There are technical challenges, however, in operationalizing this model, including the technical standards on which such APIs should be based, the legal safeguards needed to preserve the protection of individuals’ personal data (in particular when the API allows the transfer of data involving third parties), and the limits on regulation which may impose an API obligation on private entities (including the interference with the freedom to conduct business and freedom of expression).
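
As a rough illustration of the three categories of interface information distinguished in Oracle v Google, the sketch below shows declaration (method header) lines, method and class names, and their grouping. The class and method names (PaymentsAPI, charge, refund) are hypothetical and do not correspond to any real service; the point is only that third-party code can be written against the interface rather than against any particular implementation.

```python
from abc import ABC, abstractmethod

# A hypothetical interface specification. What third-party developers rely on
# are the declaration lines, the method and class names, and their grouping,
# not the underlying implementation, which each provider supplies separately.

class PaymentsAPI(ABC):
    """Grouping of payment-related operations exposed to third-party applications."""

    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> str:
        """Declaration line: charge an account; returns a transaction id."""

    @abstractmethod
    def refund(self, transaction_id: str) -> bool:
        """Declaration line: reverse a previous charge."""

# A third-party application can be written against the interface alone and
# will run atop any concrete implementation without loss of functionality.
def pay_invoice(api: PaymentsAPI, account_id: str) -> str:
    return api.charge(account_id, amount_cents=1999)
```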

 

References

 

C. R. B. de Souza et al, 'Sometimes You Need to See Through Walls: A Field Study of Application Programming Interfaces' [2004] Computer Supported Cooperative Work, ACM Press 63, 67

 

Ashwin Van Rooijen, 'The Software Interface between Copyright and Competition Law: A Legal Analysis of Interoperability in Computer Programs' [2010] Kluwer Law, Alphen aan den Rijn

 

Daphne Keller, 'Platform Content Regulation – Some Models And Their Problems' (The Center for Internet and Society, 6 May 2019) <http://cyberlaw.stanford.edu/blog/2019/05/platform-content-regulation-–-some-models-and-their-problems>

 

MyData, ‘Declaration of Principles’ [2017] <https://mydata.org/declaration>

7. Arbitration

  1. The World Intellectual Property Organization (WIPO) defines arbitration as a “procedure in which a dispute is submitted, by agreement of the parties, to one or more arbitrators who make a binding decision on the dispute. In choosing arbitration, the parties opt for a private dispute resolution procedure instead of going to court.” A simpler definition, given by the Cambridge Dictionary, describes arbitration as a way of solving an argument between people by helping them to agree on and commit to a common, acceptable solution. It is important to highlight that both sides in the dispute have to agree to pursue an arbitral solution, that is, to have the matter resolved through the intervention of an arbitrator.
  1. Arbitration is a type of dispute/conflict resolution method in which the parties, having previously agreed to arbitrate, can settle the dispute outside the courtroom. Because of its informality and privacy, it is usually much faster than court proceedings, which is also why it is often chosen over litigation. The dispute is resolved by an impartial third party, known as the arbitrator, whose decision is legally binding for all parties.
  1. As for online arbitration, its premise is the same as that of the offline process. The difference is that the resolution of conflicts can be carried out entirely online, using video calls for hearings and online systems for uploading multiple types of evidence (documents, photos, videos, etc.), making it possible to resolve disputes without having to appear in person and thereby minimizing the costs of the process.
  1. The most prominent example of the use of online arbitration is in disputes over Internet domain names (Mania, 2015). A parallel can be drawn between domain names on the Internet and the system of business identifiers protected by intellectual property rights, which existed long before the arrival of the Internet; conflicts over both can be resolved through arbitration. The most common reason for disputes over Internet domain names is the practice known as “cybersquatting” (WIPO, n.d.), in which a person registers a domain name corresponding to a famous person’s name or a business trademark and offers it for sale at a price far beyond the cost of registration. Therefore, “any institution or person who considers that a registered domain name conflicts with the legitimate rights or interests of that institution or person may file a complaint with any of the competent domain name dispute resolution providers” (CCPIT, n.d.). If the parties consent to solving their domain name dispute through an arbitration institution, the details of the dispute need to be submitted to the chosen institution.
  1. After the Complaint is made and the entity against whom the Complaint was made files a Response, the dispute resolution service provider chosen by the parties “shall implement a system whereby panels of experts are responsible for the resolution of disputes” (CCPIT, n.d.). Each panel is formed by one to three panelists, all of whom must be experts listed in an online register from which complainants and respondents can select. The panelists must be impartial and independent throughout the arbitration procedure and must have no material interest in common with the parties on either side. After the conclusion of the process, the panel of experts’ decision must be notified to all parties and implemented, meaning that the domain name in question may be cancelled or transferred (WIPO, n.d.).
  1. The WIPO - which is mandated to promote the protection of intellectual property worldwide - conducted extensive consultations with members of the Internet community around the world, after which it prepared and published a report containing recommendations dealing with domain name issues. Based on the report's recommendations, ICANN adopted the Uniform Domain Name Dispute Resolution Policy (UDRP), and under the standard dispute clause of the Terms and Conditions for the registration of a gTLD domain name, the registrant must submit to the UDRP proceedings. Also, the Protocol on Cybersecurity in International Arbitration (“Cybersecurity Protocol”) provides guidance on reasonable information security measures that the parties and arbitrators can take, particularly in light of increasingly virtual hearings and paperless document transfer.
  1. An example of an alternative online arbitration practice for the resolution of domain name conflicts is the Sistema de Administração de Conflitos de Internet (SACI-Adm), created by the Brazilian Internet Steering Committee (CGI.br). This method is specifically designed for the resolution of disputes between the holder of a domain name under .br (the Brazilian ccTLD) and any third party that disputes the legitimacy of the domain name registration made by the holder. The scope of the SACI-Adm procedure is limited to requests for cancellation and transfer of domain names; any claim related to obtaining damages therefore cannot be dealt with in this context. The SACI-Adm procedure is similar to the UDRP; the main difference is that, under the Brazilian rules, the .br Information and Coordination Center (NIC.br) does not allow the transfer of the disputed domain name from the beginning of the SACI-Adm procedure until its termination (Angelini, 2012).
  1. There is also the “baseball-style” arbitration process, which was used in the dispute between Google and news publishers in Australia. In this form, an arbitration attorney is selected to decide the main issues in dispute between two or more parties. “The arbitration attorney, contrary to popular belief, is not usually free to decide anything he or she pleases, instead, each party participating in the arbitration must submit a proposal to him or her in advance”. This kind of arbitration is called “baseball-style” because of the limited discretion the arbitration attorney may exercise over these proposals. It is also sometimes called “final-offer” or “either/or” arbitration because of the limits imposed upon the arbitration attorney.
  1. It is important to note the difference between Online Dispute Resolution (ODR) and online arbitration. ODR is a wide field that encompasses many types of dispute resolution practices using online methods and tools to exploit the convenience and efficiency of internet communications and to make the resolution process easier and faster. The term covers everything from electronic filing of submissions and transfer of documents to online hearings. ODR can cover interpersonal disputes, “including consumer to consumer disputes (C2C) or marital separation, to court disputes and interstate conflicts” (Petrauskas & Kybartiene, 2011). Online arbitration, on the other hand, is an essential part of ODR by which two or more parties can resolve online any disagreement arising from their contractual relationship; it is mostly used for domain name conflict resolution and business-to-business (B2B) cross-border e-commerce disputes, and to some degree for the resolution of traditional cross-border commercial disputes (Amro, 2019).

 

References

 

World Intellectual Property Organization (WIPO). ‘What is Arbitration?’ Available at: <https://www.wipo.int/amc/en/arbitration/what-is-arb.html>

 

Cambridge Dictionary. ‘Arbitration Meaning’. Available at: <https://dictionary.cambridge.org/pt/dicionario/ingles/arbitration>

 

Mania, Karolina. ‘Online dispute resolution: The future of justice’. [2015] International Comparative Jurisprudence, 1(1), 76-86. Available at: <https://www.sciencedirect.com/science/article/pii/S2351667415000074>

 

WIPO. ‘Frequently Asked Questions: Internet Domain Names’. Available at: <https://www.wipo.int/amc/en/center/faq/domains.html#5>

 

CCPIT. ‘Resolution of Internet Domain Name Disputes’. Available at: <https://www.ccpit-patent.com.cn/node/1105>

 

WIPO. ‘WIPO Guide to the Uniform Domain Name Dispute Resolution Policy (UDRP)’. Available at: <https://www.wipo.int/amc/en/domains/guide/#b1>

 

WIPO. ‘WIPO Internet Domain Name Process.’ [1999]. Available at: <https://www.wipo.int/amc/en/processes/process1/report/index.html>

 

ICCA Report. ‘ICCA-NYC Bar-CPR Protocol on Cybersecurity in International Arbitration’. [2020] Available at: <https://www.arbitration-icca.org/media/14/76788479244143/icca-nyc_bar-cpr_cybersecurity_protocol_for_international_arbitration_-_print_version.pdf>

 

Angelini, Kelli. ‘SACI: o Sistema Administrativo de Conflitos de Internet implementado para domínios no “.br”.’ [2012] Available at: <https://www.politics.org.br/edicoes/saci-o-sistema-administrativo-de-conflitos-de-internet-implementado-para-dom%C3%ADnios-no-%E2%80%9Cbr%E2%80%9D>

 

Petrauskas, Feliksas & Kybartiene, Eglè. ‘Online dispute resolution in consumer disputes’. Jurisprudencija, 18(3). [2011] Available at: <https://www.mruni.eu/upload/iblock/0c4/8_Petrauskas_Kybartienuy-1.pdf>

 

Amro, Ihab. ‘Online Arbitration in Theory and in Practice: A Comparative Study in Common Law and Civil Law Countries’. [2019] Available at: <http://arbitrationblog.kluwerarbitration.com/2019/04/11/online-arbitration-in-theory-and-in-practice-a-comparative-study-in-common-law-and-civil-law-countries/?doing_wp_cron=1592416143.9425508975982666015625>

 

Arbitration.com. ‘What is Baseball Arbitration?’. [9 June 2011] Available at: <http://www.arbitration.com/articles/what-is-baseball-arbitration.aspx>

 

8. Automated decision making

  1. Automated decision-making (ADM) generally refers to a process or a system where the human decision is supported by or handed over to an algorithm.
  1. ADM is increasingly used in several sectors of our society and by different actors (both private and public). For instance, ADM can be embedded in standalone software that produces a medical recommendation for a patient, an online behavioural advertising system that shows certain content to a specific target, a credit scoring system that determines whether one can get a loan, an algorithm that selects the most interesting CVs for a position, a recognition filter that scans and bans user-generated content from a platform, an automated ticketing system that fines drivers exceeding speed limits, an algorithm that assesses recidivism risk, smart contracts (Finck, 2019), etc.
  1. Given the spread of ADM and the potential impact on individuals, the Council of Europe has issued the Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems, promoting a lawful and human-centric design of ADM.
  1. Beyond that, ADM is not generally regulated as such, but its deployment can be captured by a wide spectrum of laws.
  1. In Europe, for instance, when an ADM system processes personal data, the General Data Protection Regulation (GDPR) applies. The GDPR does not expressly define the concept of ADM, but it provides additional rules where a solely automated process produces legal effects concerning the data subject (i.e. the person to whom the data refer) or similarly affects him or her (Article 22 GDPR). In the literature, several doubts have been raised about the exact meaning of “decision”, “solely automated”, “legal effects” and “similarly affects” (Mendoza and Bygrave, 2017; Bygrave, 2020). The Article 29 Working Party (WP29, now the European Data Protection Board) has provided an interpretation of these concepts, suggesting that: 1) the ADM can be fed with any kind of data (whether provided directly by the individual, observed or otherwise inferred); 2) a “solely” automated decision means there is no human involvement at any stage of the processing; 3) “legal effects” means that the decision must affect the legal rights and freedoms of individuals (e.g. a system that automatically refuses admission to a country); 4) “similarly affects” is intended to include other possible negative effects which may seriously impact the behaviour of individuals, e.g. potentially leading to discrimination (for example, a system denying someone an employment opportunity) (WP29, 2018). However, the provision does not seem to entail an evaluation of the negative impact on groups (Veale and Edwards, 2018), although several authors have argued in favour of expanding the data protection framework from the individual level to the collective one (Taylor, Floridi, and van der Sloot, 2016; Mantelero, 2018; Brkan, 2019).
  1. As a general rule, the GDPR prohibits this kind of processing, unless: 1) it is necessary for entering into, or performing, a contract between the data subject and the controller (i.e. the entity leading the processing); 2) it is authorized by law, which lays down appropriate safeguards for the rights and legitimate interests of the data subject; 3) the data subject explicitly consents to it. When exceptions 1 and 3 apply, the data controller can carry out the ADM, but it has to implement suitable measures to protect individuals’ rights and freedoms (Article 22(3) GDPR). Among them, the GDPR lists three main rights that have to be guaranteed to individuals: 1) to obtain human intervention on the part of the controller; 2) to express his or her point of view; 3) to contest the decision (a minimal illustrative sketch of these safeguards is provided at the end of this entry). The implementation of such measures has been embraced differently by Member States (Malgieri, 2019).
  1. Finally, where the ADM involves the processing of special categories of data (defined in Article 9 GDPR), such as health data or data revealing ethnic origin or political opinions, the GDPR provides specific rules. In particular, ADM cannot be performed unless there is the explicit consent of the data subject or the processing is necessary for a substantial public interest. In both cases, the controller must adopt suitable measures to protect data subjects’ rights, freedoms, and legitimate interests.
  1. Another important legal issue concerning ADM in the framework of the GDPR relates to the transparency of the system, i.e. the possibility to understand the logic involved in the algorithm performing a decision pursuant to Article 22. There has been a lively debate in the literature about the existence of a so-called “right to explanation” in the GDPR (Goodman, Flaxman, 2016; Malgieri, Comandé, 2017; contra Wachter, Mittelstadt, Floridi, 2017). Whether or not it can be found directly or indirectly in the black letter of the GDPR, there is a convergence towards the elaboration of solutions that can promote the transparency of ADM and “XAI”, i.e. explainable AI (Wachter, Mittelstadt, Russell, 2017; Edwards, Veale, 2017; Kaminski, Malgieri, 2019; Brkan, Bonnet, 2020). The importance of explainability has been stressed by the High-Level Expert Group on Artificial Intelligence among the requirements for trustworthy AI (High-Level Expert Group on AI, 2019).
  1. A similar – although not identical – provision on ADM is included in Article 11 of Directive (EU) 2016/680 (Law Enforcement Directive). Being a Directive, it is not self-executing; Member States therefore have to implement it in their national law. The Law Enforcement Directive explicitly forbids the use of ADM in criminal matters where the decision produces an adverse legal effect concerning the data subject or significantly affects him or her. Such a prohibition can be overcome only by Union or Member State law to which the controller is subject and which provides appropriate safeguards for the rights and freedoms of the data subject, at least the right to obtain human intervention on the part of the controller.
  1. Data protection law is probably the most comprehensive framework tackling the phenomenon of ADM. However, ADM is also regulated by other branches of law.
  1. For instance, when ADM is likely to produce discriminatory results, the protection granted by anti-discrimination law kicks in. Both direct and indirect discrimination are prohibited by the European Convention on Human Rights and EU law. For example, an ADM system leading to the exclusion of a member from an online platform would be deemed illegal if based on race or on proxies for it, such as African-American names (direct discrimination, see Edelman, Luca, Svirsky, 2017). Similarly, it would be considered indirect discrimination if a supposedly neutral measure is likely to have a significantly more negative impact on a protected category compared to others in a similar situation. For instance, anchoring platform drivers’ earnings to the distance and time they travel appears to be a neutral decision. However, studies show that women drive at a lower average speed and are therefore likely to take fewer rides; as a consequence, their pay is substantially lower than that of their male colleagues (Cook et al., 2018).
  1. Nevertheless, the anti-discrimination legal framework suffers important limitations, since it covers only certain sectors and certain protected grounds. This situation is particularly critical in the digital discrimination brought about by ADM, as the latter often transcends the traditional protected attributes (Borgesius, 2018; Xenidis, Senden, 2020). In online behavioural advertising, for example, people might be discriminated against because inferential analytics draws correlations among apparently neutral data, thus exposing individuals to price discrimination or exclusion from lucrative job ads without them even being aware of how, and on the basis of which criteria, they have been profiled (Wachter, 2020).
  1. In the field of consumer protection, ADM has recently been taken into account in relation to the transparency of online marketplaces. The “Omnibus Directive”, amending the Consumer Rights Directive (Directive 2011/83/EC), established that when a price is personalized on the basis of ADM, the consumer must be informed about it. However, the provision requires disclosure of the “whether” but not the “how” of the ADM (Jabłonowska, 2019). It must be said, though, that if the system processes personal data and the price personalization falls within the scope of Article 22 GDPR, the explainability requirements and corresponding remedies (Article 22(3) GDPR) will apply. Such a transparency requirement, in any case, does not extend to ‘dynamic’ pricing which depends on real-time market demand. By contrast, in the case of rankings, the Omnibus Directive requires consumers to be informed about the main parameters, and their relative weighting, behind the “relative prominence given to products, as presented, organised or communicated by the trader”. The concept of ranking is constructed in a technologically neutral way, so a ranking might well be produced by an ADM system.
  1. Another sector regulating ADM is that of medical devices. When stand-alone software can be used for medical purposes, i.e. to provide information supporting diagnostic or therapeutic decisions or to monitor vital physiological parameters, it will have to comply with Regulation (EU) 2017/745, which establishes the steps for placing a medical device for human use on the market.
  1. ADM is also addressed in the field of content recognition technologies. For instance, the new Copyright in the Digital Single Market Directive (Directive (EU) 2019/790) provides a new form of direct liability for online platforms (more specifically, online content-sharing service providers, such as YouTube) for their users’ uploads. To avoid this form of liability, platforms have two options. The golden road traced by the Directive is to negotiate a licence with the rightholder in order to make available the content uploaded by users. As an alternative, online platforms have to demonstrate, among other things, that they proactively ensure the unavailability of the (infringing) content. This latter option has attracted criticism from copyright scholars and civil society representatives, as a provision likely to lead to the establishment of upload filters and to limit freedoms on the Internet (Cerf et al., 2018; Kretschmer et al., 2019; Reda, 2019). The Directive establishes some guarantees: users’ rights – such as quotation, criticism, pastiche – shall be preserved (Article 17(7)), the proactive measures cannot lead to any general monitoring obligation (Article 17(8)), and the platform must provide adequate complaint and redress mechanisms allowing users to contest decisions on access denial and removal of content (Article 17(9)). However, several doubts remain as to the impact of these provisions on fundamental rights such as freedom of expression and data protection (Quintais et al., 2019; Quintais, 2020; Romero Moreno, 2020; Samuelson, 2020; Schmon, 2020).
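
As a purely illustrative sketch of the Article 22(3) GDPR safeguards discussed above, the snippet below pairs a toy, solely automated decision with the three rights the provision requires: human intervention, the possibility to express one's point of view, and the possibility to contest the decision. The threshold, field names and function names are assumptions invented for the example, not a statement of how any controller actually implements the GDPR.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a toy, solely automated credit decision plus the
# Article 22(3) safeguards (human intervention, the right to express a point
# of view, the right to contest). All names and values are invented.

APPROVAL_THRESHOLD = 650

@dataclass
class Decision:
    applicant_id: str
    score: int
    approved: bool
    explanation: str                               # supports transparency/explainability
    applicant_comments: List[str] = field(default_factory=list)
    escalated_to_human: bool = False

def automated_decision(applicant_id: str, score: int) -> Decision:
    """A 'solely automated' decision: no human involvement at any stage."""
    approved = score >= APPROVAL_THRESHOLD
    return Decision(
        applicant_id=applicant_id,
        score=score,
        approved=approved,
        explanation=f"score {score} compared against threshold {APPROVAL_THRESHOLD}",
    )

def contest(decision: Decision, point_of_view: str) -> Decision:
    """Record the data subject's point of view and route the case to a human reviewer."""
    decision.applicant_comments.append(point_of_view)
    decision.escalated_to_human = True   # human intervention on the part of the controller
    return decision
```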

 

References

 

Brkan, Maja, and Grégory Bonnet. 'Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas.' [2020] European Journal of Risk Regulation 18, 50.

 

Brkan, Maja. ‘Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond.’ [2019] International journal of law and information technology 91, 121.

 

Bygrave, Lee. “Article 22”, in Kuner, Christopher, Lee A. Bygrave, and Christopher Docksey. Commentary on the EU General Data Protection Regulation (GDPR). A Commentary. Oxford University Press, 2020, 522.

 

Cerf, Vint et al. ‘Joint Letter to the European Parliament’ [2018], Electronic Frontier Foundation

<https://www.eff.org/files/2018/06/13/article13letter.pdf.>

 

Cook, Cody, et al. ‘The gender earnings gap in the gig economy: Evidence from over a million rideshare drivers.’ [2018] No. w24732. National Bureau of Economic Research.

 

Edelman, Benjamin, Michael Luca, and Dan Svirsky. ‘Racial discrimination in the sharing economy: Evidence from a field experiment.’ [2017] American Economic Journal: Applied Economics 1, 22.

 

Edwards, Lilian, and Michael Veale. ‘Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for.’ [2017] Duke L. & Tech. Rev. 18.

 

Finck, Michele. ‘Smart Contracts as Automated Decision-Making under Article 22 GDPR.’ [2019] International Data Privacy Law 1, 17.

 

Goodman, Bryce, and Seth Flaxman. ‘EU regulations on algorithmic decision-making and a “right to explanation”.’ [2016] ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY. <http://arxiv.org/abs/1606.08813v1>

 

High-Level Expert Group on AI, Policy and Investment Recommendations for Trustworthy AI, [2019] <https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence>

 

Kaminski, M. E., & Malgieri, G. ‘Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations.’ [2019] Available at SSRN 3456224.

 

Kretschmer, Martin et al. ‘The Copyright Directive: Articles 11 and 13 must go. Statement from European Academics in advance of the Plenary Vote on 26 March 2019’ [2019], <https://www.create.ac.uk/wp-content/uploads/2019/03/Academic_Statement_Copyright_Directive_24_03_2019.pdf?x42614>

 

Malgieri, Gianclaudio, and Giovanni Comandé. ‘Why a right to legibility of automated decision-making exists in the general data protection regulation.’ [2017] International Data Privacy Law.

 

Malgieri, Gianclaudio. ‘Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations.’ [2019] Computer Law & Security Review 35.5 (2019): 105327.

 

Mantelero, Alessandro. ‘AI and Big Data: A blueprint for a human rights, social and ethical impact assessment.’ [2018] Computer Law & Security Review 754, 772.

 

Mendoza, Isak, and Lee A. Bygrave. ‘The right not to be subject to automated decisions based on profiling.’ [2017] EU Internet Law. Springer, Cham 77, 98.

 

Quintais, João, et al. ‘Safeguarding user freedoms in implementing article 17 of the copyright in the digital single market directive: recommendations from European Academics.’ [2019] Available at SSRN 3484968.

 

Quintais, João. ‘The New Copyright in the Digital Single Market Directive: A Critical Look.’ [2020] European Intellectual Property Review.

 

Reda, Julia. ‘EU copyright reform: Our fight was not in vain’ [2019], <https://juliareda.eu/2019/04/not-in-vain/>

 

Romero Moreno, Felipe. ‘‘Upload filters’ and human rights: implementing Article 17 of the Directive on Copyright in the Digital Single Market.’ [2020] International Review of Law, Computers & Technology 1, 30.

 

Samuelson, Pamela. ‘Pushing Back on Stricter Copyright ISP Liability Rules.’ [2020] Michigan Technology Law Review, Forthcoming.

 

Schmon, Christoph. ‘Copyright Filters Are On a Collision Course With EU Data Privacy Rules’ [2020] <https://www.eff.org/deeplinks/2020/02/upload-filters-are-odds-gdpr>

 

Taylor, Linnet, Luciano Floridi, and Bart Van der Sloot, eds. ‘Group privacy: New challenges of data technologies.’ [2016] Vol. 126. Springer.

 

Veale, Michael, and Lilian Edwards. ‘Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling.’ [2018] Computer Law & Security Review 398, 404.

 

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. ‘Counterfactual explanations without opening the black box: Automated decisions and the GDPR.’ [2017] Harv. JL & Tech. 31 (2017): 841.

 

Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. ‘Why a right to explanation of automated decision-making does not exist in the general data protection regulation.’ [2017] International Data Privacy Law 76, 99.

 

Wachter, Sandra. ‘Affinity profiling and discrimination by association in online behavioural advertising.’ [2020] Berkeley Technology Law Journal 35.2.

 

WP29, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’, WP251rev.01, [3 October 2017 - 6 February 2018], <https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053>

 

Xenidis, Raphaële, and Linda Senden. ‘EU non-discrimination law in the era of artificial intelligence: Mapping the challenges of algorithmic discrimination.’ [2020] in Ulf Bernitz et al (eds), General Principles of EU Law and the EU Digital Order (Kluwer Law International, 2020) 151, 182.

 

Zuiderveen Borgesius, Frederik. ‘Discrimination, artificial intelligence, and algorithmic decision-making.’ [2018] Study for the Council of Europe, <https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73>

9. Bot

  1. The word “bot” is tech slang, an abbreviation of “robot”; in this sense, the two share the same conceptual core: a reprogrammable machine built to perform a variety of tasks (RIA, n.d.). In the specific case of online bots, they are understood as automatic or semi-automatic computer programs that run over the Internet (Franklin & Graesser, 1996; Gorwa & Guilbeault, 2018).
  1. One of a bot's main assets is its ability to perform simple and repetitive tasks faster than a human, and at scale, with some arguing that the most repetitive tasks in human jobs (and some jobs entirely) will be replaced by this growing software automation (Bort, 2014). Bots’ activities online have reached impressive proportions: it is worth noting that 37.9% of total internet traffic in 2018 was generated by bots, with 53.4% of that traffic coming from the United States (Imperva, 2020).
  1. Some experts and companies divide bots into two broad categories: benevolent and malicious bots (Jones, 2015; Cloudflare, n.d.). The friendly ones are subdivided according to the functions they perform: social bots, which simulate human behavior in automated interactions to manage social media accounts; commercial bots, usually used to increase online engagement for companies or, as chatbots, to conduct conversations autonomously instead of employing people to communicate with consumers; web crawler bots, such as Googlebot, which scan content on webpages across the Internet and gather useful information (a minimal sketch of such a crawler is provided at the end of this entry); entertainment bots, which are designed to be appreciated aesthetically (art bots) or as characters to play against (game bots); and helpful or informational bots, which surface useful information and usually push notifications and breaking news stories.
  1. The above-mentioned examples are usually employed to help and optimize human actions and tasks. However, several types of malicious bots exist, such as content-scraping bots, bots that spread spam, or bots that carry out credential stuffing attacks. Malicious bots can be: scraper bots, designed to steal content or huge amounts of data; spam bots, designed to circulate unrequested content around the web automatically in order to drive traffic to the spammer’s website, fill out forms automatically, congest servers or simply cause disturbance; scalper bots, also known as automated purchasing bots, designed to buy up sought-after products and services; and hacker bots, which exploit security vulnerabilities to distribute malware, deceive individuals, or attack websites or entire networks. In this latter case, affected devices are known as “zombies” and infected networks as “botnets” (a combination of “robot” and “network”).
  1. These botnets are programmed to perform malicious tasks such as DDoS attacks, theft of confidential information, click fraud, cyber-sabotage, and cyber-warfare. For instance, in September 2016 a botnet called Mirai was responsible for one of the biggest cyber-attacks in history when it launched a DDoS attack on the servers of Dyn, one of the main DNS providers, resulting in a blackout of various internet services (Antonakakis et al., 2017).
  1. More recently, a type of malicious bot has dominated digital policy debates: impersonator bots. This type of bot mimics human behavior, predominantly in order to manipulate public opinion and exercise social control (Bessi & Ferrara, 2016; Howard et al., 2018). Twitter and other social networks are home to such political bots, and for this reason they have been heavily criticized and have become subject to regulation around the world.
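
By way of illustration, the sketch below shows a minimal "benevolent" crawler-style bot of the kind described above: it identifies itself through its User-Agent header, checks the site's robots.txt before fetching a page, and stops if the site disallows it. The bot name and the target URL are placeholders, not a description of any real crawler.

```python
import urllib.parse
import urllib.request
import urllib.robotparser
from typing import Optional

# Minimal sketch of a well-behaved crawler bot: it declares itself via the
# User-Agent header and respects the site's robots.txt before fetching anything.
# "ExampleGlossaryBot" and the example URL are placeholders for illustration only.

BOT_NAME = "ExampleGlossaryBot/0.1"

def polite_fetch(url: str) -> Optional[str]:
    parsed = urllib.parse.urlparse(url)
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()
    if not robots.can_fetch(BOT_NAME, url):
        return None  # the site disallows this bot; a benevolent bot stops here
    request = urllib.request.Request(url, headers={"User-Agent": BOT_NAME})
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    page = polite_fetch("https://example.com/")
    print("fetched" if page else "disallowed by robots.txt")
```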

 

References

 

Antonakakis, Manos et al. ‘Understanding the Mirai Botnet’ [2017]. Available at: <https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/antonakakis>

 

Bessi, Alessandro & Ferrara, Emilio. ‘Social bots distort the 2016 US Presidential election online discussion’. [2016] First Monday, 21(11-7). Available at: <https://firstmonday.org/ojs/index.php/fm/article/view/7090/5653>

 

Bort, Julie. ‘Bill Gates: People Don't Realize How Many Jobs Will Soon Be Replaced By Software Bots’, [2014] Available at: <https://www.businessinsider.com/bill-gates-bots-are-taking-away-jobs-2014-3>

 

Botnerds. ‘Types of Bots: An Overview’. Available at: <http://botnerds.com/types-of-bots/>

 

Cloudflare. ‘What is a bot?’ Available at: <https://www.cloudflare.com/learning/bots/what-is-a-bot/>

 

Delaney, Kevin J. ‘The robot that takes your job should pay taxes, says Bill Gates’. [2017] Available at: <https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/>

 

Franklin, Stan & Graesser, Art. “Is It an Agent, or Just a Program?: A Taxonomy for Autonomous Agents.” [1996] In Intelligent Agents III: Agent Theories, Architectures, and Languages, 21–35. Lecture Notes in Computer Science. Springer, Berlin, Heidelberg. Available at: <https://www.researchgate.net/profile/Stan_Franklin/publication/221457111_Is_it_an_Agent_or_Just_a_Program_A_Taxonomy_for_Autonomous_Agents/links/0f317530ba440e7979000000/Is-it-an-Agent-or-Just-a-Program-A-Taxonomy-for-Autonomous-Agents.pdf>

 

Gorwa, Robert & Guilbeault, Douglas. ‘Unpacking the social media bot: A typology to guide research and policy.’ [2018] <https://arxiv.org/pdf/1801.06863.pdf>.

 

Howard, Philip N., Woolley, Samuel & Calo, Ryan. ‘Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration’, [2018] Journal of Information Technology & Politics 81, 93 DOI: 10.1080/19331681.2018.1448735

 

Imperva. ‘Bad Bot Report 2020: Bad Bots Strike Back.’ [2020] Available at: <https://www.imperva.com/resources/resource-library/reports/2020-Bad-Bot-Report/>

 

Jones, Steve. ‘How I learned to stop worrying and love the bots’. [2015] Social Media + Society, 1(1), 2056305115580344. Available at: <https://journals.sagepub.com/doi/pdf/10.1177/2056305115580344>

 

Robotic Industries Association (n.d.). ‘Defining The Industrial Robot Industry and All It Entails’ Available at: <https://www.robotics.org/robotics/industrial-robot-industry-and-all-it-entails>

 

 

 

10. Child Pornography / Child Sexual Abuse Material

  1. NB: While the term “child pornography” has been used traditionally, and continues to be used on occasion, it is increasingly understood to be inappropriate since it suggests a degree of complicity or consent on the part of the child. Instead, the term “child sexual abuse material” (CSAM) (or sometimes “child sexual abuse imagery” (CSAI)) is now considered a more appropriate term to describe the phenomenon and is the term used here.
  1. This entry: (i) sets out the agreed international definitions of the term as found in relevant legal instruments and (ii) provides examples of existing regulatory responses to child sexual abuse material.
  1. Agreed international definitions
  1. Child sexual abuse material is prohibited under a number of international legal instruments, two of which provide relatively clear definitions. The first is Article 2(c) of the Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution and child pornography (OP-SC-CRC), which defines the term “child pornography” as “any representation, by whatever means, of a child engaged in real or simulated explicit sexual activities or any representation of the sexual parts of a child for primarily sexual purposes”. This may be considered the minimum core of what constitutes child sexual abuse material.
  1. The second, broader definition is provided by Article 9 of the Budapest Convention, where “child pornography” includes: “pornographic material that visually depicts (a) a minor engaged in sexually explicit conduct; (b) a person appearing to be a minor engaged in sexually explicit conduct; (c) realistic images representing a minor engaged in sexually explicit conduct”. While paragraph (a) broadly overlaps with the definition in OP-SC-CRC, paragraphs (b) and (c) go further by including persons appearing to be minors and realistic images representing minors. Under the Budapest Convention, states are, however, free not to apply those paragraphs, meaning that paragraph (a) is the core part of the definition.
  1. Instruments vary in terms of the age at which a person is considered a “child” or a “minor”. While the OP-SC-CRC does not define “child”, the term is defined in Article 1 of the Convention on the Rights of the Child itself as any “human being below the age of eighteen years unless under the law applicable to the child, majority is attained earlier”. The Budapest Convention is narrower in the discretion it offers, defining “minors” as “all persons under 18 years of age”; while it does allow state parties to set a lower age limit, this cannot be less than 16 years.
  1. Existing regulatory responses
  1. Most states have sought to comply with their obligations under international law to prohibit child sexual abuse material through criminalisation, creating specific criminal offences relating to child sexual abuse material. The International Centre for Missing & Exploited Children has published model legislation (and a global review of existing legislation) which includes, as a minimum definition, “the visual representation or depiction of a child engaged in a (real or simulated) sexual display, act, or performance”, with “child” defined as “anyone under the age of 18” (ICMEC, 2018).
  1. In addition to criminalisation, many governments take action, sometimes through regulation and sometimes informally, to prevent access to child sexual abuse imagery online. In many states, governments have encouraged ISPs, in particular, to use filters to block access to certain websites known to carry or to have carried child sexual abuse imagery. Governments have also encouraged and supported other self-regulatory initiatives, such as the creation of hash databases of known child sexual abuse imagery by companies and non-governmental organisations, which are then shared so that known images can be blocked more easily across many platforms (a simplified sketch of this hash-matching approach follows below). These organisations include the Internet Watch Foundation (in the United Kingdom), the National Center for Missing & Exploited Children (in the USA) and the Canadian Centre for Child Protection (in Canada).
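
The hash-database approach mentioned above can be illustrated in a few lines. The sketch below is deliberately simplified and hypothetical: it uses an ordinary SHA-256 digest, whereas deployed systems typically rely on perceptual hashes (such as PhotoDNA) that also match visually similar images; the hash values and file paths are placeholders.

```python
import hashlib
from typing import Set

# Simplified illustration of hash-list matching. Real deployments typically use
# perceptual hashing (e.g. PhotoDNA) and hash lists distributed by hotlines such
# as the IWF or NCMEC; the SHA-256 approach and the entry below are placeholders.

KNOWN_HASHES: Set[str] = {
    # entries would be distributed by a hotline / clearinghouse
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: str) -> bool:
    """Return True if the uploaded file matches an entry in the shared hash list."""
    return sha256_of_file(path) in KNOWN_HASHES
```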

 

References

 

International Centre for Missing and Exploited Children (ICMEC), ‘Child Sexual Abuse Material: Model Legislation & Global Review’, [2018] 9th Edition. Available at: <https://www.icmec.org/wp-content/uploads/2018/12/CSAM-Model-Law-9th-Ed-FINAL-12-3-18.pdf>

 

11. Common Carrier

  1. Common carriage is defined by the duties imposed on public networks in exchange for their right to use public property as a right of way, and other privileges:
  1. Common carriers and public carriers are under a duty to carry goods lawfully delivered to them for carriage. The duty to carry does not prevent carriers from refusing to transport goods that they do not purport to carry generally. Carriers may indeed restrict the commodities that they will carry. Further, everywhere, carriers may refuse to carry dangerous goods, improperly packed goods, and goods that they are unable to carry on account of size, legal prohibition, or lack of facilities (Longley, 1967; Ridley and Whitehead, 1982; Encyclopædia Britannica, 2009).
  1. This definition offers several reasons not to common carry that can be extended to Internet Service Providers – spam and viruses for instance may be refused. In common law countries such as the United Kingdom and United States, carriers are liable for damage or loss of the goods that are in their possession as carriers, unless they prove that the damage or loss is attributable to certain excepted causes (‘Acts of God, acts of enemies of the Crown, fault of the shipper, inherent vices of the goods, and fraud of the shipper’, perils of the sea and particularly jettison). In the wonderfully descriptive language of the English common law (Longley 1967): ‘Fault of the shipper as an excepted cause is any negligent act or omission that has caused damage or loss— for example, faulty packing. Inherent vice is some default or defect latent in the thing itself, which, by its development, tends to the injury or destruction of the thing carried. Fraud of the shipper is an untrue statement as to the nature or value of the goods. And jettison in maritime transport is an intentional sacrifice of goods to preserve the safety of the ship and cargo.’
  1. That provides several more reasons for loss – one thinks of the loss of undersea cables, or alleged foreign-power Denial of Service (DoS) attacks. It might be stretching the definition to suggest that P2P streams can be ‘jettisoned’ in order to allow other traffic to progress during peak-time congestion.
  1. It is worth stating what common carriage is not. It is not a flat rate for all packets. It is also not necessarily a flat rate for all packets of a certain size. It is, however, a mediaeval non-discrimination bargain between the Sovereign and a transport network or facility, in which an exchange is made: in return for classification as a common carrier, those private actors are granted rights and benefits that an ordinary private carrier would not receive. As Barbara Cherry has written, common carriers are not a solution to a competition problem; they far predate competition law. They prevent discrimination between the same traffic type – if I offer you transport of your High Definition video stream of a certain protocol, then the next customer could demand the same subject to capacity, were the Internet to be subject to common carriage (it is not).
  1. Citizens believe they have ancient rights of way and of service. The United Kingdom Carriers Act of 1830 was the first legislation for carriage of goods, codifying the common law. The Act applied to all common carriers by land (for the ‘more effectual Protection of Mail Contractors, Stage Coach Proprietors, and other Common Carriers’ according to the Carriers Act 1830, Chapter 68, 11 Geo 4 and 1 Will 4), including road and railway carriage, then in its infancy for passengers but well established for coal and other commodities. The United Kingdom Railways Act 1844 also includes provisions for common carriage and ‘Parliamentary trains’ (low-cost trains that stop at all stations, later known as ‘milk trains’ because they ran pre-dawn to avoid inconveniencing more expensive trains at peak hours). Common carriers in mediaeval times included farriers and public houses (every horse to be shoed and every person to be allowed shelter without discrimination between travelers). As per Lane v. Cotton (1701) 1 Ld. Raym. 646, 654 (per C.J. Holt): ‘If a man takes upon him a public employment, he is bound to serve the public as far as the employment extends; and for refusal an action lies, as against a farrier refusing to shoe a horse...Against an innkeeper refusing a guest when he has room...Against a carrier refusing to carry goods when he has convenience, his wagon not being full.’
  1. Common carriage should not be confused with charging tolls for higher speed networks, though the Turnpike Riots of 18th Century England were associated with turning the King’s Highway into a private road, and UK opposition to road charging continues to this day.
  1. Telecoms networks were established as common carriers as they reached maturity, following telegraphs, railways, canals and other networks. Noam (1994) explained the practice:
  1. Common carriage, after all, is of substantial social value. It extends free speech principles to privately-owned carriers. It is an arrangement that promotes interconnection, encourages competition, assists universal service, and reduces transaction costs. Ironically, it is not the failure of common carriage but rather its very success that undermines the institution. By making communications ubiquitous and essential, it spawned new types of carriers and delivery systems… the pressure on common carriers comes from two other directions: private NGNs offered by systems integrators; and broadband services offered by cable television operators. Neither operates as a common carrier, nor is it likely to. Noam (1994, p. 435) explains that: ‘When historically they [infrastructure services] were provided in the past by private firms, English common law courts often imposed some quasi-public obligations, one of which was common carriage. It mandated the provision of service to willing customers, bringing common carriage close to a service obligation to all once it was offered to some.’
  1. He thus forewarned that net neutrality would have to be the argument employed by those arguing for non-discriminatory access, as well as accurately predicting the death of common carriage ten years later. Note under common carriage, discrimination is quite possible, but not between customers, only between identical loads: see National Association of Regulatory Utility Commissioners v. FCC, 525 F.2d 630, 642 (D.C.Cir. 1976).
  1. In the United States, it was finally established that a public telegraph company (and especially the largest) has a duty of non-discrimination towards the public – see Western Union Telegraph Co. v. Call Publishing Co., 181 U.S. 92, 98 (1901). The loss of common carriage is an epoch-breaking move towards deregulation, which means that attempts to ensure universal access to an unfettered Internet will require new regulation.

 

References

 

Carriers Act 1830, Chapter 68, 11 Geo 4 and 1 Will 4, at <http://www.opsi.gov.uk/RevisedStatutes/Acts/ukpga/1830/cukpga_18300068_en_1>

 

Encyclopædia Britannica, ‘Common Carrier’, at <http://www.britannica.com/EBchecked/topic/128177/common-carrier>

 

Longley, Henry N. ‘Common Carriage of Cargo’, [1967] Matthew Bender & Co.: New York.

 

Noam, Eli M. ‘Beyond liberalization II: the impending doom of common carriage’, [1994] Telecommunications Policy 435, 452.

 

Ridley, Jasper and Geoffrey Whitehead. ‘The Law of the Carriage of Goods by Land, Sea and Air’, [1982] 6th ed., Shaw: Crayford, Kent

 

12. Content creator/influencer

 

1. During the past 10 years, peer-to-peer platforms have democratized and decentralized media services. It is currently possible for any individual around the world to create a social media channel or account (e.g. on YouTube, Instagram, or TikTok) and make content for a living. These developments are facilitated by increased opportunities, from a marketing and technological perspective, to monetize online presence (see also the entry for ‘content/web monetization’). Within this framework, a new marketing phenomenon, known as ‘influencer marketing’, has spread online. It consists of a monetization model based on ‘reviews and endorsements of products online, usually communicated through social networks’ (Riefa & Clausen, 2019). Outside marketing circles, the term ‘content creator’ is used to emphasize the fact that social media users take the career path of making media content, especially since controversies surrounding the non-disclosure of advertising, or the low levels of diligence exercised by some social media personalities, have given the term ‘influencer’ a negative connotation (The Guardian, 2019).

2. ‘Content creators’ might therefore be a better term to refer to influencers other than those who engage in influencer marketing as their primary business model. However, it is important to stress that so-called “influencers” are only a subset of content creators, which is a general term that may be utilised to identify any individual user creating content either for professional or for personal purposes.

3. Furthermore, it must be noted that defining influencers is no easy task. From a semantic perspective, the Cambridge Dictionary defines an influencer as ‘a person who is paid by a company to show and describe its products and services on social media, encouraging other people to buy them’ (Cambridge Dictionary, 2020). In a study on social media advertising, the European Commission proposed a similar definition: ‘a person who has a greater than average reach and impact through word of mouth in a relevant marketplace, and influencer marketing relies on promoting and selling products or services through these individuals’ (European Commission, 2018). So far, the concept has been integrated into numerous self-regulatory measures around the world, such as the Dutch Advertising Code for Social Media & Influencer Marketing, where influencer marketing is understood to be a marketing activity involving an advertiser and its distributors, in relation to a (paid) communication about a product or brand for the benefit of the advertiser (Art 2(e) Advertising Code for Social Media & Influencer Marketing, 2019). The exercise of influence is a core component of these views. According to the Word of Mouth Marketing Association, influence is ‘the ability to cause or contribute to another person taking action or changing opinion/behavior’. These definitions are built around three common considerations: 1) the existence of a transaction whereby a person is paid to promote something; 2) the person operates on social media; 3) the person has a sphere of influence over which they exercise commercial persuasion.

4. While these features are a reasonable representation of part of the influencer industry, they lead to an incomplete picture on three grounds. First, such features only characterize influencers stricto sensu, namely those social media users who engage in influencer marketing as a business model (see Figure 1 below).

[IMAGE not supported by this platform. To visualize the image, please check the PDF version of the Glossary at page 47]

5. However, the notion of influencers should be seen from a broader perspective. Influencers come in all sizes and species, ranging from humans to pets, or even accounts of curated content (e.g. meme accounts such as those used by Michael Bloomberg in his 2020 Instagram campaign ads). At the same time, on the basis of the size of their following, there can be mega-influencers (the most renowned creators in a given industry), micro-influencers (rising stars with fewer followers and less popularity than mega-influencers), or nano-influencers (small-scale influencers focused on word-of-mouth in more granular communities). Secondly, influencer marketing draws on a plethora of monetization models to build revenue, such as crowdfunding on Patreon, direct selling of one's own merchandise (‘merch’), or ad revenue through programmes such as AdSense or Instagram TV (see also the entry for ‘content/web monetization’). Thirdly, these strategies do not only concern commercial content but also extend to political speech. Influencers do not only exercise commercial persuasion connected to commercial transactions but also engage in communications of a non-commercial nature (e.g. political communication). Moreover, with the rise of social justice influencers, influence can also be exercised through, for example, the promotion of social messages and calls for action to support civil society organizations through donations, which is different from promoting goods and services, although the activity itself may be based on monetization models similar to those of commercial influencers (e.g. endorsement contracts; De Gregorio & Goanta, 2020).

6. On the basis of these insights, we propose a more all-encompassing definition of an influencer, as the person behind a social media account who creates monetized media content with the goal of exercising commercial or non-commercial persuasion, and who has an impact on a given follower base. From a policy perspective, addressing influencer marketing could affect the right to freedom of expression and, therefore, regulators should take into account the degree of interference of potential regulatory interventions. Within this framework, consumer law can play a critical role in defining what the boundaries of unfair commercial practices are, and to what extent some practices are prone to manipulating consumer behaviour or political ideas for commercial gain.

 

References

 

European Commission, ‘Behavioural Study on Advertising and Marketing Practices in Online Social Media’ [2018]. Final Report, available at <https://ec.europa.eu/info/files/advertising-and-marketing-practices-online-social-media-final-report-2018_en>

 

Riefa, C & Clausen, L. ‘Towards Fairness in Digital Influencers’ Marketing Practices’, [2019] Journal of European Consumer and Market Law 64.

 

'Advertising Code for Social Media & Influencer Marketing' (Reclamecode 2019) < https://www.reclamecode.nl/nrc/reclamecode-social-media-rsm/>

 

Word of Mouth Marketing Association. ‘The WOMMA Guide to Influencer Marketing’, [2017] available at <http://getgeeked.tv/wp-content/uploads/uploads/2018/03/WOMMA-The-WOMMA-Guide-to-Influencer-Marketing-2017.compressed.pdf>.

 

Stokel-Walker, C. ‘Instagram: beware of bad influencers…’, [2019] The Guardian, available at <https://www.theguardian.com/technology/2019/feb/03/instagram-beware-bad-influencers-product-twitter-snapchat-fyre-kendall-jenner-bella-hadid>.

 

Goanta, C. & Wildhaber, I. ‘Controlling Influencer Content Through Contracts: A Qualitative Empirical Study of the Swiss Influencer Market’, [2020] C. Goanta & S. Ranchordás (eds.), The Regulation of Social Media Influencers (Elgar).

 

De Gregorio, G. & Goanta, C. ‘The Influencer Republic’, [2020]

 

 

13. Content

  1. This entry: (i) sets out the way that the term “content” is used in common parlance and (ii) provides examples of existing regulation which define the term (or synonyms of it).
  1. Use in common parlance
  1. There is no universally agreed definition of the term “content”; it does not appear in any major international instruments. At its broadest, the term can be considered to refer to the visual and aural elements of the internet that users experience via websites and applications. This would include all of the text, images, videos, animations and sounds that a user can see, hear or otherwise access.
  1. In recent years, the term “content” is also increasingly used to refer to a specific visual or aural element, with the term “piece of content” referring to such an element. This could be a particular post that a user has uploaded to a social media platform, or an image or video uploaded or shared.
  1. Existing regulation
  1. While “content” is the term used in common parlance, it is not the only term used, at least so far, in regulation which sets out rules relating to online content. New Zealand’s Harmful Digital Communications Act 2015 and the USA’s Communications Decency Act both use the term “content” (although neither defines it). Australia’s Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 uses the term “material” instead, noting that “material” can be audio material, visual material or audio-visual material. The European Union’s E-Commerce Directive (2000) uses the term “information”, but without defining it.

 

14. Content/web monetization

1. ‘Web monetization’ is a term of art used to refer to two related notions. First, from a broad perspective of the Internet’s history, it may encompass the ways in which content on the Web may be monetized, namely how to turn website traffic into profit. Second, from a narrower perspective, it may refer to the infrastructure needed to achieve that, and in particular to the recent browser API standard with the same name (‘Web Monetization’), developed by Coil in collaboration with the Mozilla Foundation.

Nutshell history of web monetization

2. The Internet as we know it is built on the Transmission Control Protocol (TCP, later complemented with the Internet Protocol, resulting in the TCP/IP standard). Focused on the transmission of data across networks (in other words, on access to information), this protocol was not originally designed or tailored for commercial gain, as ‘there was no native payment system built into the web at the time’ (Melendez, 2019).

3. To capitalize on the traffic generated on the Internet, web (namely website) monetization entailed, from very early on, reliance on web advertising, namely the display of ads on (popular) websites. Web advertising consists of popular practices such as pay per click and pay per impression. Pay per click entails a marketing strategy by which an advertiser pays a platform that displays its ads on advertising networks (e.g. Google AdSense). Each time a user clicks on a link displayed, e.g. in a query made on a search engine (e.g. Google, but also YouTube), the advertiser has to pay for that click. In comparison, the pay per impression model entails that the advertiser pays every time an ad is shown to a user, without requiring the user to click on it. Additional models rely on other streams of income, such as paid subscriptions (e.g. newspapers that require viewers to pay for access to content) or donations (e.g. Wikipedia).
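As a purely illustrative sketch (not drawn from the Glossary's sources), the difference between the two advertising models can be expressed as a simple revenue calculation; all figures, rate names and function names below are hypothetical.

# Illustrative comparison of two web-advertising revenue models (hypothetical figures).

def revenue_pay_per_click(clicks: int, cost_per_click: float) -> float:
    """Pay per click: the advertiser pays only when a user actually clicks the ad."""
    return clicks * cost_per_click

def revenue_pay_per_impression(impressions: int, cost_per_mille: float) -> float:
    """Pay per impression: the advertiser pays per ad view, here quoted per 1,000 impressions."""
    return (impressions / 1000) * cost_per_mille

# Hypothetical campaign: 100,000 ad views, 1% of which result in a click.
impressions = 100_000
clicks = int(impressions * 0.01)

print(revenue_pay_per_click(clicks, cost_per_click=0.50))          # 500.0
print(revenue_pay_per_impression(impressions, cost_per_mille=2.0)) # 200.0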

Social media

4. Apart from traffic generated on regular websites (e.g. Blogger, Medium, but also personal websites), a massive source of online presence for current Internet users is social media. With 3.81 billion active users on social media, 2.49 billion of whom are on Facebook and 2 billion on YouTube, social media has long been a fertile environment for the monetization of user attention. Social media content creators, also referred to as influencers (see entry for ‘Influencers/content creators’), bring views, likes and clicks to platforms that keep introducing new features to reward them. The business models behind content monetization are in constant fluctuation, and so far can be broadly divided into four different models (see Figure 1 below): influencer marketing, ad revenue, subscription/tokenization/crowdfunding and direct selling. Influencer marketing entails payment for the endorsement of a good or service, made by an advertiser or brand to an influencer or their representative. Ad revenue is the monetization generated through the display of ads on the social media channel belonging to a creator (e.g. AdSense or Instagram TV). Subscription models entail paying a fee to access content made by a creator on a given platform (e.g. YouTube), and are similar to crowdfunding on platforms such as Patreon, where a subscription is made to support the creator across any platform they may use. In addition, tokenization allows followers to spend money on platform-specific tokens, which they can give to creators at specific moments during their enjoyment of the content (e.g. Twitch even offers a combination of subscriptions and tokens, where a subscription offers so-called ‘emotes’, and subscribers can make gifts available to creators). Lastly, direct selling entails content creators selling their own branded goods to their fan base.

[IMAGE not supported by this platform. To visualize the image, please check the PDF version of the Glossary at page 51]

The Web Monetization protocol

5. In past decades, information transfer protocols have been used to gather as much information as possible on user behaviour. Under the guise of personalizing advertising to fit individual needs and preferences, behavioural advertising targets users across platforms (Centre for Data Ethics and Innovation, 2020). New data sharing architectures such as Solid aim to challenge the Internet’s advertising-based business model and give users more privacy and ownership with respect to their data (Solid, 2020). The Web Monetization protocol is a proposed browser API standard which supports the generation of a payment stream between the user and the website being viewed (Web Monetization, 2020). Payment streams are based on an open protocol suite called Interledger, used to send payments across different ledgers (Interledger, 2020).
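The payment-stream idea can be illustrated with a minimal, purely conceptual sketch: a site declares a payment pointer, and small amounts accrue for as long as the page is viewed. The payment pointer, rate and function names below are hypothetical; the actual protocol operates inside the browser and settles payments over Interledger rather than in application code like this.

# Conceptual illustration of a Web Monetization-style payment stream (hypothetical values).

PAYMENT_POINTER = "$wallet.example/alice"   # hypothetical payment pointer declared by the website
RATE_PER_SECOND = 0.0001                    # hypothetical streaming rate, in arbitrary currency units

def stream_payment(seconds_viewed: int, rate: float = RATE_PER_SECOND) -> float:
    """Accumulate a micro-payment for every second the visitor keeps the page open."""
    return seconds_viewed * rate

# A visitor who reads the page for five minutes streams a tiny total amount to the site.
print(f"{stream_payment(300):.4f} units streamed to {PAYMENT_POINTER}")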

References

 

Centre for Data Ethics and Innovation, ‘Review of online targeting: Final report and recommendations’, [February 2020], <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/864167/CDEJ7836-Review-of-Online-Targeting-05022020.pdf>.

 

Ken Melendez, ‘The State of Web Monetization’, Medium, [13 December 2019], <https://coil.com/p/kenmelendez/The-State-of-Web-Monetization/KTVijO7ah>.

 

Statista, ‘Social media - Statistics & Facts’, [18 May 2020] <https://www.statista.com/topics/1164/social-networks/>.

 

Solid, <https://solid.mit.edu>.

 

Web Monetization, <https://webmonetization.org>.

 

Interledger, <https://interledger.org>.

15. Coordinated flagging

  1. See “flagging”. Coordinated flagging refers to a form of large-scale organized campaign where a group of individuals decide to simultaneously flag the social media content of a specific individual or specific group of individuals, marking such content as offensive, with the purpose of having the impacted individual(s) banned or suspended from the platform, or their content taken down. This is commonly understood to be a form of technology-facilitated abuse and harassment, where often the content is not offensive and may in fact be content that itself calls attention to abuse that the author is experiencing, for which the author is then further targeted. Coordinated flagging is one of several ways in which online abusers game or exploit content moderation features on platforms to target their victims, often members of historically marginalized or vulnerable communities, as a form of silencing with the intent or effect of driving such users away from online spaces. For example, “In 2012, accusations swirled around a conservative group called ‘Truth4Time,’ believed to be coordinating its prominent membership to flag pro-gay groups on Facebook.” (Crawford & Gillespie, 2016).
  1. Such campaigns may also have a central political purpose beyond the politics involved in targeting historically oppressed groups online. Brittany Fiore-Silfvast describes coordinated flagging as a type of “user-generated warfare” (Fiore-Silfvast, 2012), and gives the following example of a coordinated flagging campaign known as “Operation YouTube Smackdown” (OYS), with the slogan, “Countering the Cyber-Jihad one video at a time”:
  1. OYS began out of a conversation among conservative bloggers who were inspired by the potential for private citizens to fight the war through the Internet. […] The blogger called on his blogger friends to join the effort by volunteering to scour YouTube for footage from the “enemy” and flag it for YouTube’s corporate staff to review and remove. After one of the bloggers volunteered his blog to serve as the coordinating site of operations, a handful of other bloggers began to connect their blogs and direct their readership to OYS. It was there and then that the conservative bloggers and their readership began organizing themselves into a network army that would fight Internet terrorists on YouTube (Fiore-Silfvast, 2012 p. 1972-73).
  1. Coordinated flagging is also known as “strategic flagging” or “organized flagging”, and results in user flags playing a governing role “not expressing individual and spontaneous concern but as a social and coordinated proclamation of collective, political indignation—all through the tiny fulcrum that is the flag, which is asked to carry even more semantic weight.” (Crawford & Gillespie, 2016 p. 421)

 

References

 

Kate Crawford, Tarleton Gillespie, ‘What is a flag for? Social media reporting tools and the vocabulary of complaint’ [2016] New Media & Society 410, 420.

 

Brittany Fiore-Silfvast, "User-Generated Warfare: A Case of Converging Wartime Information Networks and Coproductive Regulation on YouTube" [2012] International Journal of Communications 1965.

 

 

16. Coordinated Inauthentic Behavior

  1. Coordinated inauthentic behaviour (CIB) is a term frequently associated with concepts such as disinformation, online misinformation, computational propaganda, and the mass leveraging of bots or fake accounts to carry out a particular set of actions or disseminate particular messages across social media platforms. The term originated with Facebook, which has defined CIB as “domestic, non-government campaigns that include groups of accounts and Pages seeking to mislead people about who they are and what they are doing while relying on fake accounts”, including “fake engagement, spam and artificial amplification” (Facebook, 2020). To be clear, CIB can also be engaged in or instigated by foreign actors and governmental actors; in these cases, such activity is still considered a form of CIB, but Facebook then categorizes it as “Foreign or Government Interference (FGI)”. The more general definition that describes the behaviour itself, regardless of the actor involved, may thus be more universally applied outside of Facebook’s specific internal categorization.
  1. Despite the increasing popularization of the term, observers have noted that coordinated inauthentic behaviour still does not have a completely stable or clear definition. For example, platform regulation scholar Evelyn Douek questioned whether CIB would include a “tactical and relatively sophisticated” campaign where teenagers on TikTok and K-pop fans purposely reserved tickets to a campaign rally for the U.S. president in Tulsa, Oklahoma, in June 2020, in order to “artificially inflate expected attendance numbers and mess with the Trump campaign’s data collection” while displacing genuine supporters and ensuring large numbers of empty seats at the rally (Douek, 2020). In response, “Facebook’s head of security … explained that the teens’ stunt wouldn’t have met Facebook’s definition of CIB because it did not involve the use of fake accounts or coordinate to mislead users of the platform itself (as opposed to misleading people off the platform)” (Douek, 2020). As another example complicating definitional boundaries, Douek cites “how a network of 14 purportedly independent large Facebook pages drove traffic to the conservative site the Daily Wire, one of the most popular publications on Facebook, including by publishing the same articles at the same time with the same text” (Douek, 2020). Facebook’s explanation for not treating this activity as CIB was that “CIB is reserved for the most egregious violations and this didn’t meet the threshold because the accounts weren’t fake” (Douek, 2020).
  1. Thus, the definition of “coordinated inauthentic behaviour” is still in flux, both in attempts to interpret and establish exactly what Facebook itself means by CIB, as well as in establishing what CIB means as a standalone term in the field of platform regulation generally, independent of what Facebook itself may consider to be CIB for its own internal purposes. As Douek points out, “Rare is the piece of online content that is truly authentic and not in some way trying to game the algorithms. Coordination and authenticity are not binary states but matters of degree, and this ambiguity will be exploited by actors of all stripes.” (Douek, 2020).

 

References

‘March 2020 Coordinated Inauthentic Behavior Report’ [2 April 2020], Facebook <https://about.fb.com/news/2020/04/march-cib-report/>.

 

Nathaniel Gleicher, ‘Coordinated Inauthentic Behavior Explained’ [6 December 2018], Facebook <https://about.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-be....

 

Evelyn Douek, ‘What Does “Coordinated Inauthentic Behavior” Actually Mean?’ [2 July 2020] Slate, <https://slate.com/technology/2020/07/coordinated-inauthentic-behavior-fa... twitter.html>.

 

17. Co-regulation

  1. Self-regulation as developed by the concerned (market) participants can be strengthened by involving governmental agencies in the rule-developing and rule-implementing processes. Depending on the degree of involvement, the respective forms of co-operative rule-making are called (i) co-regulation, (ii) regulated self-regulation, (iii) directed self-regulation or (iv) audited self-regulation (Weber 2014: 23/24). The most common model balancing the interests of international organizations, States, businesses, and civil society is co-regulation, as described hereinafter. Such a multistakeholder approach can serve legitimate State purposes as well as the efforts of the private sector.
  1. Co-regulation as a model (a term coined by Hoffmann-Riem [2000] in relation to the media markets) means that the government provides a general framework which is then substantiated by the private sector; i.e. the State legislator sets the legal yardsticks and leaves the codification of the given principles, by way of specific rules, to private bodies. Thereby, regulation can remain flexible and innovation-friendly. Additionally, the government remains involved in the private rule-making activities, at least in a monitoring function, supervising the progress and the effectiveness of the initiatives in meeting the perceived objectives (Senn 2011: 43, 139-148, 230; Marsden, Meyer and Brown 2020: 9).
  1. Co-regulation is a regulatory model that leaves the actual “regulator” independent of the government as long as the rules remain within the legislative framework. Whether, for example, Codes of Conduct developed by the private sector need approval from a public authority to become effective depends on the applicable legal provisions (as is partly the case in financial markets). Such a requirement appears justified if private rule-making risks implementing insufficiently adequate normative standards. Governments can assess the representativeness of self-regulatory standards and judge the appropriateness of best practices; interventions appear justified if a higher level of protection measures is desirable (Marsden, Meyer and Brown 2020: 9).
  1. Many examples of co-regulatory mechanisms exist (Marsden, Meyer and Brown: 9/10), for example in the media markets and in the Internet regulatory ecosystem (e.g. Nominet, EURid). Social media platforms are often exposed to court proceedings (e.g. Delfi, MTE v. Hungary); the implementation of standards and best practices can help to limit the respective risks.

 

References

 

Hoffmann-Riem Wolfgang. ‘Regulierung der dualen Rundfunkverordnung’, [2000] Baden-Baden.

 

Marsden Chris, Trisha Meyer and Ian Brown. ‘Platform values and democratic elections: How can the law regulate digital disinformation?’, [2020] Computer Law & Security Review 36, 105373. Available at <https://www.journals.elsevier.com/computer-law-and-security-review>

 

Senn Myriam. ‘Non-State Regulatory Regimes: Understanding Institutional Transformation’, [2011] Berlin

 

Weber Rolf H. ‘Realizing a New Global Cyberspace Framework’, [2014] Zürich

 


18. Content Curation

(I) In the context of online services, “content curation” refers to the selection of relevant content from a larger subset of available content. Thorson and Wells define curation as the “production, selection, filtering, annotation or framing of content” (Thorson & Wells, 2016). In the modern environment of information abundance, curation fulfils an essential function: “To curate is to select and organize, to filter abundance into a collection of manageable size, one that in its smaller shape fulfils an informational or strategic need more efficiently than the buzzing flow of all available options” (Thorson & Wells 2016).

  1. Curation is performed by a variety of actors through a variety of methods. Many discussions focus on the role of dominant online platforms, which curate content primarily through algorithmic features such as search, ranking and recommendation (e.g. Van Couvering 2009, Helberger, Kleinen-Von Königslöw & Van Der Noll 2015). Broader understandings of curation also recognize the role of other actors, including individual users, advertisers and content providers, in shaping online information flows (e.g. Thorson & Wells 2016; Napoli 2019). For instance, users can interact with recommender systems through rating and sharing content, and also have their own means to disseminate content through other channels; whereas content providers source the pool of available content from which rankings and recommendations are surfaced.
  1. Content curation is closely connected to, though distinct from, content moderation. They can be seen as two sides of the same coin: moderation speaks to the combatting of undesired content, whereas curation speaks to the surfacing of desired content. Accordingly, moderation is more associated with the removal of content or the sanctioning of users, whereas curation is associated with policies related to the design of search, recommender and ranking systems. This being said, content moderation can also be effectuated through curation systems, e.g. by down-ranking content or speakers. In this light, the design of ranking algorithms implicates both content moderation and content curation. Indeed, the zero-sum nature of ranking, in which advantaging certain content necessarily disadvantages other content, means that any act of content curation in recommender systems can also be seen as a form of content moderation, and vice versa.

(II) Content curation is not a legal concept, and it has not yet made its way into legislation or case law. However, content curation has received increasing attention in internet policy debates, reflecting a growing recognition that platforms influence online ecosystems not merely by enforcing content prohibitions but more fundamentally by structuring content visibility. Influential reports on this topic have been issued by the World Wide Web Foundation and the UN Special Rapporteur on Freedom of Expression, amongst others (World Wide Web Foundation 2019). New legal standards are also developing to regulate platform content recommender systems, as a particularly influential form of content curation (Cobbe & Singh 2019). Key examples include the EU’s Platform-To-Business Regulation and Germany’s pending Medienstaatsvertrag. Ranking systems are also subject to other regulations which constrain curation, such as delisting rights found in data protection law, and the prohibition of abuse of a dominant position under competition law.

 

References

 

Ávila, Renata, Juan Ortiz Freuler and Craig Fagan. ‘The Invisible Curation of Content: Facebook’s News Feed and our Information Diets.’ [2018] World Wide Web Foundation. Available at: http://webfoundation.org/docs/2018/04/WF_InvisibleCurationContent_Screen_AW.pdf

 

Cobbe, Jennifer and Jatinder Singh. ‘Regulating Recommending: Motivations, Considerations and Principles”. [2019] European Journal of Law and Technology 10(3).

 

Helberger, Natali, Kleinen-Von Königslöw, Katharin & Van der Noll, Rob, ‘Regulating the new information intermediaries as gatekeepers of information diversity’, [2015]

 

Napoli, Philip. ‘Social Media and the Public Interest: Governance of News Platforms in the Realm of Individual and Algorithmic Gatekeepers’, [2015] Telecommunications Policy 39(1).

 

Napoli, Philip. ‘Social Media and the Public Interest: Media Regulation in the Disinformation Age’. [2019] New York: Columbia University Press.

 

Thorson, Kjerstin and Chris Wells. ‘Curated Flows: A Framework for Mapping Media Exposure in the Digital Age’. [2016] Communication Theory 26(3).

 

Van Couvering, Elizabeth. ‘Search engine bias: the structuration of traffic on the World-Wide Web’. [2010] PhD Thesis, The London School of Economics. Available at: http://etheses.lse.ac.uk/41/


19. Dark patterns

  1. This entry discusses: (I) the notion of dark pattern, its history and evolution; (II) a taxonomy of dark patterns; (III) existing regulatory and consumer advocacy responses to dark patterns.

(I) A “dark pattern” is a user interface design choice that benefits an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions (Mathur et al. 2019). The expression was coined in 2010 by online designer Harry Brignull, who created an online guide and repository of cases (darkpatterns.org, currently maintained by Alexandre Darlington) and referred to a “user interface that has been carefully crafted to trick users into doing things, such as buying or signing up for things”. It should be noted that this early definition included the three central elements of deceptiveness, deliberateness and accomplishment of the deceptive purpose. Later definitions refined the concept following a less deterministic approach, which dispensed with the requirements of specific intent and of specific effects of deception on users: for instance, Mathur et al. (2019) refer more generally to “benefiting an online service by coercing, steering or deceiving” (emphasis added) and Luguri and Strahilevitz (2019) to “user interfaces whose designers knowingly confuse users, make it difficult for users to express their actual preferences, or manipulate users into taking certain actions” (emphasis added). The term is also closely linked to the literature on malicious interface design techniques, defined as “interfaces that manipulate, exploit or attack users” (Conti and Sobiesk 2010); and to the broader concept of nudging, defined as “influencing choice without limiting the choice set or making alternatives appreciably more costly in terms of time, trouble, social sanctions, and so forth” (Hausman and Welch 2010). The distinctive element, however, common to all existing definitions, is the covert and insidious nature of dark patterns, which in certain cases may amount to legally actionable fraud, unfair commercial practices or other violations of consumer and data protection rules (see section III below).

(II) Existing literature has broken down dark patterns into different categories. The most complete taxonomy to date has been offered by Luguri and Strahilevitz (2019), who have reviewed existing taxonomies and identified 7 general categories, each divided into types or “variants”, for a total of 17 types of dark pattern. The following is the list of categories and their corresponding types:

    • Nagging, which includes only one type, and is constituted by “Repeated requests to do something the firm [as opposed to the user] prefers”;
    • Social Proof, including “Activity Message” (Informing the user about the activity on the website, e.g., purchases, views, visits), “Testimonials” (Testimonials on a product page whose origin is unclear);
    • Obstruction, including “Roach Motel” (Asymmetry between signing up and canceling), “Price Comparison Prevention” (Frustrating comparison shopping), “Intermediate Currency” (Set purchases in virtual currency to obscure cost);
    • Sneaking, including “Sneak into Basket” (Adding additional products to users’ shopping carts without their consent), “Hidden Costs” (Revealing previously undisclosed charges to users right before they make a purchase), “Hidden Subscription” (Charging users for an unanticipated / undesired automatic renewal) and “Bait and Switch” (Customer sold something other than what was originally advertised);
    • Interface interference, including “Hidden Information / Aesthetic Manipulation / False Hierarchy” (Visually obscuring important information), “Pre-selection” (Pre-selecting a firm-friendly default), “Toying with Emotion” (Emotionally manipulative framing), “Trick Questions” (Intentional or obvious ambiguity), “Disguised Ad” (Inducing consumers to click on something that is not apparently an ad) and “Confirmshaming” (Framing a choice in a way that seems dishonest / stupid);
    • Forced Action, including “Forced Registration” (Tricking consumers into thinking registration is necessary);
    • Urgency, including “Low stock / high-demand message” (Falsely informing consumers of limited quantities) and “Countdown Timer” (Giving a message that an opportunity ends soon with a blatantly false visual cue).
  1. Domain-specific dark patterns have also been identified, sometimes creating new categories or types. For instance, in the privacy field Bösch et al. (2016) added “Hidden Legalese Stipulations” (hiding malicious information in lengthy terms and conditions) and the French Data Protection Authority identified a range of actions interfering with privacy choices from the perspective of “pushing the individual to accept sharing more than what is strictly necessary”, “influencing consent”, “creating frictions with data protection actions” and “diverting the individual” (Commission nationale de l'informatique et des libertés, 2019); while in the context of users’ spatial relationship with digital devices Greenberg et al. (2014) introduced “Captive Audience” (taking advantage of users’ need to be in a particular location or do a particular activity to insert an unrelated interaction) and “Attention Grabber” (visual effects that compete for users’ attention).

(III) As dark patterns may constitute a violation of existing legal rules, some specific guidance has recently been issued by regulators in the consumer protection (Authority for Consumers and Markets, 2020) and data protection fields (CNIL, 2019). Furthermore, consumer organizations have published reports finding problematic use of dark patterns with regard to data collection (Norwegian Consumer Council, 2018; Transatlantic Consumer Dialogue and Heinrich Böll Stiftung, 2020), and academic studies have been conducted to demonstrate the influence of dark patterns on compliance with GDPR requirements for a valid consent (Nouwens et al., 2020). These guidance documents and reports highlight the possible liability arising from dark patterns in relation to misleading and aggressive commercial practices, the violation of privacy by design and the rules on free, informed and specific consent. They also note the insufficiency of self-regulation, which by contrast is a central feature of a legislative bill (the DETOUR Act) introduced into the US Senate in 2019 by Senator Mark Warner to prohibit large online platforms from using deceptive user interfaces, known as “dark patterns”, to trick consumers into handing over their personal data. The bill would entrust an industry association with the formulation of guidelines, and would even provide a safe harbor against enforcement by the Federal Trade Commission for the design practices of large online platforms.

 

References

 

Authority for Consumers and Markets (ACM), ‘Guidelines on the Protection of the online consumer. Boundaries of online persuasion’ [2020]. Available at <https://www.acm.nl/sites/default/files/documents/2020-02/acm-guidelines-on-the-protection-of-the-online-consumer.pdf>

 

Commission nationale de l'informatique et des libertés (CNIL). ‘Shaping Choices in the Digital World. From dark patterns to data protection: the influence of ux/ui design on user empowerment’. [November 2019] IP Reports Innovation and Foresight N°06. Available at <https://linc.cnil.fr/sites/default/files/atoms/files/cnil_ip_report_06_shaping_choices_in_the_digital_world.pdf>

 

Bösch, Christoph, Benjamin Erb, Frank Kargl, Henning Kopp, and Stefan Pfattheicher. ‘Tales from the dark side: Privacy dark strategies and privacy dark patterns’. [2016] Proceedings on Privacy Enhancing Technologies 237, 254. Available at <https://petsymposium.org/2016/files/papers/Tales_from_the_Dark_Side_Privacy_Dark_Strategies_and_Privacy_Dark_Patterns.pdf>

 

Conti, Gregory and Edward Sobiesk. ‘Malicious interface design: exploiting the user’, [2010] Proceedings of the 19th international conference on World wide web 271, 280. Available at

<https://doi.org/10.1145/1772690.1772719>

 

Greenberg, Saul, Sebastian Boring, Jo Vermeulen, and Jakub Dostal. ‘Dark Patterns in Proxemic Interactions: A Critical Perspective’. [2014] Proceedings of the 2014 Conference on Designing Interactive Systems (DIS ’14). ACM, New York 523, 532. Available at https://doi.org/10.1145/2598510.2598541

 

Hausman, Daniel and Brynn Welch, ‘Debate: To Nudge or Not to Nudge’, [2010] 18 Journal of Political Philosophy 123, 136. Available at https://doi.org/10.1111/j.1467-9760.2009.00351.x

 

Luguri, Jamie and Lior Strahilevitz, ´Shining a Light on Dark Patterns’ [August 1, 2019]. University of Chicago, Public Law Working Paper No. 719; University of Chicago Coase-Sandor Institute for Law & Economics Research Paper No. 879. Available at SSRN: https://ssrn.com/abstract=3431205

 

Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, Arvind Narayanan. ‘Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites’. [July 17, 2019 working paper]. Available at https://arxiv.org/abs/1907.07032

 

Norwegian Consumer Council. ‘Deceived by Design: How tech companies use dark patterns to discourage us from exercising our rights to privacy’. [2018] Available at https://fil.forbrukerradet.no/wp-content/uploads/2018/06/2018-06-27-deceived-by-design-final.pdf

 

Nouwens, Midas, Ilaria Liccardi, Michael Veale, David Karger, Lalana Kagal. ‘Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence’, [2020] Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems 1, 13. Available at https://arxiv.org/abs/2001.02479

 

Transatlantic Consumer Dialogue and Heinrich Böll Stiftung. ‘Privacy in the EU and US: Consumer experiences across three global platforms’ [2020]. Available at https://eu.boell.org/en/2019/12/11/privacy-eu-and-us-consumer-experiences-across-three-global-platforms

 


20. Data Portability

  1. Data portability is defined as “the right [of a person] to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format”. This definition is contained in Article 20 of the GDPR, which was the first legislative source establishing such a right.
  1. As per the Article 29 Working Party Guidelines, the right covers data that were provided to the controller by the user, including data acquired by the controller by observing the user’s behaviour, such as activity logs; it does not include further elaborations, such as inferred or derived data.
  1. The Guidelines also identify negative conditions for the exercise of this right, in particular that (1) it does not concern processing necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller; (2) its exercise is without prejudice to the right to erasure provided by Article 17 GDPR; and (3) it does not adversely affect the rights and freedoms of others. Of these negative conditions, the third is the most open-ended and uncertain from a business perspective, particularly considering that the personal data subject to the request may simultaneously involve personal data of third parties (“networked data”). The Article 29 Working Party gives the example of a directory of the data subject’s contacts, suggesting that the data controller can only process such requests to the extent that there is a valid legal basis, for example a legitimate interest, which could be met if the new data controller were to provide a service allowing the data subject to process his personal data for purely personal or household activities. This interpretation, which presumes that the original data controller obtains specific and sufficiently reassuring information about the subsequent use of the received data, seeks to protect the data protection rights of third parties, which could otherwise be seriously affected by an overly broad interpretation of the right to data portability. At the same time, the reference to “private or household uses” is also a safeguard against possible effects on competition derived from a strategic use of the right to data portability in order to gather commercial value from third-party data.
  1. Aside from the specific instance of networked data, other concrete possibilities of conflict may arise between the right to data portability and the rights or freedoms of third parties. The A29 WP Guidelines merely mention one of these possibilities, specifically the tension with intellectual property or trade secrets, recalling one of the Recitals of the GDPR according to which “the result of those considerations should not be a refusal to provide all information to the data subject”. This is certainly not an exhaustive indication of how such conflicts should be resolved, but it provides a hint that one-sided solutions (e.g., absolute refusal in deference to trade secrets) would not be acceptable. It can thus be expected that a data controller take reasonable measures to provide as much of the requested information as possible by decontextualizing personal data from proprietary algorithms or trade secrets. This arguably will not be an issue as far as provided data is concerned, since such data does not reveal anything about the inner workings of the systems used to store and analyze them. On the other hand, intellectual property and trade secrets may present some challenges when it comes to observed data, which can be difficult to disentangle from the categories designed by the controller to process the data inputs. Even in cases where de-contextualization is not feasible, however, the fact that the data is transferred to the user or a different data controller does not as such imply that the underlying intellectual property will necessarily be violated: the data subject and the second controller surely bear liability for any illegitimate processing of those data. This somewhat cynical understanding appears reflected in the statement in the Article 29 WP Guidelines that “a potential business risk cannot, in and of itself, serve as the basis for a refusal to answer the portability request”. Yet it obviously raises the question of what threshold of substantiation of a risk entitles a data controller-right holder to prevent future infringements of IP rights in the context of data portability requests. This is a matter largely left open to future guidelines (by the European Data Protection Board) and legislation, with Recital 73 of the GDPR offering examples and stressing the need for any restrictions to data portability to comply with the EU Charter of Fundamental Rights and the European Convention on Human Rights.
  1. Data portability is one of the rights that are meant to give individuals control over their data. Its purpose is also to allow the individual to switch to a different service provider without having to provide all their information again. Thus, Article 20 of the GDPR also foresees the right to have the data transmitted directly to another controller, “if technically feasible”. This would enable more choice and more competition in digital service markets, similar to what happened when number portability was introduced in the mobile telephony market. However, the Internet industry has not enthusiastically embraced the concept, and the implementation of the data portability right mostly remains limited to exporting the user’s data into a file, while “technical solutions for standardised data exchange remain in their infancy” (CERRE, 2020).
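To make the “structured, commonly used and machine-readable format” requirement more concrete, a minimal sketch of a portability export is given below; the field names and the JSON layout are hypothetical and only meant to illustrate the distinction, discussed above, between provided, observed and inferred data.

import json

# Hypothetical record held by a controller about one data subject.
provided_data = {"name": "Alice", "email": "[email protected]"}     # actively supplied by the user
observed_data = {"login_timestamps": ["2020-06-01T10:00:00Z"]}       # e.g. activity logs
inferred_data = {"estimated_interests": ["cycling"]}                 # derived by the controller

def build_portability_export() -> str:
    """Export only provided and observed data, leaving inferred/derived data
    out of the machine-readable file, per the A29WP reading of Article 20 GDPR."""
    export = {"provided": provided_data, "observed": observed_data}
    return json.dumps(export, indent=2)

print(build_portability_export())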

 

References

 

Regulation (EU) 2016/679 of the European Parliament and of the Council (EC) on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [27 April 2016]. Available at: < https://eur- lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32016R0679>

 

Article 29 Data Protection Working Party, 'Guidelines on the right to data portability' [2016]. Available at: <https://ec.europa.eu/newsroom/document.cfm?doc_id=44099>

 

Jan Krämer, Pierre Senellart & Alexandre de Streel, 'Making data portability more effective for the digital economy:' (Centre on Regulation in Europe 2020) <https://cerre.eu/wp-content/uploads/2020/07/cerre_making_data_portability_more_effective_for_the_digital_economy_june2020.pdf>

 

Gianclaudio Malgieri, ‘Trade Secrets v Personal Data: a possible solution for balancing rights’ [2016] International Data Privacy Law 6 (2) 102, 116.

 

Inge Graef, Martin Husovec and Nadezhda Purtova, ‘Data Portability and Data Control: Lessons for an Emerging Concept in EU Law’ [December 15, 2017]. TILEC Discussion Paper No. 2017-041; Tilburg Law School Research Paper No. 2017/22. Available at SSRN: <https://ssrn.com/abstract=3071875 or http://dx.doi.org/10.2139/ssrn.3071875> accessed 10 July 2018.

21. Defamation

  1. This entry: (i) sets out guidance on how the term “defamation” is understood and (ii) provides examples of existing regulatory responses to defamation.
  1. Guidance on understanding the term “defamation”
  1. Defamation is prohibited in the majority of states around the world (see below); however, there is no universally accepted definition of the term. Definitions largely coalesce around the communication of a statement (ordinarily false) about another person that unjustly harms their reputation. While it is beyond the scope of this glossary to seek to provide a definitive definition of “defamation”, there are two critical considerations for policymakers seeking to address online manifestations of defamation.
  1. First, it is unlikely that a distinct and separate definition for defamation that takes place online will be necessary. Instead, existing definitions of defamation should be reviewed to ensure that they apply to all forms of defamation, whether offline or online. There should not be different legal processes, sanctions or remedies relating to defamation depending on whether it took place offline or online.
  1. Second, any definitions of defamation should be consistent with international human rights standards, particularly the right to freedom of expression, as set out in relevant guidance. The then UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Ambeyi Ligabo, provided particularly useful guidance in 2007, stating that:
  1. “A statement can be considered as defamatory under certain specific conditions: it must be published, in a spoken, written, pictured or gestured form. Written and pictured statements, which include drawings, video clips, and movies and so on, are considered more serious offences as they last longer than mere verbal statements, which are generally defined as slander. The statement must be false, in the sense that its contents should be totally untrue; it has to be injurious - there is no defamation without injury - and finally, unprivileged, in the sense that certain categories of individuals cannot be sued while making statements, especially in their professional capacity. Last but not least, a statement can be considered as defamatory if done with actual malice, which means that there was a real willingness to harm the defamed person.” (Ligabo, 2007).
  1. There is also a strong consensus that defamation should not be a criminal offence, but dealt with under civil law (Ligabo, 2007 and UN, 2011).
  1. Existing regulatory responses
  1. As noted above, defamation is prohibited in the majority of states around the world. Often this is done via civil law provisions which allow individuals to bring legal proceedings against those that have defamed them, and to seek damages or some other remedy for the harm caused. Although this is considered to be inconsistent with international human rights law and standards, some states also have provisions in their criminal laws prohibiting defamation, thus enabling individuals to be prosecuted and punished for defamation.
  1. While the prohibitions of defamation through civil and/or criminal law mean that legal persons who publish defamatory statements, such as newspapers, can be held liable, a small number of states also allow for online platforms to be held liable for defamatory statements posted or shared by third parties on those platforms. In some states, the liability exists at the point that the defamatory statement is posted; in others, it only arises once the platform has become aware of the defamatory statement and fails to remove it within a reasonable period of time.
  1. In at least two European states, Estonia and Hungary, online platforms have brought cases to the European Court of Human Rights arguing that holding them liable for the defamatory statements of third parties constituted a violation of the right to freedom of expression. In one of the cases, Delfi AS v. Estonia (Application no. 64569/09), the court held that there was no violation, on the basis that the comments were clearly unlawful, that the platform professionally managed the comments section of its website where the comments were made, and that the platform took insufficient measures to remove the comments without delay. In the other case, Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (Application no. 22947/13), the court held that there had been a violation, on the basis that the comments were not clearly unlawful, the platform was not operated on a commercial basis, and the platform took general measures to prevent defamatory comments.

 

References

 

Ambeyi Ligabo, UN Human Rights Council, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’, UN Doc. A/HRC/4/27, [2 January 2007], Paragraph 47.

 

ARTICLE 19, ‘Defining Defamation: Principles on Freedom of Expression and Protection of Reputation’, [2017], available at: https://www.article19.org/data/files/medialibrary/38641/Defamation-Principles-(online)-.pdf.

 

UN Human Rights Committee, ‘General comment No. 34: Article 19: Freedoms of opinion and expression, UN Doc. CCPR/C/CG/34’, [12 September 2011], Paragraph 47.

 

 

 

22. Deindexing

  1. This term refers to the intentional or unintentional removal of results from search engines and/or indexing websites. A website can be deindexed using robots.txt, which can be used to prevent crawlers from indexing pages or sites, or it can be manually delisted. Search engines can themselves remove or reduce the visibility of unwanted or low-quality results, such as spam or “clickbait.” There are paid services, such as reputation management companies, that can be hired to remove unwanted results from search engines. Governments have required the deindexing of specific categories of information, for example through the “Right to be Forgotten”, which allows individuals to request that their personal data be removed from search engine results, amounting to deindexing through the delisting of results.
  1. A more technical understanding of “deindexing” refers to the de-indexation of a page from search engine crawlers, which removes the need to present a “sanitized version” of the results in the first place. This can be achieved voluntarily by websites using robots.txt files, which convey that specific message to search engines. However, it is more complex to accomplish when the information should only be removed from search engines in connection with a particular keyword, as that cannot be accomplished (at least for the time being) without human intervention.
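As an illustration of the voluntary, crawler-level route to deindexing, the sketch below uses Python's standard urllib.robotparser module to check whether a hypothetical robots.txt file would allow a search-engine crawler to fetch a page; the rules, crawler name and URLs shown are invented for the example.

from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: ask all crawlers to stay out of /private/, allow everything else.
robots_txt = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant search-engine crawler would skip the disallowed path and therefore not index it.
print(parser.can_fetch("ExampleBot", "https://example.org/private/report.html"))  # False
print(parser.can_fetch("ExampleBot", "https://example.org/public/page.html"))     # True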

 

 

 

 

23. Demonetization

  1. Demonetization refers to a unilateral action in the form of a sanction that a platform (traditionally a social media platform) takes in order to remove a creator/influencer’s access to the streams of revenue the platform controls. As indicated under two other relevant entries (‘Content creators/influencers’ and ‘Content/web monetization’), one of the business models through which users can make money on platforms such as YouTube or Instagram (less so Facebook) is the monetization of their content through platform-specific programmes, allowing advertisers to display ads in various forms (e.g. banners, multimedia clips) throughout or over their content (Goanta & Ranchordás, 2020). Compared to deplatforming, which entails the removal of a user, demonetization is a less stringent sanction, as it only removes potential income for particular videos. Demonetization can take place in two ways: income can be completely removed, or it can be redirected. The latter situation can occur when a creator receives a copystrike (copyright strike) and the claimant of the copystrike holds copyright over material used by the creator. In that case, the claimant of the copystrike can ask for the ad-generated income on the video of the creator infringing their copyright.
  1. Demonetization is closely related to content moderation, because it is a way in which platforms control content. However, due to the inherent characteristics of private governance, such as the lack of transparent criteria for the interpretation of community guidelines in determining which content is in contravention to these rules, platform discretion is a serious problem in the content creator community (Caplan & Gillespie, 2020; Lobato, 2016).
  1. Additional problems with demonetization concern content creators’ rights to due process and to an effective remedy. For example, although an appeal process is available on YouTube, this does not compensate for the loss of revenue that occurs when content creators are deprived of monetization in the first hours after publication, which are likely to be the most remunerative ones (Caplan & Gillespie, 2020), and sometimes even for months (Koi, 2020). Further, it has been noted that YouTube’s policy establishes that only those creators who have at least 1000 video views in a week or 10000 channel subscribers can request a re-evaluation of demonetization by a human being. This has been criticized as establishing a “tiered governance” system (Caplan & Gillespie, 2020), where the rules governing the relationship with creators are different depending on their monetization potential. While this may be economically sensible, it is in conflict with the universal and unwaivable nature of fundamental rights.

 

References

 

Catalina Goanta & Sofia Ranchordás, ‘The Regulation of Social Media Influencers’, [2020] Cheltenham: Edward Elgar.

 

Caplan, R., & Gillespie, T. ‘Tiered Governance and Demonetization: The Shifting Terms of Labor and Compensation in the Platform Economy’. [2020] Social Media + Society.

Lobato, R. ‘The cultural logic of digital intermediaries: YouTube multichannel networks’. [2016] Convergence: The International Journal of Research into New Media Technologies, 22(4) 348, 360.

 

Koi, C. ‘Wrongfully demonetized, how many months without revenue on average? - YouTube Community (7/2/20)’, [2020] Available at https://support.google.com/youtube/thread/56781687?hl=en

24. Deplatforming

  1. Deplatforming refers to the ejection of a user from a specific technology platform by closing their accounts, banning them, or blocking them from using the platform or its services. It is worth noting that deplatforming may be permanent or temporary. Temporary suspensions and the inability to access one’s account can be considered deplatforming.
  1. Deplatforming is an extreme form of content moderation and a form of punishment for violations of acceptable behavior as determined by the platform’s terms of service or community guidelines. Platforms justify the removal or banning of a user and/or their content based on violations of their terms of service, thereby denying the user access to the community or service that they offer. Deplatforming can and does occur across a range of platforms and can refer to:
    • Social media companies, like Facebook, YouTube or Twitter;
    • Commerce platforms, such as Amazon or the Apple App Store;
    • Payment platforms, like PayPal or Visa;
    • Service platforms, like Spotify or Stitcher;
    • Internet infrastructure services like Cloudflare or web hosting.
  1. Deplatforming can be a form of content moderation by tech platforms that find certain content objectionable, or that face public pressure to restrict a user’s access to the platform, often as a result of the content of that person’s speech or ideas, or in response to harassing behavior. It has also been deployed to reduce online harassment, hate speech, and coordinated inauthentic behaviour, such as propaganda campaigns. Deplatforming can also occur because of pressure from other platforms, at the same or at different levels of the stack.
  1. Because deplatforming can refer to a range of platforms, and is often implemented as a form of content moderation, this approach by platforms that do not host content, or which are further down the internet stack, raises concerns about the expansion of content-based censorship beyond content-hosting services or platforms. The review of accounts and content can be automated, the result of human review, or a combination of both.
  1. The term is explicitly political because it often refers to banning a user from a platform because of the content of their speech and ideas. Deplatforming has been used as a response to hate speech, terrorist content, and disinformation/propaganda. For example, the major social media firms have removed hundreds of ISIS accounts since 2015, seeking to reduce the UN-designated terrorist group’s reach online, which forced them onto less public and more closed platforms, reducing their visibility and public outreach, but also making it more difficult to monitor their activities. In 2018, Facebook and Instagram deplatformed (Facebook, 2018) the Myanmar (Facebook, 2018) military after it was involved in the genocide of the Rohingya, closing hundreds of pages and accounts related to the military and banning several affiliated users and organizations from its services.
  2. Deplatforming by dominant platforms has pushed extremists to less popular or less public platforms that offer an alternative set of rules or have not yet grappled with what their rules should be. Deplatforming has given rise to alternative platforms, such as the social media site Gab, the crowd-funding site Patreon and the messaging service Telegram.
  1. Several platforms shut down and banned Alex Jones and Infowars from their services in mid-2018 in response to their support for white supremacy and involvement in disinformation campaigns, which helped politicize and publicize the concept of deplatforming.
  1. De-platforming can reduce the ability to inject a message into public discourse and recruit followers, but it can also push supporters to obscure and opaque platforms where it is substantially more difficult for law enforcement to monitor their activities. One rationale for deplatforming controversial people or organizations is to prevent them from negatively influencing others. Critics argue this is an ineffective tactic because the affected person will just go to another platform, but a Georgia Tech study found (Chandrasekharan et al., 2017) that deplatforming was an effective moderation strategy that reduced the unwanted speech or behavior, and created a demonstration effect for other users that helped enforce norms. In response to takedowns on major platforms, extremists often migrate to lesser-known or protected online forums. Research has also shown that people who are deplatformed often fail to transfer audiences from major to minor platforms. Researchers refer to the “online extremists’ dilemma” which describes (Clifford & Powell, 2019) how online extremists are forced to balance public outreach and operational security in choosing which digital tools to utilize.

 

References

 

Facebook, 'Removing Myanmar Military Officials From Facebook' (Facebook, 28 Aug 2018) <https://about.fb.com/news/2018/08/removing-myanmar-officials/>

 

Facebook, 'Update on Myanmar' (Facebook, 15 Aug 2018) <https://about.fb.com/news/2018/08/update-on-myanmar/>

 

Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, Eric Gilbert, 'You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech' [2017] Proc. ACM Hum.-Comput. Interact., Vol. 1, No. 2, Article 31, 31:22

 

Bennett Clifford, Helen Christy Powell, 'De-platforming and the Online Extremist’s Dilemma' (Lawfare 2019) <https://www.lawfareblog.com/de-platforming-and-online-extremists-dilemma>

 

 

25. Device Neutrality

  1. Device neutrality ensures users’ right to non-discrimination in the services and applications they use, irrespective of the control that hardware companies exercise over their platforms (Hermes Center, 2017; ARCEP, 2018). That means users can choose which operating system (software) or application they prefer to use, regardless of the brand of device they are using. In other words, device neutrality is instrumental to achieving equal access to applications, content and services, which is essential for an open Internet. This is a fundamental civil rights issue, as it ensures that the user has the right and the possibility to use, for example, the information and communication security tools they prefer on their devices. It is usually framed as a consumer protection rather than a technical measure because it enables citizens to fight against aggressive or deceptive commercial practices through which manufacturers limit the use of applications, unfairly favour their own content or demote that of competitors.
  1. The device neutrality argument holds that consumers have the right to uninstall software and to remove apps and content they are not interested in, which is currently often impossible because companies do not allow default applications to be removed. It also defends the possibility for all content and service developers to access the same device functions. The French telecommunications regulator ARCEP (2018) also advocates for device neutrality, stressing the need for more transparency in app store rankings and easier access to applications offered by alternative app stores.
  1. This concept was first introduced in a legislative proposal in Italy in 2014 by MP Stefano Quintarelli, who proposed a bill called “S.2484 Disposizioni in materia di fornitura dei servizi della rete internet per la tutela della concorrenza e della libertà di accesso degli utenti”. The bill was approved by the Chamber of Deputies and is still awaiting a vote in the Senate. It addresses Device Neutrality in Article 4, stating that: “Users have the right to find online, in a format suitable to the desired technology platform, and to use in fair and non-discriminatory ways software, proprietary or open source, contents and legitimate services of their choice”.
  1. Concerning the relationship between the concepts of Device Neutrality and Net Neutrality, the former can be seen as an extension of the latter, mostly because both reinforce the principle of “innovation without permission”, which means that anyone, anywhere, can create and reach an audience without anyone standing in the way (Kak & Ben-Avie, 2018). Just as Device Neutrality defends users’ right to non-discrimination in the services or apps on their devices, regardless of the hardware company, Net Neutrality defends the right to non-discrimination by Internet service providers, regardless of the content or applications used by Internet users, unless such discriminatory treatment is necessary and proportionate to the achievement of a legitimate aim (Belli & De Filippi, 2015; Belli, 2017). One is about equal access to applications, the other about equal access to the Internet.
  1. Another set of rules that could run counter to device neutrality are anti-circumvention laws, which provide penalties for those who wish to make changes to their devices and operating systems. These rules were originally created to protect intellectual works from copyright infringement, but critics argue they have proved ineffective over the years, harming competition, innovation, freedom of expression and scientific research (Doctorow, 2019; EFF, n.d.).

 

References

 

ARCEP. ‘Devices, The Weak Link in Achieving an Open Internet, Report on their limitations and proposals for corrective measures’ [2018] Available at: <https://www.arcep.fr/uploads/tx_gspublication/rapport-terminaux-fev2018-ENG.pdf>

 

Luca Belli & Primavera De Filippi (Eds.) ‘Net Neutrality Compendium. Human Rights, Free Competition and the Future of the Internet’ [2015] Springer. Available at: <https://www.ohchr.org/Documents/Issues/Expression/Telecommunications/LucaBelli.pdf>

 

Luca Belli. ‘Net Neutrality, Zero-rating and the Minitelisation of the Internet’. [2017] Journal of Cyber Policy. Routledge. Vol 2, nº 1. 96, 122

 

Borchert, Katharina. ‘EU Copyright Law Undermines Innovation and Creativity on the Internet’ [2016] Available at: <https://medium.com/mozilla-internet-citizen/eu-copyright-law-undermines-innovation-and-creativity-on-the-internet-6ef65147809d>

 

Doctorow, Cory. ‘Bird Scooter tried to censor my Boing Boing post with a legal threat that's so stupid, it's a whole new kind of wrong’ [2019] Available at: <https://boingboing.net/2019/01/11/flipping-the-bird.html>

 

Electronic Frontier Foundation (EFF) (n.d.). ‘Digital Millennium Copyright Act.’ Available at:

<https://www.eff.org/issues/dmca>

 

Hermes Center. ‘After Net Neutrality, Device Neutrality’ [2017] Available at: <https://www.hermescenter.org/net-neutrality-device-neutrality/>

 

Italian Parliament. ‘S.2484 Disposizioni in materia di fornitura dei servizi della rete internet per la tutela della concorrenza e della libertà di accesso degli utenti’, [2016] Available at:

<https://parlamento17.openpolis.it/atto/documento/id/255634>

 

Kak, Amba U. & Ben-Avie, Jochai. ‘ARCEP report: “Device neutrality” and the open internet’ [2018] Available at: <https://blog.mozilla.org/netpolicy/2018/05/29/arcep-report-device-neutrality/>

 

26. Digital Rights

  1. In a framework of "digital citizenship" (see generally eg Ribble 2011), "[b]eing a full member in a digital society means that each user is afforded certain rights, and these rights should be provided equally to all members." (id, 35). Digital rights are, in such a general sense, connected to "boundaries" [which] "may come in the form of legal rules or regulations, or as acceptable use policies" (id). Therefore, one of the key related terms is responsibility, which also points to the idea that "those who partake in the digital society would work together to determine an appropriate-use framework acceptable to all" (id).
  1. The term "digital rights" is a concept that has gained recognition through an evolving interpretation of rights recognized by governments all over the world. It is worth noting there is a conspicuous lack of l definition of the term. In fact, it is not specifically referenced in the definitions provided by core documents of legal doctrine and policy relevant for the field of platform law and policy, particularly in treaty law, international regulations or national Constitutions. However, claims are progressively being brought in front of the courts or raised in political debates emphasizing the importance of the digital environment.
  1. At the same time, any general search for the term outside these arenas of legal debate easily shows that the term gained major significance in recent times. As the term is widely used in common parlance as well as in all kinds of internet policy debates, advocacy, and legal practice, it is beyond the scope of our definition to cover all the rights that have been addressed in the above-mentioned law and policy context.
  1. A common trait in its usage is the emphasis on the internet's impact on everyday life in our societies on a global scale, through the pervasiveness of online human interaction nowadays. When striving for possible recognition by the institutions normally guaranteeing fundamental rights, digital rights would be said to act as corollaries of other rights that are available. For example, the UN Human Rights Council, in several biennial resolutions (2012, 2014, 2016, 2018), has (re-)affirmed "that the same rights that people have offline must also be protected online, in particular freedom of expression".
  1. Any progress towards realizing the idea of universal (digital) access would impact the idea of "digital rights". The ever-growing pervasiveness of digital technologies can also lead to changing the social contract between societal actors and delineating a new understanding around rights, values, responsibilities and entitlements for the benefit of all in a digital society (see generally, eg Vesnic-Alujevic et al 2019).
  1. The current digital divide(s) are, therefore, a major policy issue relevant for digital rights campaigns. In this regard, it seems important to have multi-level provisions that explicitly entrench the new digital rights, such as digital access. Besides, existing fundamental rights remain key for the development of policies surrounding recommendations for platforms' responsibility towards rights-holders.

 

References

 

Mike Ribble. ‘Digital Citizenship in Schools, Second edition’, [2011] Washington, DC: International Society for Technology in Education.

 

United Nations General Assembly, Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on the Internet’, A/HRC/20/L.13 (29 June 2012); see also the 2014 resolution <https://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/RES/26/13>, the 2016 resolution <https://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/RES/32/13> and the 2018 resolution <https://ap.ohchr.org/documents/alldocs.aspx?doc_id=29960>.

 

Vesnic-Alujevic et al. ‘The Future of Government 2030+: A Citizen Centric Perspective on New Government Models’ [2019] Available at: <https://ec.europa.eu/jrc/en/publication/future-government-2030>.

27. Disinformation

  1. Defining this phenomenon has proven to be far from simple. Scholars from different fields have provided definitions of this phenomenon (Tandoc Jr et al. 2018). The information disorder has been defined as the mix of ‘misinformation’, ‘disinformation’ and ‘malinformation’, which respectively reflect increasing levels of harm and involve different content (Wardle and Derakhshan, 2017). False information would include intentionally and verifiably false information disseminated to mislead the public (Allcott and Gentzkow, 2017). Adopting this definition would imply that only news disseminated with the intention to mislead readers falls within the field of disinformation. Therefore, other (false) information outside the framework of intent could be considered the free expression of one’s thoughts. This could cover, for instance, information shared by mistake or as satire, as well as investigative journalism which does not base its findings on entirely truthful facts but on reconstructions of truth. According to the European Commission’s High-Level Group on Fake News and Online Disinformation (‘HLEG’), disinformation is “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit. The risk of harm includes threats to democratic political processes and values, which can specifically target a variety of sectors, such as health, science, education, finance and more” (HLEG, 2018). The expression “disinformation” has been considered a more adequate way to describe the spread of false content, since the phenomenon is not just connected to (fake) news but also to false or misleading content such as fake accounts, videos and other fabricated media (Chesney and Citron, 2019). Moreover, the HLEG distinguishes the notion of disinformation from that of ‘misinformation’, i.e. ‘misleading or inaccurate information shared by people who do not recognise it as such’, and underlines that disinformation does not include illegal speech (e.g. hate speech).
  1. Disinformation is not a phenomenon of the digital age; its digital dissemination is the novelty. The digital dimension entails the worldwide reach of online content beyond territorial boundaries and traditional media environments (Sunstein, 2017). The spread of false information online during the 2016 Brexit referendum and the US presidential election can be considered two paradigmatic examples of how disinformation influences internal politics, interfering with electoral processes and undermining trust in public institutions and the media. Besides, the pandemic has shown how disinformation can affect information quality and health on a global scale (Majó-Vázquez et al., 2020).
  1. The spread of false content can be understood by looking at the new media framework (Martens et al., 2018). The characteristics of media manipulation highlight how the media sector tends to gravitate toward sensationalism, the need for constant novelty, and the pursuit of profit rather than professional ethical standards and civic responsibility (Marwick and Lewis, 2017). Digital spaces are ideal venues for disseminating information at minimal or no cost. Within this framework, the role of online platforms, including social media, becomes critical to understanding how false and misleading information spreads online. The monetization of expressions that capture users’ attention and go viral depends heavily on algorithmic systems that push certain messages to the top and promote further engagement. Unlike traditional media outlets, social media usually perform content moderation through automated systems that can decide almost instantly how information is organised online. Beyond media strategies to disseminate disinformation, scholars have emphasised the role of the political context; indeed, the role of technology platforms, bots and foreign spies has tended to be overemphasised (Benkler, 2018). Political parties and, in particular, populist movements have relied on strategies of disinformation to support their political ideas (Bayer et al., 2019).
  1. This framework shows why, before focusing on the challenges in addressing disinformation, it is worth defining the boundaries of false content, considering the digital environment as its primary context. These definitions allow us to understand the multifaceted character of disinformation, which requires public actors to face the complexities of regulating freedom of expression online in order to tackle this phenomenon. This is why dealing with disinformation means addressing the boundaries of the right to free speech, thus involving democratic values (Pitruzzella and Pollicino, 2020). Indeed, tackling disinformation requires public actors to decide to what extent speech is protected and balanced with other constitutional rights and liberties, as well as how to pursue other (legitimate) interests. Nonetheless, the regulation of speech no longer involves just states and speakers, but also multiple players outside the control of the state, such as social media companies: in the information society, the regulation of free speech resembles a triangle of states, platforms and speakers (Balkin, 2018). Therefore, due to the role of online platforms in this field, regulation should also take into account the effects of regulatory choices on the role and responsibilities of these actors.
  1. From a policy perspective, different regulatory solutions have been adopted worldwide (Robinson et al. 2020; De Gregorio and Perotti, 2019). While the US has not proposed a precise strategy to deal with this phenomenon, the European Union has focused on soft-law commitments by platforms, notably the code of practice on disinformation, and on ad hoc measures targeting the context of the European elections (Pollicino et al., 2020). Domestic legislation of European states provides a highly fragmented regulatory picture (e.g. Germany, France). From a global perspective, other regulatory experiences have shown a tendency towards the criminalisation of disinformation (e.g. Singapore, Russia). This fragmentation not only challenges the protection of the right to freedom of expression online but also undermines the principle of the rule of law, rather than promoting a clear regulatory framework to address this global phenomenon. For instance, vagueness about definitions and thresholds of harm or illegality negatively impacts the right to freedom of expression. Within this framework, judicial scrutiny of these regulatory measures could ensure a fair assessment of each case and mediation by an independent authority. The role of judicial authorities in scrutinising measures to remove false content could contribute to safeguarding the right to freedom of expression against discretionary decisions taken by platforms or non-independent public bodies. Besides, the promotion of fact-checking activities, support for professional media outlets and investment in digital literacy campaigns could play a critical role (Ireton and Posetti 2018). Relying on these measures would entail a lower impact on freedom of expression while building the instruments to fight disinformation on a global scale.

 

References

 

Allcott, Hunt and Gentzkow, Matthew, ‘Social Media and Fake News in the 2016 Election’ [2017] 31(2) Journal of Economic Perspectives 211

 

Balkin, Jack M., ‘Free Speech is a Triangle’ [2018] 118 Columbia Law Review 2011

 

Bayer, Judith et al., ‘Disinformation and Propaganda—Impact on the Functioning of the Rule of Law in the EU and Its Member States’ [2019] Study for the LIBE Committee;

 

Benkler, Yochai et al., Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press 2018)

 

Chesney, Robert and Citron, Danielle, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ [2019] 107 California Law Review 1753

 

De Gregorio, Giovanni & Perotti, Elena, ‘Tackling Disinformation around the World’ [2019] <https://www.wan-ifra.org/reports/2019/05/03/public-affairs-media-policy-briefing-tackling-disinformation-around-the-world>

 

Ireton, Cherilyn and Posetti, Julie (eds), ‘Journalism, ‘Fake News’ & Disinformation’ [2018] UNESCO <https://unesdoc.unesco.org/ark:/48223/pf0000265552>

 

Pollicino, Oreste et al., ‘The Regulatory Conundrum to Face the Raise and Amplification of False Content in Internet’ [2020] Global Community Yearbook of International Law and Jurisprudence, forthcoming

 

European Commission, ‘A Multi-Dimensional Approach to Disinformation. Final Report of the High-Level Expert Group on Fake News and Online Disinformation’ [2018]

 

Majó-Vázquez, Silvia et al., ‘Volume and Patterns of Toxicity in Social Media Conversations during the COVID-19 Pandemic’ Reuters Institute [9 July 2020] <volume-and-patterns-toxicity-social-media-conversations-during-covid-19-pandemic>

 

Martens, Bertin et al., ‘The Digital Transformation of News Media and the Rise of Disinformation and Fake News’ [2018] JRC Digital Economy Working Paper no. 2

 

Marwick, Alice and Lewis, Rebecca, ‘Media Manipulation and Disinformation Online’ Data & Society (15 May 2017);

 

Robinson, Olga et al., ‘A Report on Antidisinformation Initiative’ [2019] <wp-content/uploads/sites/93/2019/08/A-Report-of-Anti-Disinformation-Initiatives>

 

Pitruzzella, Giovanni and Pollicino, Oreste, ‘Disinformation and Hate Speech. A European Constitutional Perspective’ (Bocconi University Press 2020)

 

Sunstein, Cass R., ‘#Republic: Divided Democracy in the Age of Social Media’ (Princeton University Press 2017);

 

Tandoc Jr, Edson C. et al., ‘Defining “Fake News”: A Typology of Scholarly Definitions’ [2018] 6(2) Digital Journalism 137;

 

Wardle, C. and Derakhshan, H., ‘Information Disorder: Towards an Interdisciplinary Framework for Research and Policy Making’, Council of Europe report DGI(2017)09 <https://rm.coe.int/information-disorder-toward-an-interdisciplinary-fram... researc/168076277c>

 


28. Dispute resolution (online)

  1. Online dispute resolution (ODR) is an umbrella term referring to out-of-court dispute resolution techniques which are offered using technological infrastructures, particularly on the internet (Thomson & Sherr, 2012). As a sub-category of alternative dispute resolution (ADR), ODR is a ‘private independent but binding justice system’ (Mistelis, 2006), by which parties to a dispute have access to procedures through which they can settle their differences (Hörnle, 2009).
  1. Some authors argue that ODR systems should have essential properties such as simplicity (user-friendliness), adaptability (processes automatically adjusted to the needs of the parties) and interoperability (ensuring connectivity of stakeholders regardless of data architecture differences), in order to be properly integrated in Internet-based industries such as e-commerce (Kaufmann-Kohler & Schultz, 2004).
  1. As a system of private justice, ODR has historically focused on facilitating the resolution of disputes between sellers and consumers, which is why one of the most cited case studies of successful ODR is eBay’s Resolution Center (Del Duca et al., 2014). However, this success from early commercial activity on the Internet did not transfer smoothly to the categories of intermediation business models that emerged at later stages. For instance, sharing economy platforms such as Airbnb or Uber originally catalogued disputes between hosts and guests or drivers and passengers as customer care problems, often solved through FAQs or automated forms providing certain forms of immediate or reviewed relief (e.g. the return of a deposit fee; the refund of the charged ride rate). Especially in these cases, the limited support platforms offer in even reaching the other contracting party before or after the completion of the transaction has amplified the pitfalls of the limited liability regimes from which information society services have traditionally benefited. The same can be said for social media platforms, which intermediate media services as well as peer content, in an environment where toxicity and abusive language are abundant. This leads to various harms, some of which can easily be labelled legally (e.g. insult, defamation, incitement to hatred), and some of which are more difficult to pinpoint from the perspective of existing legal frameworks (e.g. swarm bullying or cancel culture). All in all, the rationale behind dispute resolution is that users are in search of justice (Citron & Jurecic, 2018; Katsh & Rabinovich-Einy, 2017); when that justice concerns the removal of content, it enters the realm of content moderation, given the absence of a framework for the resolution of disputes between peers beyond systems for the reporting of abusive content, which, ironically, are often themselves abused (e.g. in cancel culture). Some platforms maintain content and reporting management centers (e.g. YouTube) for areas of activity where platform liability may arise in the absence of additional measures (e.g. copyright; Google, 2020). However, given the existing disconnect between real courts and Internet harms, as well as the lack of successful, scalable models for ODR across the various services offered by information society services, it can generally be said that ODR is a field still in its infancy.

 

References

 

Thomson, S. & Sherr, A., 'Definitions of Online Dispute Resolution' in Martin Gramatikov (ed), Costs and Quality of Online Dispute Resolution: A Handbook for Measuring the Costs and Quality of ODR (1st, Maklu, 2012).

 

Mistelis, L. ‘ADR in England and Wales: a successful case of public private partnership’ [2003] 6(3) ADR Bulletin.

 

 

Hörnle J, Cross-Border Internet Dispute Resolution (Cambridge University Press 2009)

Loebl, Z., Designing Online Courts: The Future of Justice Is Open to All (Kluwer 2019).

Del Duca, L.F. et al. ‘eBay's De Facto Low Value High Volume Resolution Process: Lessons and Best Practices for ODR Systems Designers’, 6 Yearbook of Arbitration and Mediation 204 (2014).

 

Tworek et al. ‘Dispute Resolution and Content Moderation: Fair, Accountable, Independent, Transparent, and Effective’, [2020] Transatlantic Working Group, available at < https://www.ivir.nl/publicaties/download/Dispute_Resolution_Content_Moderation_Final.pdf>.

 

Citron, D. & Jurecic, Q. ‘Platform Justice: Content Moderation at an Inflection Point’, [2018] Aegis Series Paper No. 1811, available at < https://www.lawfareblog.com/platform-justice-content- moderation-inflection-point>.

 

Katsh, E. & Rabinovich-Einy, O. Digital Justice (Oxford University Press 2017).

 

Google, ‘What is a Content ID claim?’ [2020] available at <https://support.google.com/youtube/answer/6013276?hl=en>.

 


29. Due diligence

  1. In several legal fields, both in international law and in domestic legislation, the concept of due diligence corresponds to what a responsible entity – be it a state or a business enterprise – ought to do in a given situation, under normal conditions and with its best practicable and available means, with a view to behaving responsibly and fulfilling its obligations (Dupuy 1977:13). In this perspective, due diligence refers to the level of judgement, care, prudence and determination that an entity is reasonably expected to exercise under specific circumstances.
  1. In some contexts, due diligence also refers to the process by which an entity interested in a specific activity entailing potential risks, such as a purchase or an investment, identifies, analyses and defines how to manage such risks before entering into an agreement or transaction.
  1. Hence, due diligence entails a range of analyses and considerations before and/or during the performance of given activities, in order to prevent and mitigate risks that could result in harm.
  1. In the field of Environmental Law, for example, due diligence signifies the conduct to be expected from a responsible stakeholder, in order to effectively protect other stakeholders and the global environment. (Dupuy 1977:3) Failure to exercise due diligence, therefore, means incapacity to fulfil the standard of conduct expected from a responsible stakeholder in the specific situation.
  1. The International Law Commission considers due diligence a primary environmental obligation of States. In Articles 3-7 of its Draft Articles on the Prevention of Transboundary Harm from Hazardous Activities, for instance, four features of due diligence can be distinguished and applied, by analogy, to other fields:
  • taking all appropriate measures to prevent and minimise the risk
  • cooperating with other stakeholders
  • implementing obligations through all necessary regulatory actions, including monitoring mechanisms
  • a prior assessment of the possible external harm should be done before giving authorisation for an activity or a major change
  1. The Recommendations on Terms of Service and Human Rights developed by the IGF Coalition on Platform Responsibility attempt to define “due diligence” standards for online platforms with regard to three essential components: privacy, freedom of expression and due process. The existence of a responsibility of private sector actors to respect human rights was affirmed in the UN Guiding Principles on Business and Human Rights, from which the Recommendations derive their inspiration and the core elements applied to the platform responsibility domain. The Recommendations aim to provide a benchmark for respect of human rights, both in relation to a platform’s own conduct and with regard to the scrutiny of governmental requests that platforms receive. As part of their responsibility, platforms should:
  • make a policy commitment to the respect of human rights
  • adopt a human rights due-diligence process to identify, prevent, mitigate and account for how they address their impacts on human rights; and
  • have in place processes to enable the remediation of any adverse human rights impacts they cause or to which they contribute

 

References

 

Dupuy, Pierre, ‘Due diligence in the International Law of Liability’ (Legal Aspects of Transfrontier Pollution, OECD, 1977), Paris, France

 


30. Duty of care

  1. In legal dictionaries, the term "duty of care" has been defined either as a (general) principle of "prudence, meticulousness, care" or as an obligation thereto (Le Docte Legal Dictionary in Four Languages, 2011). It is generally defined as "having regard to interests" (id.). While it could refer to a specific and closed type of obligation, imposed only on the government as a duty to protect its members (thus, officials), the concept as defined here also has relevance for relations amongst or between individuals in both the governmental and private sphere.
  1. As used in normal parlance, the duty requires the taking of certain measures, and in this context companies and other non-governmental parties are also targeted. Thus, in terms of "platform responsibility", it means responsibly dealing with the negative externalities of the (commercial) practices of internet firms and other market parties involved.
  1. As follows from the academic literature on internet law (see eg Van Eijk et al 2010, Tjong Tjin Tai et al 2015; see also De Streel et al 2020), which is commonly based on comparative law methods, the concept of a "duty of care" is prone to having contested contours. While the term is formally embedded in various bodies of law, for example as part of tort law to support a mechanism for defining negligence in private relationships, it seldom has precise definitions of its own. Nonetheless, it could be said that all (self-, co-, and the more formal) regulatory responses to issues identified as relevant in law and policy making concerning online platforms aim to dictate a "duty of care" across both public and private entities based on public order needs. For example, building on other concepts such as trust, these duties of care can be imposed on non-governmental actors when public policy measures identify them as information fiduciaries. Similarly, in the safe harbor regimes that exist in global internet law, duties of care are established within the dynamics of interpreting the exemptions of liability provided for by the regime, such as the European Union’s e-Commerce Directive leaving Member States the possibility to impose reasonable duties of care on service providers ‘in order to detect and prevent certain types of illegal activities’ (EU Directive on electronic commerce, 2000). A notable new legal measure is the proposal announced in the UK's "Online Harms White Paper" (2019), which aims to oblige companies to protect users against certain harmful content and to establish an online regulator for internet safety and security issues, putting in place a regulatory framework with a “mandatory duty of care”. The White Paper and its proposals have been met with several criticisms, especially concerning the vagueness of the term “duty of care” that would confront users and platforms alike; it is said that such a general use of the term would bring undue uncertainty if implemented as a statutory response to online harm.

 

References

 

EU Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (EU Directive on electronic commerce), OJ L 178/1, 17.7.2000. Available at <https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32000L0031&from=en>.

 

UK Department for Digital, Culture, Media & Sport and Home Office, 'Online Harms White Paper' (UK Government 2019) <https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper>

 

UK Government, 'Press release' (UK Government 2019) <https://www.gov.uk/government/news/uk-to-introduce-world-first-online-safety-laws>

 

‘Symposium: Online Harms White Paper’ [2019] Journal of Media Law, Vol 11, Issue 1 <https://www.tandfonline.com/toc/rjml20/11/1?nav=tocList&>

 

Hans-Werner Zehnhoff AM, Hugues Timmermans, Erika Schmatz, Yvonne Salmon, Le Docte: Legal Dictionary in Four Languages. (1st, Intersentia, Antwerpen 2011)

 

N.A.N.M. van Eijk, T.M. van Engers, C. Wiersma, C. Jasserand & W. Abel, 'Moving Towards Balance: A Study into Duties of Care on the Internet' [2010] Institute for Information Law Research Paper No. 2012-16. Available at <https://www.ivir.nl/publicaties/download/Moving_Towards_Balance.pdf>.

 

De Streel, A. et al., Online Platforms' Moderation of Illegal Content Online, Study for the committee on Internal Market and Consumer Protection (1st, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament, Luxembourg 2020). Available at <https://www.europarl.europa.eu/RegData/etudes/STUD/2020/652718/IPOL_STU(2020)652718_EN.pdf>.

 

Tjong Tjin Tai, T.F.E.; Koops, E.J.; Op Heij, D.J.B.; E Silva, K.K.; Skorvánek, I., Duties of Care and Diligence Against Cybercrime (1st, Tilburg University, 2015). Available at <https://pure.uvt.nl/ws/portalfiles/portal/5733322/Tjong_Tjin_Tai_cs_Duties_of_Care_and_Cybercrime_2015.pdf>.

 

 

 

31. End to End encryption

  1. End-to-end encryption (E2EE) is the use of cryptography to protect a message so that it can only be read by the communicating users (Electronic Frontier Foundation, 2020; W3C, 2015), and not by third parties acting as intermediaries in the transfer of this data; this distinguishes it from transport encryption such as the Transport Layer Security protocol (TLS), which only protects the link between a user and an intermediary server. E2EE is typically made possible through the use of asymmetric cryptography (also known as public-key cryptography), which generates a pair of keys (large numbers with mathematical properties): a public key, which can be shared and is used to encrypt messages, and a private key, which is kept secret and is used to decrypt them (Electronic Frontier Foundation, 2018). This is unlike symmetric cryptography, where the same key is used for both encryption and decryption (Goanta & Hopman, 2020). An illustrative sketch appears at the end of this entry.
  1. Recent definitional issues around E2EE have shown that some companies may claim to implement E2EE when in fact that is not the case (Schneier, 2020; Lee & Grauer, 2020). Given the harms that may arise when a company does not abide by the security standards it invokes to reassure its user base, the assessment of these implementations can become crucial for consumer protection, contract law and the policing of misleading marketing.
  1. More specifically, proper E2EE would require that the sending and receiving users manage their encryption keys and procedures directly, and that third-party communication software only ever deals with the encrypted content. As this is inconvenient for the average user, almost all current implementations that claim to offer “end-to-end encryption” (e.g. in instant messaging and videoconferencing tools) in fact offer “managed app-to-app encryption”, in which the application also takes care of creating and managing the user’s keys and of encrypting and decrypting the messages. As a consequence, the application also has access to the unencrypted content and could examine it or make it available to its maker or to other parties.
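To make the asymmetric-key mechanism described above more concrete, the following minimal sketch shows two users exchanging a message that an intermediary relaying the ciphertext cannot read. It is purely illustrative: the use of the PyNaCl library, the key names and the message are assumptions, not a description of any particular product's implementation.

```python
from nacl.public import Box, PrivateKey

# Each user generates a key pair on their own device; private keys never
# leave the device, and only public keys are shared (e.g. via the platform).
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"A message only Bob should read")

# A platform relaying `ciphertext` sees only opaque bytes.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"A message only Bob should read"
```

In the "managed app-to-app encryption" arrangement described above, by contrast, the application generates and stores these key pairs on the users' behalf, which is precisely why it may still be able to access the unencrypted content.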

 

References

 

Yan Zhu, 'End-to-End Encryption and the Web' (W3C Technical Architecture Group (TAG) 2015) <https://www.w3.org/2001/tag/doc/encryption-finding/>

 

SSD.EFF.ORG, 'A Deep Dive on End-to-End Encryption: How Do Public Key Encryption Systems Work?' (Surveillance Self-Defense 2018) <https://ssd.eff.org/en/module/deep-dive-end-end-encryption-how-do-public-key-encryption-systems-work>

 

SSD.EFF.ORG, 'End-to-end encryption' (Surveillance Self-Defense 2020) <https://ssd.eff.org/en/glossary/end-end-encryption>

 

Goanta, C. & Hopman, M., 'Crypto communities as legal orders' [2020] Internet Policy Review, 9(2), DOI: 10.14763/2020.2.1486.

 

Bruce Schneier, ' Security and Privacy Implications of Zoom' ( Schneier on Security 2020) < https://www.schneier.com/blog/archives/2020/04/security_and_pr_1.html>

 

Micah Lee, Yael Grauer, 'Zoom Meetings Aren’t End-to-end Encrypted, Despite Misleading Marketing' (The Intercept 2020) <https://theintercept.com/2020/03/31/zoom-meeting-encryption/>

 

 

 

 

32. Federated Service

  1. Federated services are those provided by many organizations in a coordinated manner. Each organization provides the service to its users, but these services interoperate with each other. This means that a user from one organization can exchange data and services with a user served by another organization. E-mail is a good example of a federated service: you can send and receive email messages to and from any email user, needing only their address; it does not matter whether their mailbox is hosted by the same organization as yours. Thus, a message sent from Gmail or RiseUp reaches users of any other provider seamlessly (see the sketch at the end of this entry).
  1. Using the idea of federation for other internet services, users could use their favorite service with all its benefits of security or usability, with the additional capability of communicating with each other across servers and organisations (for example, editing the same document using Google Docs and OneDrive, or sharing the same playlist using Deezer and Spotify). Collaboration has always been key in the network services environment, and federation allows companies to maintain their independence while offering a unified experience to users.
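As a minimal illustration of how federation works for e-mail, the sketch below performs the DNS lookup that a sending provider would use to find the mail servers (MX records) of a recipient in another organization; no prior agreement between the two organizations is needed. It assumes the third-party dnspython package and uses example.org purely as a placeholder domain.

```python
import dns.resolver  # third-party package: dnspython


def mail_servers_for(address: str):
    """Return the (preference, hostname) pairs a sending provider would
    contact to deliver mail to this address: the recipient domain's MX
    records, published in the global DNS."""
    domain = address.rsplit("@", 1)[-1]
    answers = dns.resolver.resolve(domain, "MX")
    return sorted((record.preference, str(record.exchange)) for record in answers)


if __name__ == "__main__":
    # Any provider (Gmail, RiseUp, a self-hosted server) performs the same
    # lookup, which is what lets independently run services interoperate.
    for preference, host in mail_servers_for("user@example.org"):
        print(preference, host)
```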

33. Fact-checking

  1. Fact-checking has gained prominence as a concept in light of the global proportions reached by the dissemination of fabricated and misleading content, frequently categorised as “fake news” or misinformation/disinformation.
  1. The term “fact-checking”, however, can refer to two different types of activity, depending on whether it is performed as part of editorial responsibility, before the publication of specific content, or as verification of the veracity of suspicious or sensationalist content that is already circulating – and that may be entirely fabricated in bad faith with the aim of misleading public opinion.
  1. In journalism, fact-checkers traditionally proofread and verify factual claims ex ante, to make sure that articles drafted by reporters correctly represent the facts, thus evaluating the solidity of the reporting, before publication, to avoid responsibility for false claims.
  1. Ex post fact-checking seeks to verify claims and content, thus preventing public opinion from being deceived while holding public figures – typically politicians – accountable for the truthfulness of their statements. As such, fact-checkers in this line of work seek primary and reputable sources that can confirm or negate claims made to the public. (Mantzarlis 2018:82)
  1. Considering the proven vulnerability and permeability of social networking platforms to misinformation and disinformation, this latter fact-checking activity has gained prominence over the past years and become particularly relevant to debunking so-called “fake news”.
  1. As reported by UNESCO (Mantzarlis 2018:84), fact-checking is typically composed of three phases:
    • Finding fact-checkable claims by scouring through legislative records, media outlets and social media. This process includes determining which major public claims (a) can be fact-checked and (b) ought to be fact-checked.
    • Finding the facts by looking for the best available evidence regarding the claim at hand.
    • Correcting the record by evaluating the claim in light of the evidence, usually on a scale of truthfulness.

 

References

 

Alexios Mantzarlis, 'Fact-Checking 101' in Ireton, Cherilyn, Posetti, Julie (eds), Journalism, 'Fake News' and Disinformation: A Handbook for Journalism Education and Training (1st, UNESCO, 2018).

34. Flagging

  1. Flagging is a common “mechanism for reporting offensive content to a social media platform” (Crawford and Gillespie, 2016) or other digital platforms, and refers to the act itself of clicking or otherwise demarcating that a specific social media post, link, video, or other content should be removed or reviewed by the platform. According to Crawford and Gillespie, the flagging feature “is found on nearly all sites that host user-generated content, including Facebook, Twitter, Vine, Flickr, YouTube, Instagram, and Foursquare, as well as in the comments sections on most blogs and news sites.” (Crawford and Gillespie, 2016). Flagging processes can involve varying degrees of sophistication depending on the options offered by the platform. For example, some platforms may allow a user to flag content by simply reporting it as offensive but with no further detail or explanation. Other platforms may provide a drop-down menu or open field form upon a piece of content being flagged, which permits the user to write or select a pre-filled reason that they flagged the content (e.g., noting whether it is harassment, contains violence, or contains nudity), or categorizing what they consider the relevant infraction to be (e.g., image-based abuse, copyright infringement, or violating one or more community standards). While the above description is what flagging is widely understood to be at a basic level, Crawford and Gillespie discuss in their paper, “What is a flag for? Social media reporting tools and the vocabulary of complaint” the broader and more complex sociological role and influence that user flags hold or can be interpreted to have across digital platforms, as “a little understood yet significant marker of interactions between users, platforms, humans, and algorithms, as well as broader political and regulatory forces.” (Crawford and Gillespie, 2016).
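As a small illustration of the varying degrees of sophistication described above, the sketch below models a flag report as a simple data structure, covering both a bare report and one that carries a pre-filled category and a free-text explanation. The category names and fields are hypothetical, not those of any particular platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class FlagReason(Enum):
    """Hypothetical reasons a platform might offer in a flagging drop-down."""
    HARASSMENT = "harassment"
    VIOLENCE = "violence"
    NUDITY = "nudity"
    COPYRIGHT_INFRINGEMENT = "copyright_infringement"
    OTHER = "other"


@dataclass
class FlagReport:
    content_id: str        # the post, video or comment being flagged
    reporter_id: str       # the user who clicked "flag" / "report"
    reason: FlagReason     # pre-filled category, where the platform offers one
    free_text: str = ""    # optional explanation typed into an open field
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# The simplest case: the user only marks the content as objectionable.
minimal = FlagReport("post-123", "user-42", FlagReason.OTHER)

# A richer flag, with a category and an explanation for human or automated review.
detailed = FlagReport(
    content_id="video-789",
    reporter_id="user-42",
    reason=FlagReason.HARASSMENT,
    free_text="Targets a named individual with repeated insults.",
)
print(minimal, detailed, sep="\n")
```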

 

References

 

Kate Crawford and Tarleton Gillespie, ' What is a flag for? Social media reporting tools and the vocabulary of complaint' [ 2016] New Media & Society 18:3 410, 412

35. Filter

  1. This entry (i) introduces the concept and classification of Internet filters, (ii) provides a more detailed (but very general) analysis of network filters, and (iii) provides a similar analysis of endpoint filters.
  1. An introduction to Internet filters
  1. An Internet [content] filter is a piece of software (and, sometimes, of specialized hardware) that selectively blocks content being transmitted in an Internet communication.
  1. Multiple classifications of Internet filters are possible, depending on different factors. Endpoint filters operate at the end-points of a connection, i.e. on the server or on the user’s device; network filters operate in the middle, somewhere on the connection path between the user and the server. Upload filters operate when the content is first posted onto the Internet, while access filters operate when a user tries to access existing content.
  1. Independently from where and when the content is filtered, the filtering may happen on behalf of different parties. Filters can be voluntarily activated on request of the end-user, to provide services such as parental control for families or productivity control for companies. Network administrators and Internet access providers can deploy filters to prevent connection to harmful services, such as botnet command and control centres or phishing websites. Service and content providers can deploy filters to reject unacceptable content or to prevent access from specific jurisdictions. Governments and courts can mandate the deployment of filters according to applicable regulation, to enforce licensing requirements (e.g. gambling, pharmacies), to prevent access to illegal content (e.g. copyright infringements, child sexual abuse material, hate speech) or to censor their political opponents.
  1. Internet filters are widely used as an alternative to content takedown in cases where the content cannot or should not be taken down, as they make it possible to render content inaccessible even without any cooperation from the entity hosting it or from the country where it is located. These cases include:
    • content that is legal, but objectionable to the end-user or the network administrator;
    • content that is illegal in the country from which the Internet is being accessed, but legal in the country where it is hosted;
    • content that is illegal in the country where it is hosted, but cannot be taken down easily and promptly enough for technical, practical or legal reasons.
  1. There is ample debate in legal, moral, policy and technical terms on whether Internet filters are desirable and under which conditions.
  1. Network filters
  1. Network filters are usually applied by telecommunications providers, and specifically Internet access providers, since the most effective point at which to apply them is the local loop connection between the home network and the ISP’s backbone. They are very commonly used to implement filtering on behalf of any of the three stakeholders (the user, the State and the ISP).
  1. Firewalls block connections according to the destination IP address and service. They are effective but suffer both from overblocking - as often a single IP address hosts hundreds of independent websites and services - and from easy circumvention - as the service operator can simply move the service to a different IP address.
  1. Thus, content-level filtering methods have been developed. Transparent proxies silently intermediate connections at the application protocol level (HTTP, for example) and examine the actual content to decide whether to allow access to it. Deep packet inspection (DPI) appliances look at the content within network packets, to the same effect. Both methods access the actual content, including any personal information included in it, and thus infringe the user’s privacy. They have become increasingly ineffective due to the widespread adoption of encrypted protocols (e.g. HTTPS); they would then require breaking the encryption or having a backdoor into it.
  1. As an alternative, rendez-vous filters do not examine content, but only act on the service connections necessary to obtain metadata for the actual retrieval of content. The most common type is DNS filters, which are applied at the endpoint of the connection with the ISP’s DNS resolver, where the IP address for the desired hostname is retrieved. These filters do not look at the content and do not require breaking the encryption, but they do require that the user uses the filtering resolver (a minimal sketch of such a blocklist-based resolver check appears at the end of this entry).
  1. In policy terms, network filters can break network neutrality and so their use is often restricted by regulation. In the European Union, the Open Internet Regulation (Art. 3) (EU Regulation, 2015) only allows network filters if mandated by law or if necessary for network security and management. At the same time, many European countries have laws or court rulings that mandate the filtering of certain types of content or of specific websites, requiring ISPs to implement such blocks.
  1. The Internet’s technical and business community is divided over network filters. Internet platforms, application developers and the IETF prefer the use of endpoint filters whenever possible (Barnes et al., 2016), and have embraced a policy of encrypting connections as much as possible also to circumvent network filters. On the other hand, network operators and Internet service providers, often backed by their governments, find network filters desirable and useful for a variety of purposes.
  1. This disagreement also has implications for competition, as the disruption of network filters by application makers can have the effect of drawing users away from services provided by ISPs and into services provided over-the-top by the platforms (Borgolte et al., 2019), where the filtered content (even if ruled illegal in the user’s country) is immediately available.
  1. Endpoint filters
  1. Endpoint filters are applied at either edge of a network connection.
  1. When applied on the user’s device, they generally are voluntary filters to prevent some users (for example, children) from accessing some content. In this regard, endpoint filter providers directly compete with network filter providers supplying similar services with different technologies.
  1. On the other hand, endpoint filters on the server side are often used to prevent access or distribution of harmful or illegal content, similarly to network filters.
  1. For example, upload filters are applied by platforms that distribute user-generated content to verify whether such content may be objectionable or illegal. In the European Union, Article 17 of the recent Copyright Directive (EU Regulation, 2019) de facto mandates the deployment of such filters to prevent the upload of copyright-infringing content.
  1. Search filters are customarily deployed by search engines and other indexing services to hide pointers to content which is deemed illegal or inappropriate. Search engines, mostly based in the United States, customarily remove pointers to some results in response to complaints filed under the Digital Millennium Copyright Act (Google 2020).
  1. Geoblocking is a type of server-side endpoint filtering in which content is made available or not depending on the estimated country of origin of the connection. In the European Union, geoblocking is seen as a potential distortion of the single market and thus has been regulated with the Geo-Blocking Regulation (EU Regulation, 2018).
  1. As endpoint filters are generally implemented by entities other than telecommunication providers, they are usually not regulated against. They do not raise network neutrality concerns, yet platforms could also use them in non-neutral ways to influence which content and services users can access.
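To make the rendez-vous (DNS) filtering mechanism described in this entry more concrete, the following minimal sketch shows how a filtering resolver might refuse lookups for blocklisted hostnames while answering all other queries normally. It is an illustrative simplification under stated assumptions (the blocklist entries are invented, and Python's standard socket module stands in for a real resolver), not any ISP's actual implementation.

```python
import socket

# Hypothetical blocklist, e.g. hostnames an ISP is required to filter.
BLOCKED_HOSTNAMES = {
    "illegal-gambling.example",
    "phishing-site.example",
}


def filtered_resolve(hostname: str) -> str:
    """Resolve a hostname as a filtering DNS resolver would:
    refuse blocklisted names, answer everything else normally."""
    if hostname.lower().strip(".") in BLOCKED_HOSTNAMES:
        # Real deployments answer with NXDOMAIN or the address of a block
        # page; here we simply raise to signal that the name is filtered.
        raise PermissionError(f"{hostname} is blocked by the filtering resolver")
    # Ordinary lookup: only the name is inspected, never the content itself.
    return socket.gethostbyname(hostname)


if __name__ == "__main__":
    print(filtered_resolve("example.com"))            # resolves normally
    try:
        filtered_resolve("illegal-gambling.example")  # blocked by name only
    except PermissionError as err:
        print(err)
```

Because only the name lookup is intercepted, such a filter neither reads content nor requires breaking encryption, but it is easily circumvented by switching to a different (for example encrypted, over-the-top) resolver, which is the tension between network operators and application makers described above.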

 

References

 

European Parliament (EC) Regulation (EU) 2015/2120 of the European Parliament and of the Council of 25 November 2015 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) No 531/2012 on roaming on public mobile communications networks within the Union (Text with EEA relevance) [ 2015]

 

R. Barnes, A. Cooper, D. Thaler, E. Nordmark, ' Technical Considerations for Internet Service Blocking and Filtering' (IETF 2016) < https://tools.ietf.org/html/rfc7754>

 

Borgolte, Kevin and Chattopadhyay, Tithi and Feamster, Nick and Kshirsagar, Mihir and Holland, Jordan and Hounsel, Austin and Schmitt, Paul, 'How DNS over HTTPS is Reshaping Privacy, Performance, and Policy in the Internet Ecosystem' [2019] Princeton University and The University of Chicago. Available at: <https://ssrn.com/abstract=3427563>

 

European Parliament (EC) Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (Text with EEA relevance) [2019] OJ L Document 32019L0790

 

Google Transparency Report, 'Content delistings due to copyright' (Google 2020) < https://transparencyreport.google.com/copyright/overview?hl=en>

 

European Parliament and of the Council (EC) Regulation (EU) 2018/302 of the European Parliament and of the Council of 28 February 2018 on addressing unjustified geo-blocking and other forms of discrimination based on customers' nationality, place of residence or place of establishment within the internal market and amending Regulations (EC) No 2006/2004 and (EU) 2017/2394 and Directive 2009/22/EC [2018] OJ L Document 32018R0302

 

 

 

36. (Digital) Gatekeeper

  1. Digital gatekeepers are the equivalent of the guardians of critical infrastructure (e.g. highway, railway, utilities and telecommunications) in the digital world. Thus, the focal element of this definition is the critical nature of the services they provide, in enabling both the enjoyment of services that are considered essential for digital citizenship, and the provision of such services by third parties.
  1. There are, however, competing views on the essential elements of this term. For instance, one of the first users of the term in the legal domain relies on the four criteria identified by Reinier Kraakman, who focuses on requirements that regulators should meet before designating an entity as a gatekeeper, specifically “(1) serious misconduct that practicable penalties cannot deter; (2) missing or inadequate private gatekeeping incentives; (3) gatekeepers who can and will prevent misconduct reliably, regardless of the preferences and market alternatives of wrongdoers; and (4) gatekeepers whom legal rules can induce to detect misconduct at reasonable cost”.
  1. Another pioneer in the area, Emily Laidlaw, defines gatekeeper power as a function of the impact on participation in democratic culture, which in turn depends on: (1) whether the information has democratic significance; and (2) whether the communication occurs in an environment more closely akin to a public sphere. As a result, she identifies two different categories: Internet gatekeepers, which are those gatekeepers that control the flow of information; and internet information gatekeepers, which, as a result of this control, impact participation and deliberation in democratic culture.
  1. More recent scholarship seems to accentuate, rather than reconcile, these divergences. For instance, Thomas Kadri uses “digital gatekeepers” in a less metaphorical sense, referring to the property owners that may permit and restrict access to their websites much like landowners may do with private land in the real world. In this sense, he discusses how cyber-trespass law empowers them with legal rights of inclusion and exclusion over information on websites.
  1. By contrast, Rory Van Loo calls platforms the “New Gatekeepers” to describe how administrative agencies increasingly conscript them to “perform the duties of public regulator” and police other businesses. Similarly, Danielle Citron defines a special role for “digital gatekeepers” in preventing online hate, referring to entities that “have substantial freedom to decide whether and when to tackle” harms like cyber-harassment by deciding what content appears on their websites.
  1. Adopting a more media-focused approach, Eli Pariser has invoked the gatekeeper language to describe how platforms exercise editorial control over the news and information we consume, replacing the “old gatekeepers” that ran traditional broadcast and print media. Helberger and others reinforce this understanding, focusing on the control of critical resources, rather than on access to and supply of information, as a measure of platforms’ ability to affect user choices and diversity of exposure.
  1. More recently, we have seen a resurgence of the concept of gatekeeper, especially within the realm of competition law, as a threshold that triggers more stringent scrutiny for intervention. Recent reports on proposed changes to the existing framework of competition law for the digital age use similar terms, such as:
  • bottleneck power, where consumers primarily single-home and rely upon a single service provider, which makes obtaining access to those consumers for the relevant activity by other service providers prohibitively costly (EU Special Advisers' Report)
  • intermediation power, linked to having “unavoidable trading partner” status (Competition 4.0 Report)
  • strategic market status or competitive gateway, i.e. being in a position to exercise market power over a gateway or bottleneck in a digital market, where they control others’ market access. The Furman report focuses on three main variables: (i) the power to control access to certain goods and services and charge high access fees; (ii) the power to manipulate rankings or the prominence of a given good/service; and (iii) the power to control reputations. It also stresses how the concept of “Significant Market Power” that exists in telecom markets can provide some references on how to think about strategic market status in digital markets. Finally, the CMA in its Digital Markets Report added that, for platforms funded by digital advertising, some of the criteria should include measures of shares of supply in consumer-facing markets, reach across consumers, share of digital advertising revenues, control over the rules or standards which apply in the market, and the ability to obtain and control unique datasets.

 

References

 

Thomas E. Kadri, 'Digital Gatekeepers' [2020] Texas Law Review, Vol. 99 (forthcoming).

 

Danielle Citron, 'Hate Crimes in Cyberspace' [2014] Harvard University Press.

 

Rory van Loo, 'The New Gatekeepers: Private Firms as Public Enforcers' [2020] 106 Virginia Law Review 467.

 

Emily Laidlaw, 'A Framework for Identifying Internet Information Gatekeepers' [2010] International Review of Law, Computers & Technology, Vol. 24, No. 3, 263, 276.

 

Helberger, Natali, Kleinen-von Königslöw, Katharina, and van der Noll, Rob, 'Regulating the New Information Intermediaries as Gatekeepers of Information Diversity' [2015] Info, Vol. 17, No. 6, 50, 71.

 

Reinier H. Kraakman, 'Gatekeepers: The Anatomy of a Third-Party Enforcement Strategy' [1986] 2 J.L. ECON. & ORG. 53, 53 & n.1.

 

Jonathan Zittrain, Harvard Journal of Law & Technology, Vol. 19, No. 2 (Spring 2006).

 

Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer, Competition Policy for the Digital Era (1st, European Commission, Luxembourg 2019)

 

Stigler Center News, 'Stigler Committee on Digital Platforms: Final Report' (Chicago Booth 2019) <https://www.chicagobooth.edu/research/stigler/news-and-media/committee-on-digital-platforms-final-report>

 

Competition and Markets Authority, 'Online platforms and digital advertising market study' (UK Government 2020) <https://www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study>

 

Commission Competition Law 4.0, 'A New Competition Framework for the Digital Economy' (BMWi 2020)

 

37. Governance

  1. The concept of governance offers a ‘decentred’ perspective on regulation, which does not emanate solely from the state but instead emerges from (complex, interactive) constellations of public and private stakeholders. In the words of Julia Black, “‘[g]overnance’ is a much debated term, but most definitions revolve around the observation that both public and private actors are involved in activities of steering or guiding ‘the governed’ in ways that may or may not be interrelated.” (Black, 2008) Narrower conceptions of ‘governance’ do exist, in which it remains the sole purview of the state, as do broad conceptions of ‘regulation’ which also recognize the role of private actors (Gorwa 2019).
  2. Despite these various shadings and permutations, several authors tend to see ‘governance’ as broadly synonymous with regulation, though connoting an emphasis on the role of private actors in processes of rule-making and enforcement. However, the equivalence between governance and regulation seems to be visible only in English-language literature.
  3. As noted by Belli (2016 & 2019), the use of the term governance in reference to the Internet frames the mechanisms that stimulate the interaction and association of different stakeholders in a political space where divergent ideologies and economic interests are confronted. In this context, governance can be considered as the set of processes and institutions that organize the comparison of heterogeneous ideas and perspectives and, ideally, promote the collaborative proposal of new regulatory instruments aimed at solving specific problems. By contrast, regulation can be considered as the collection of the different regulatory instruments that are the product of governance. In the view of Frison-Roche (2003), the objective of regulation is to foster equilibrium and ensure the proper functioning of complex systems, characterized by the presence of a plurality of actors, animated by divergent purposes and interests. These regulatory instruments may be contractual, such as the terms and conditions (Venturini et al. 2016) defining the rules for the use of web platforms (Belli & Zingales 2017), mobile applications and Internet access networks, or technical, such as the algorithms, standards and protocols defining the software and hardware architectures that determine what users can and cannot do in the digital environment (Reidenberg 1998; Lessig 2006; Belli 2016).
  4. In the context of internet governance, influential accounts including those of Van Eeten & Mueller (2013) and Hofmann, Katzenbach and Gollatz (2016) have argued that governance can and should be distinguished from ‘regulation’. In the internet context, they argue, governance often consists of “rules and institutions that emerge as side effects of actors pursuing non-regulatory goals”, often the result of “complex coordination processes” (Hofmann, Katzenbach and Gollatz 2016). This broader, non-regulatory conception of governance encompasses all forms of coordination and interaction which lead to the creation of rules and principles that guide conduct, such as, for instance, standard-setting by Internet Service Providers or online platforms. However, to prevent “governance” from expanding to cover any and all forms of online interaction and coordination, they introduce a requirement of reflexivity: an act of coordination becomes an act of governance “when ordinary interactions break down or become problematic [...] and we see ourselves forced to discuss and negotiate the underlying norms, expectations and assumptions that guide our actions”.
  5. Whether it is understood as reflexive coordination, or simply as a form of regulation, “governance” has proven to be a highly relevant and widely-used concept in the context of platforms, these being private entities that play an influential role in governing online ecosystems (e.g. Van Dijck, Poell and De Waal 2018). See platform governance.

 

References

 

Belli, Luca. 2016. De la gouvernance à la regulation de l’Internet. Paris: Berger-Levrault

 

Belli, Luca. 2019. Internet Governance and Regulation: A Critical Presentation in Belli Luca and Cavalli Olga (Eds.) Internet Governance and Regulations in Latin America. (FGV Direito Rio). Available at: https://www.gobernanzainternet.org/book/book_en.pdf

 

Belli, Luca and Zingales, Nicolo (Eds.). 2017. Platform Regulations: How Platforms Are Regulated and How They Regulate Us. FGV Direito Rio. Available at: https://bibliotecadigital.fgv.br/dspace/handle/10438/19402

 

Black, Julia. 2008. “Constructing and Contesting Legitimacy and Accountability in Polycentric Regulatory Regimes”. Regulation & Governance 2(2). Available at: https://onlinelibrary.wiley.com/doi/full/10.1111/j.1748-5991.2008.00034.x

 

Hofmann, Jeannette, Christian Katzenbach and Kirsten Gollatz (2016). Between Coordination and Regulation: Finding the Governance in Internet Governance. New Media & Society 19(9).

 

Lessig, Lawrence (2006). Code and Other Laws of Cyberspace, Version 2.0. New York.

 

Reidenberg, Joel R. (1998). Lex Informatica: The Formulation of Information Policy Rules Through Technology. Texas Law Review, Vol. 76, No. 3.

Gorwa, Robert. 2019. “What is platform governance?”. Information, Communication & Society 22(6). Available at: https://gorwa.co.uk/files/platformgovernance.pdf

 

Van Dijck, Jose, Thomas Poell and Martijn de Waal. 2018. The Platform Society: Public Values In A Connective World. New York: Oxford University Press.

 

Van Eeten, Michel and Milton Mueller (2013). Where is the governance in Internet governance? New Media & Society 15(5).

 

Venturini, Jamila et al. 2016. Terms of Service and Human Rights: An Analysis of Online Platform Contracts. Revan, in collaboration with the Council of Europe and FGV Direito Rio. Available at: https://bibliotecadigital.fgv.br/dspace/handle/10438/19402

 

38. Harassment

  1. This entry discusses: (I) a brief history of the concept; (II) the notion of harassment and its variations – harassment in the workplace, in the streets, online, etc.; (III) the concept of online harassment or cyberharassment; (IV) possible regulatory frameworks.
  1. As MacKinnon (1986) has pointed out, harassment itself, as an act, is not a novelty in the lives of women. The real novelty was the initiative of feminist lawyers in the 1980s to typify it as sex discrimination under Title VII of the Civil Rights Act of 1964, which made it possible for women to bring cases of abuse and harassment in the workplace to the courts. At the time, it was seen as a "feminist invention" because those types of male conduct were deeply naturalized in society. Currently, its deconstruction is still an issue. Nevertheless, women have achieved legal recognition (Honneth, 1996) and are able to demand reparation and have other claims fulfilled.
  1. Naming this experience of suffering and turning it into a matter of litigation has not only given legitimacy to the victims' demands but has also opened doors for further echoes of abuses that minorities face in their everyday lives. Such is also the case of street harassment, against which global movements grew markedly in the 2010s, thanks to transnational feminist awareness-raising campaigns that use the internet as their main channel (Kearl, 2015). Harassment in public spaces has a fundamental characteristic which differentiates it from harassment in workplaces: the perpetrators are almost always anonymous to the victims, which in most cases makes it difficult for punitive institutions to address it, or even for the law to typify it properly. As a common characteristic, harassment in the workplace and in the street are seen by the literature (MacKinnon, 2018) as a form of power expression by privileged sectors against minorities, taking advantage of situations of vulnerability which exist because of several inequalities.
  1. As a means of communication—currently with more centrality than ever—the Internet is not free of inequalities that permeate society and generate abuse. Online harassment or cyberharassment, as pointed out by Mary-Anne Franks (2011) “has existed as long as the internet itself has existed” (p. 678). While some people “trolled for lulz”, more insidious types target Internet users with violent threats, doxxing and — as we saw all-too-clearly in the 2016 U.S. election and in the 2018 Brazilian election — state-sponsored mass manipulation.
  1. Franks (2011) differentiates cyberharassment from mere insults or juvenile behavior by pointing out that it targets, in most cases, individuals who belong to subordinated groups and affects their lives deeply. As a study (entitled “O Reino Sagrado da Desinformação”, or “The Sacred Kingdom of Disinformation” [2019] in free translation) developed by the Brazilian independent news organization “Gênero e Número” has pointed out, the conservative agenda has been led by far-right sectors on Twitter, aiming to manipulate public opinion with disinformation. Anti-feminist campaigns are often led by articulated networks of masculinists, conservatives and radical Christian sects – Marwick and Caplan (2018) named this the "manosphere".
  1. Online harassment occurs through several techniques, as described by Jhaver et al. (2018, p. 15), and targets victims who think differently from the status quo (Fladmoe; Nadim, 2017); women and particularly women of color and those from marginalized groups are more frequently and more intensely targeted with harassment. Online harassment is an umbrella term that refers to a set of specific, damaging behaviors and tactics. Tactics include, but aren’t limited to, coordinated attacks, cyberstalking, dogpiling, dog-whistling, doxxing, vile and hateful comments, mob harassment and more. Perpetrators of harassment often target people on multiple platforms using multiple tactics at any given time.
  1. Technology facilitates abuse on many levels. Kadri et al. (2020, p. 1084) showed that Facebook’s “People You May Know” feature made it possible for a man to track and harass a woman he had gone on a date with a year earlier. Although in most cases online harassment is perpetrated by users who take advantage of internet infrastructure in several ways to harm victims (Jhaver, 2018), unlike street harassment it may be possible to address and combat it with the help of tech design. This means that since "code is law" (as Lessig, 2006, pointed out), digital abuse can be tackled by technology (Kadri, 2020).
  1. In MacKinnon’s (2017) and Franks’ (2011) terms, a possible path for tackling online harassment would involve (1) typifying it, which would make it possible for victims to give a name to their suffering and acknowledge it as violence; and (2) producing a denaturalization movement in society; or, in Kadri’s terms (2020, p. 1085), (3) confronting it through the development of “empathetic networks”, by diversifying the staff of big tech companies throughout their departments, and by consulting experts and victims before and during the development of a tech product.

 

References

 

Fladmoe, A., & Nadim, M. (2017). Silenced by hate? Hate speech as a social boundary to free speech. Boundary Struggles. Contestations of Free Speech in the Public Sphere. Oslo: Cappelen Damm Akademisk, 45-76.

 

Franks, Mary Anne. (2011) Sexual Harassment 2.0. Md. L. Rev. v. 71, pp. 655.

 

Gênero e Número. (2019). O Reino Sagrado da Desinformação. Available at: http://www.reinodadesinformacao.com.br

 

Honneth, A. (1996). The struggle for recognition: The moral grammar of social conflicts. MIT Press.

 

Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online harassment and content moderation: The case of blocklists. ACM Transactions on Computer-Human Interaction (TOCHI), v. 25, n. 2, pp. 1-33. Available at: https://dl.acm.org/doi/abs/10.1145/3185593

 

Kadri, T. (2020). Networks of Empathy. Utah Law Review. Available at SSRN: https://ssrn.com/abstract=3638394

 

Kearl, H. (2015). Stop global street harassment: Growing activism around the world. ABC-CLIO.

 

Lessig, L. (2006). Code: Version 2.0. New York.

MacKinnon, C. (1986). Sexual harassment. New York: Petrocelli.

MacKinnon, C. A. (2017). Butterfly politics. Harvard University Press.

 

Marwick, A. E., & Caplan, R. (2018). Drinking male tears: Language, the manosphere, and networked harassment. Feminist Media Studies, v. 18, n. 4, pp. 543-559.

 

Waldman, Ari E. (2015). Images of Harassment: Copyright Law and Revenge Porn. Federal Bar Council Quarterly, v. 23, 15 (Sept./Oct./Nov. 2015). Available at SSRN: https://ssrn.com/abstract=2698720

39. Harm (online harm)

  1. Harm is the result of words, actions (or even inactions) that cause physical, emotional or psychological damage to someone, including violence, defamation, or economic loss. This extends to potentially non-tangible damage, including causing a person or group of people to fear for their physical, emotional or psychological safety, experience anxiousness, limit their speech, feel intimidated in their personal or professional life, or worry for their personal or professional reputation.
  1. Harm can also scale from the personal to societal, cultural and political realms.
  1. A UK government white paper (UK Government, 2020), released in 2019 and updated in early 2020, describes “harmful content or activities” categorized into harms with a “clear” definition (e.g. child sexual exploitation and abuse, terrorist content), harms with “less clear” definitions (e.g. cyberbullying, coercive behavior, intimidation, disinformation), and content or activities that are harmful if underage children are exposed to them (e.g. children accessing pornography). However, this leaves ample room for interpretation about what “harm” actually means and whom these “clear” or “less clear” content or activities harm, leaving the content or activity to stand on its own, for the reader to interpret as they choose.
  1. Platforms have also used the word “harm” as an outcome of content and activity on their platforms although, again, harm isn’t always defined clearly and can be widely debated among users. For example, Twitter’s Trust and Safety team uses the term (“offline harm”, “type of potential harm”) when announcing new policies (e.g. a 2020 announcement to remove QAnon content, Twitter 2020), but harm is then contested. Some users see harm to “freedom of speech” as outweighing potential “offline harms,” while others may appreciate the recognition that particular content or activity causes harm.

 

References

 

Joint Ministerial Foreword, 'Online Harms White Paper' (UK Government 2020) <https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper#the-harms-in-scope>

 

Twitter, 'Thread on Twitter Safety' (Twitter 2020) <https://twitter.com/TwitterSafety/status/1285726277719199746?s=20>

 

40. Hash/Hash database

  1. A hash is a value produced by a hash function, which maps a piece of data to a short, effectively unique identifier that can be stored and looked up in a hash table or database; hashes are used for several purposes. With respect to content moderation and platform governance, a hash is akin to a digital fingerprint derived from a piece of multimedia (photos, video, etc.): it provides a unique identifier, enables that content to be recognized across the internet, and allows the search for, and removal of, the content associated with the hash to be automated.
  1. Hash databases enable the sharing of these unique identifiers, or hashes, across platforms without having to share the content itself. Hashing enables coordinated action, such as content takedown, and allows companies to share information about content deemed unacceptable for a given platform across different services (a minimal illustration follows after this list). Hashing technology such as PhotoDNA (Microsoft, n.d.) has been used to combat the spread of child pornography, terrorist content, and other unwanted or illegal content, such as extremist content.
  1. In 2009, Microsoft and Dartmouth College launched (Gregoire, 2015) PhotoDNA to help combat the trafficking and sexual exploitation of children, and in 2018 (Langston, 2018) expanded its use to video. The hash database is provided for free to law enforcement and civil society partners, and overseen (Microsoft, n.d.) by the National Center for Missing & Exploited Children (NCMEC) in the United States.
  1. In 2016, Facebook, together with Google and Microsoft, created a hash database of ISIS videos to coordinate the removal of terrorist content. This collaboration formed the basis for the creation of the Global Internet Forum to Counter Terrorism (GIFCT), which grew up around the hash database to include dozens of companies that coordinate around content removal, and spun off into a stand-alone organization in mid-2020. Critics have raised concerns about the opaque nature of this collaboration and the failure of the companies involved to maintain a database or other form of access to affected content that researchers and independent auditors could review and study. Although founding companies said the hash database would only include Al Qaeda and ISIS-related propaganda, in the wake of the 2019 Christchurch massacre of Muslims in New Zealand there was pressure to expand the remit of the database to include other forms of extremism. As of 2018 there were more than 200,000 pieces of content in the database, according to the GIFCT transparency report (GIFCT, 2020).
  1. Critics of the GIFCT and the approach to coordinated content takedown via hash databases express concern about the potential for the technology and approach to be co-opted to eradicate other types of content, such as hate speech or misinformation. It is also not entirely clear under data protection law how content associated with such hash databases ought to be saved, categorized, and made available for independent, third party oversight and research.
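To make the workflow concrete, the sketch below illustrates how a shared hash database lets a platform check uploads against known fingerprints without exchanging the underlying content. It is illustrative only, not any platform's actual implementation: it uses an exact cryptographic hash (SHA-256) for simplicity, whereas systems like PhotoDNA rely on perceptual hashes that also match re-encoded or slightly altered copies. All names and sample entries are hypothetical.

```python
import hashlib
from typing import Set

def compute_hash(content: bytes) -> str:
    """Derive a fixed-length fingerprint from raw media bytes.

    SHA-256 only matches byte-identical files; perceptual hashes such as
    PhotoDNA are designed to survive resizing or re-encoding.
    """
    return hashlib.sha256(content).hexdigest()

# A shared hash database: platforms exchange these fingerprints,
# never the underlying content itself.
shared_hash_db: Set[str] = {
    compute_hash(b"<bytes of previously identified content>"),  # hypothetical entry
}

def moderate_upload(upload: bytes) -> str:
    """Check an incoming upload against the shared hash database."""
    if compute_hash(upload) in shared_hash_db:
        return "block"   # fingerprint matches known prohibited content
    return "allow"       # no match; the upload passes this particular check

print(moderate_upload(b"<bytes of previously identified content>"))  # -> block
print(moderate_upload(b"some new, unrelated upload"))                # -> allow
```

Because only fingerprints circulate, companies can coordinate takedowns of content that is itself illegal to store or transmit, which is precisely what makes independent auditing of such databases difficult.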

 

References

 

Microsoft, 'PhotoDNA' (Microsoft, n.d.) <https://www.microsoft.com/en-us/photodna>

 

Courtney Gregoire, 'First Microsoft PhotoDNA update adds Linux and OS X support, detections up to 20 times faster' (Microsoft 2015). Available at: https://blogs.microsoft.com/on-the-issues/2015/12/18/first-microsoft-photodna-update-adds-linux-and-os-x-support-detections-up-to-20-times-faster/

 

Jennifer Langston, 'How PhotoDNA for Video is being used to fight online child exploitation' (Microsoft 2018) <https://news.microsoft.com/on-the-issues/2018/09/12/how-photodna-for-video-is-being-used-to-fight-online-child-exploitation/>

 

GIFCT, 'GIFCT Transparency Report' (Global Internet Forum to Counter Terrorism 2020) <https://www.gifct.org/transparency/>

41. Hate Speech

1. This entry aims to (I) present the definition of hate speech both according to the human rights law and specialized literature; (II) draw a distinction between hate speech and harm; (III) and highlight the possible solutions to address this issue, in a multistakeholder approach.

2. Hate speech isn't a new phenomenon in society, and neither are the attempts to address and combat it. As the International Covenant on Civil and Political Rights (ICCPR, United Nations, 1966) already sustained in the mid-1960s, right after the protection of free speech (art. 19) comes the call for the prohibition of hate speech (art. 20), naming it as "[the] advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence". Subsequently, many other legal instruments tried to encompass "new" forms of discrimination, such as Recommendation No. R (97) 20 of the Committee of Ministers of the Council of Europe of 30 October 1997 on "hate speech", which includes other vulnerable groups in the definition. This document also makes liable not only the individuals who advocate in favor of hate speech, but also those who act to "spread, incite, promote or justify" any content related to "racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility towards minorities, migrants and people of immigrant origin” (Council of Europe, 1997). These efforts show the commitment of several relevant institutions to cultural change, and have been incorporated into domestic legislation worldwide.

3. In spite of the advantages of general definitions capable of encompassing, in a universalist approach, all of what is considered hate manifestation, the absence of objectivity leaves considerable room for judges and punitive institutions when applying the legislation. This can lead to several issues for law enforcement. In a very similar way, when it comes to online hate speech, there is considerable room for social network companies to define what they consider hate speech and how to moderate content in the online environment.

4. Considering online forums as the new public sphere, platforms are being called on to assure the equal participation of users, to combat online violence and to enforce content moderation. Moreover, legislators are drafting domestic laws (such as the German NetzDG and the Brazilian Fake News Draft Bill) so that platforms commit to the maintenance of a safe digital sphere and the protection of democratic values. On one side of the discussion, some groups advocate for the prevalence of free speech, as framed in the US First Amendment, above other principles. These groups tend to characterize platforms' initiatives to prohibit hate speech as censorship.

5. On the other side of the discussion, academics such as Mary Anne Franks (2019) and Danielle Citron argue that the debate around hate speech should be centered on how some groups in society have been historically silenced and are powerless against several violations – such as women, non-white men and other minorities (Citron, 2014). In this sense, returning to the ICCPR, in their view institutions should make efforts to guarantee both the protection of free speech and the combating of hate speech.

6. Although problematic publications shared in the online environment often fall into a "grey zone", some initiatives could be adopted, in a multistakeholder approach, in order to tackle online hate speech:

(I) From the companies' point of view:

7. Platforms should be open to the community, in the sense that users are able to make their own contributions – either by engaging in conversations with offenders, by flagging content that is understood as offensive, or by reporting it for ex-post removal by moderators;

8. Social media companies should establish specific guidelines and community standards and perform content moderation with human and automated techniques, following the basic principles of due process and transparency in the removal procedures.

(II) From the institutions' point of view:

9. The Legislative branch should develop domestic legislation and ordinances that follow international norms and principles on addressing gender-based or racial discrimination, centered on the victims' perspectives;

10. The Executive branch should make and support policies and campaigns that aim to eradicate all forms of prejudice (in schools, in public places, in work spaces, online and in the State institutions);

11. The Judiciary should establish jurisprudence centered on the victims' perspectives in order to define what type of behaviors are considered hate speech.

12. International law standards – such as human rights conventions – regarding the definition of hate speech, and also the design of procedural rules to assure due process in content moderation, may significantly help to give more transparency to the removal process of problematic content. Improving public awareness and consciousness raising through normative standards can help to engender a safer and healthier online environment.

 

References

 

Citron, Danielle Keats. ‘Hate crimes in cyberspace’ [2014] Harvard University Press.

Franks, Mary Anne. ‘The Cult of the Constitution’ [2019] Stanford University Press.

Recommendation No. R (97) 20 of the Committee of Ministers to Member States on “Hate Speech”, 30 October 1997.

OHCHR UN, ‘International Covenant on Civil and Political Rights’. Available at: https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx

 

42. Human exploitation

  1. There is no agreed definition of what human exploitation is; however, international law lists the types of illegal activities related to this practice. Elaborated in 2000 within the United Nations, the Palermo Protocol is the international legal instrument that deals with human trafficking (Allain, 2013).
  1. The Palermo Protocol (UNGA, 2000) defines human trafficking by a series of actions (recruitment, transport, transfer, accommodation or reception) that may be carried out by different means (threat, use of force, other forms of coercion, abduction, fraud, deception, abuse of authority, taking advantage of the situation of vulnerability of others, delivery or acceptance of benefits - pecuniary or not - for obtaining the consent of others over whom one has authority) for the purpose of exploitation, whatever it may be, of a person.
  1. Despite classifying some practices, the list presented is not exhaustive, and other forms of exploitation can and should also be recognized for the purpose of trafficking.
  1. That said, it can be understood that human trafficking and exploitation do not have a singular meaning; rather, they cover a number of forms of human exploitation, which can appear as: sexual exploitation, when someone is deceived, coerced or forced to take part in sexual activity; labor exploitation, when people are coerced to work for little or no remuneration, often under threat of punishment; domestic servitude, when there are restrictions on the domestic worker's movement and they are forced to work long hours for little pay; forced marriage, when a person is threatened with physical or sexual violence or placed under emotional or psychological distress in order to force them to marry; forced criminality, when somebody is forced to carry out criminal activity through coercion or deception; child soldiering, when children are used in combat and made to commit acts of violence, or used in auxiliary roles such as informants or kitchen hands; and organ harvesting, when an organ is removed, with or without consent, to be sold, often as part of an illegal trade.
  1. Among the major online platforms, Facebook has an extensive section of its Community Standards dedicated to human exploitation, where, in addition to the activities mentioned above, it also officially condemns content related to the sale of children for illegal adoption, orphanage trafficking and orphanage voluntourism. Platforms usually prohibit content geared towards the recruitment of potential victims, the facilitation of human exploitation, and the promotion, depiction, or advocacy of these criminal activities.
  1. According to the United Nations Office on Drugs and Crime (2018), millions of women, men and children are forced to work in inhumane conditions on farms, in clothing warehouses, on board fishing boats, in the sex industry or in private homes, generating billions of dollars a year. Civil society organizations have been warning that social networks are increasingly constituting a recruitment platform for human exploitation, being a tool to identify and contact potential victims.
  1. Other internet-based services are also useful to abusers, such as anonymous online payments and encrypted messaging. On the other hand, technologies have also been used to combat trafficking, notably text analysis tools that identify writing patterns in sexual ads, and facial recognition mechanisms (Mzezewa, 2017).
  1. Finally, it is important to note that there has lately been a discussion as to whether social media content moderators suffer a form of human exploitation in jobs where, every day, they have to view violent content and decide whether it should remain online. Even though there has been little formal study of the impact of this kind of routine, in which some people spend eight to nine hours reviewing suicide-related, harmful and sexual content in order to keep social media platforms safe from it, a number of workers have developed post-traumatic stress disorder as a result of this activity, and the situation is becoming more and more common and should not go unnoticed (Cardoso, 2019).

 

References

 

Allain, Jean. (2013). Slavery in International Law: Of Human Exploitation and Trafficking - The Introduction. Available at: <https://www.researchgate.net/publication/286456166_Slavery_in_International_Law_Of_Human_Exploitation_and_Trafficking_--_The_Introduction>

 

Cardoso, Paula (2019). Precariado algorítmico: o trabalho humano fantasma nas maquinarias da inteligência artificial. Media Lab UFRJ. Available at: <http://medialabufrj.net/blog/2019/09/dobras-38-precariado-algoritmico-o-trabalho-humano-fantasma-nas-maquinarias-da-inteligencia-artificial/>

 

Mzezewa, Tariro (2017). ‘Hacks That Help: Using Tech to Fight Child Exploitation’. Available at: <https://www.nytimes.com/2017/11/24/style/sex-trafficking-hackathon.html>

 

UN General Assembly (2000). Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children, Supplementing the United Nations Convention against Transnational Organized Crime, 15 November 2000. Available at: <https://www.refworld.org/docid/4720706c0.html>

 

UNODC (2018). ‘Global Report on Trafficking in Persons’. Available at: <https://www.unodc.org/documents/data-and-analysis/glotip/2018/GLOTiP_2018_BOOK_web_small.pdf>

 

43. Human Review

  1. Human review is a term specific to the field of content moderation, or how digital platforms manage, curate, regulate, police, or otherwise make and enforce decisions regarding what kind of content is permitted to remain on the platform, and with what degree of reach or prominence, and what content must be removed, taken down, filtered, banned, blocked, or otherwise suppressed. Human review refers to the part of the content moderation process at which content that users have flagged (see “flagging” and “coordinated flagging”) as offensive is reviewed by human eyes, as opposed to assessed by an algorithmic detection or takedown tool, or other form of automated content moderation. These human reviewers may be the platform company’s in-house staff, more frequently the case at smaller platform companies; the platform’s own users who have voluntarily stepped into a moderator role, such as is the case with Reddit’s subreddit moderators and Wikipedia’s editors; or low-paid third-party, external contractors that number up to the thousands or tens of thousands, operating under poor working conditions in content moderation “factories”, as relied on by larger platforms such as Facebook and Google’s YouTube (Caplan, 2018). (These three roles of human reviewers correspond to Robyn Caplan’s categorization of content moderation models, namely, the artisanal, community-reliant, and industrial approaches, respectively) (Caplan, 2018).
  1. Human review is also used to check lower-level moderators’ decisions as well as to check that automated or algorithmic content moderation tools are making the correct decisions (Keller, 2018), such as to “correct for the limitations of filtering technology” (Keller, 2018). As Daphne Keller points out, one danger of combining human review with algorithmic filters, though also a reason to ensure the continued involvement of human review in content moderation, is that “once human errors feed into a filter’s algorithm, they will be amplified, turning a one-time mistake into an every-time mistake and making it literally impossible for users to share certain images or words.” (Keller, 2018).
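As an illustration of the interplay between automated tools and human review described above, the following sketch shows one common pattern: an automated classifier acts only on high-confidence cases and routes uncertain ones to a human review queue. The thresholds, field names and routing logic are hypothetical, not any particular platform's implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Post:
    post_id: int
    text: str
    violation_score: float  # output of a hypothetical automated classifier (0.0 to 1.0)

# Hypothetical thresholds: only high-confidence cases are handled automatically.
AUTO_REMOVE_THRESHOLD = 0.95
AUTO_ALLOW_THRESHOLD = 0.30

def triage(posts: List[Post]) -> Dict[str, List[int]]:
    """Route each post to an automated action or to the human review queue."""
    decisions: Dict[str, List[int]] = {"removed": [], "allowed": [], "human_review": []}
    for post in posts:
        if post.violation_score >= AUTO_REMOVE_THRESHOLD:
            decisions["removed"].append(post.post_id)        # high-confidence violation
        elif post.violation_score <= AUTO_ALLOW_THRESHOLD:
            decisions["allowed"].append(post.post_id)        # high-confidence benign
        else:
            decisions["human_review"].append(post.post_id)   # uncertain: human eyes decide
    return decisions

queue = [Post(1, "clearly violating text", 0.99),
         Post(2, "harmless text", 0.05),
         Post(3, "ambiguous text", 0.60)]
print(triage(queue))  # -> {'removed': [1], 'allowed': [2], 'human_review': [3]}
```

Keller's warning applies directly to the first branch: if erroneous human decisions are fed back to retrain the classifier, a one-time mistake can become an every-time mistake.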

 

References

 

Robyn Caplan, ‘Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches’ [14 November 2018], Data & Society. Available at: <https://datasociety.net/wp-content/uploads/2018/11/DS_Content_or_Context_Moderation.pdf>

 

Daphne Keller, ‘Internet Platforms: Observations on Speech, Danger, and Money’ [2018], Hoover Institution. Available at: <https://www.hoover.org/sites/default/files/research/docs/keller_webreadypdf_final.pdf>

44. Incitement to violence

  1. Traditional incitement to violence, enshrined in Article 20 of the ICCPR and criminalised in many domestic jurisdictions, has taken on new significance due to the proliferation of online platforms and the growth of global terrorism (Bayefsky and Blank 2018). Social media and online platforms are a way in which to ‘amplify’ the harm of incitement to violence (Avni 2018, 30–31). While social media intermediaries provide a mechanism by which those inciting violence can access a broader and more diverse audience, the prohibition on incitement to violence is not meaningfully enforced (Matas 2018, 150). This can be explained by the difficulties democratic states face in balancing the problem of virtual hate speech with foundational principles of free speech (Guiora 2018, 142). Incitement to violence is the most ‘severe’ form of online hate speech, in that it ‘threaten[s] with violence, incite[s] violent acts, and intend[s] to make the target fear for their safety’ (Alexandra Olteanu et al. 2018, 5).
  1. Various factors can lead to incitement to violence over digital platforms, including an absence of or unclear legislation on the issue, negative or stereotyped portrayal of minority groups in the media, structural inequalities in access to social media platforms, and the changing media landscape (Izsák 2015, 51–79). A modern example includes the spread of hate speech and incitement to violence via the internet in Myanmar, which played a ‘significant’ role in the Rohingya genocide (OHCHR 2014; Human Rights Council 2018, para. 74). Online service providers are beginning to acknowledge the role they play in the dissemination of material which incites violence; Facebook’s Community Standards claim to ‘remove language that incites or facilitates serious violence’ (Facebook 2020, pt. 1), Google’s User Content and Conduct Policy prohibits ‘Hate Speech’, which it defines as ‘content that promotes or condones violence’ against an individual or group ‘on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalisation’ (Google 2020, para. 3), and Twitter’s violent threats policy prohibits ‘statements of an intent to kill or inflict serious physical harm on a specific person or group of people’ (Twitter 2019).
  1. In the online age, the purpose of incitement to violence has shifted; what used to be ‘justification for action and recruitment to prepare for action’ has now become a simple ‘call to action’, and service providers must adequately monitor and protect against a risk of harm that they themselves have facilitated (Matas 2018, 163). As incitement to violence is an inchoate offense, in that harm does not need to be actioned but merely called for, the freedom which social media platforms provide to express opinions inherently inflates the possibility of violations. However, given the new and larger audiences accessible to those promoting violence, these platforms also increase the likelihood of incitement causing violence. Additionally, the growing prevalence of cyberbullying, virtual sexual harassment, and online stalking makes clear that the act of violence itself in the age of online platforms is ‘still developing and not univocal’ (Dubravka Šimonović 2018, 5).

 

References

 

Alexandra Olteanu, Carlos Castillo, Jeremy Boy, and Kush R. Varshney. 2018. The Effect of Extremist Violence on Hateful Speech Online. International AAAI Conference on Web and Social Media; Twelfth International AAAI Conference on Web and Social Media. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17908/17013.

 

Avni, Micah. 2018. Incitement to Terror and Freedom of Speech. Incitement to Terrorism, 30–36. https://doi.org/10.1163/9789004359826_005.

 

Bayefsky, Anne F., and Laurie R. Blank, eds. 2018. Incitement to Terrorism. Brill Nijhoff. https://brill.com/view/title/36109.

 

Dubravka Šimonović. 2018. ‘Report of the Special Rapporteur on Violence against Women, Its Causes and Consequences on Online Violence against Women and Girls from a Human Rights Perspective’. A/HRC/38/47. Human Rights Council. https://www.ohchr.org/EN/HRBodies/HRC/RegularSessions/Session38/Documents/A_HRC_38_47_EN.docx.

 

Facebook. 2020. Community Standards. https://www.facebook.com/communitystandards/violence_criminal_behavior.

 

Google. 2020. Terms and Policies - Currents Help. https://support.google.com/googlecurrents/answer/9680387?hl=en.

 

Guiora, Amos N. 2018. Inciting Terrorism on the Internet: The Limits of Tolerating Intolerance. Incitement to Terrorism, March, 135–49. https://doi.org/10.1163/9789004359826_016.

 

Human Rights Council. 2018. Report of the Independent International Fact-Finding Mission on Myanmar. 39th Session. https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/274/54/PDF/G1827454.pdf?OpenElement.

 

Izsák, Rita. 2015. ‘Report of the Special Rapporteur on Minority Issues’. A/HRC/28/64. https://undocs.org/pdf?symbol=en/A/HRC/28/64.

 

Matas, David. 2018. Combating Incitement to Violence on the Internet through Service Provider Action. Incitement to Terrorism, March, 150–64. https://doi.org/10.1163/9789004359826_017.

 

OHCHR. 2014. Myanmar: UN Expert Warns against Possible Backtracking, Calls for More Public Freedoms. https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=14910&LangID=E.

 

Twitter. 2019. Violent Threats Policy. https://help.twitter.com/en/rules-and-policies/violent-threats-glorification.

 

45. Inclusive Journalism

  1. In increasingly diverse societies, whether labelled 'developed democracies' or referred to as countries in transition to democracy, the need for fair, accurate and responsible journalism stands at the top of the requirements for rebuilding media for a democratic future. The process of reforming the media system after a conflict, or after a long period without democratic institutions, restates this need in different countries, acknowledging that such reform would be impossible without reforming the journalistic sector of the media. Journalism is a vehicle for public conversation and civic action, and strengthening journalism training and education contributes to strengthening its value for society.
  1. The UNESCO Media Development Indicators, tailored to identify how media reflect the diversity of society in order to fulfill their democratic potential, underline the importance of the presence of minority groups in mainstream media. The significance of diversity has been recognized by other key freedom of expression organizations, which believe that freedom of expression should be enjoyed by all citizens regardless of their race, ethnicity, faith, religion, language, gender, social status, (dis)abilities or sexual orientation. In 2007 the UN Special Rapporteur on Freedom of Opinion and Expression, the OSCE Representative on Freedom of the Media, the Organization of American States Special Rapporteur on Freedom of Expression and the ACHPR (African Commission on Human and Peoples' Rights) Special Rapporteur on Freedom of Expression and Access to Information made a joint Declaration on Promoting Diversity in the Broadcast Media. The declaration stressed "the fundamental importance of diversity in the media to the free flow of information and ideas in society, in terms both of giving voice to and satisfying the information needs and other interests of all, as protected by international guarantees of the right to freedom of expression". The United Nations' notion of an inclusive society, a 'society for all', overrides differences of race, gender, class, generation and geography, and ensures inclusion, equality of opportunity and the capability of all members of society. Among the prerequisites for an inclusive society (respect for all human rights and freedoms, the rule of law, the existence of a strong civil society, equal access to public information and tolerance for cultural diversity), education plays a critical role. It provides opportunities to learn the history and culture of one's own and other societies, which cultivates understanding and appreciation of other societies, cultures and religions. The 'inclusive society' implies a radical set of changes through which society restructures itself to embrace all of its members.
  1. Journalism that is able to relate to the idea of an inclusive society can be called 'inclusive journalism'. Inclusive journalism challenges the status quo in order to prevent the media from intentionally or unintentionally spreading prejudice, intolerance and hatred. The idea of inclusive journalism is rooted in, and inseparable from, the political notion of inclusive democracy.
  1. Used interchangeably, 'inclusive democracy' and 'inclusive society' indicate a type of political system that goes beyond recognizing the formal equality of all individuals and involves taking actions and special measures to compensate for the inequalities of unjust social structures. Young (2002: 53) says that "democratic norms mandate inclusion as a criterion of the political legitimacy of outcomes" and distinguishes two forms of social exclusion: 'external', where groups and individuals are openly excluded from the decision-making process, and 'internal', where "the terms of discourse make assumptions some do not share, the interaction privileges specific styles of expression, the participation of some people is dismissed as out of order" (ibid).
  1. The objective of inclusive journalism, and of educating and training journalists in increasingly diverse societies, is to develop inclusive communicative competence. This ability involves reflective thinking, experience of social, political and cultural pluralism, recognition of otherness and a critical stance towards the process of constructing identities. Inclusive journalism acts as a catalyst for society to gain informed knowledge of its diverse 'self', as well as an understanding of the relationship between the individual and society. Most university journalism programs are so focused on the question of developing an academic discipline by integrating theory and practice that they dislocate journalism from its natural embeddedness in the community. Mensing notes that most university journalism programs preserve a structure of education based on the industrial model of journalism and argues that "moving the focus of attention from the industry to community networks could reconnect journalism with its democratic roots and take advantage of new forms of news creation, production, editing, and distribution" (Mensing 2011, p.16).
  1. In transition countries and post-conflict societies this dislocation could have profound consequences.
  1. The essential curriculum for inclusive journalism, based on the Media Diversity Institute's (MDI) work, comprises the following modules:
  • Developing Sensitivity to Diversity – a module that aims to foster students' understanding of the experiences of minorities;
  • How Diversity Is Reported – a traditional academic module based on standard techniques of news story content analysis, enabling students to reach an understanding of how their society's media cover diversity issues;
  • Reporting Diversity – practice based module for students to gain experience of the issues involved in covering minority affairs; and
  • Social Diversity and the Media – a standard teaching (lectures/essays) module using elements taken from a number of academic disciplines (sociology, social psychology and political science) that deal with the issue of social diversity and offer useful insights to journalism students studying theories of media power and social function.
  1. In all modules developed through the Media Diversity Institute's inclusive journalism program, the question of assessment arose as a key tool for enabling students to engage critically with social diversity and to acquire the skills necessary to conceptualize and produce a competent piece of journalism. Module assessment that combines academic and journalistic work has proved to be the model that the majority of journalism educators listed as the best way to evaluate the different qualities of a journalistic take on diversity issues. This "holistic and highly contextualized assessment" (Biggs 1999, p.152) clearly requires an active demonstration of knowledge of contemporary journalism. It deals with functional knowledge by setting up tasks that are an interesting and challenging learning experience for students, rather than serving as a judgemental instrument in academic analysis of media. The assessment, which includes both formative and summative procedures, might take different forms, as outlined in some of the modules developed within the MDI's inclusive journalism curriculum framework.

 

References:

 

Rupar, V., & Pesic, M. (2012). Inclusive journalism and rebuilding democracy. In N. Sakr, & H. Basyouni (Eds.), Rebuilding Egyptian media for democratic future (pp. 135-153). Cairo, Egypt: Aalam Al Kotob Publisher.

 

Young, Iris Marion (2002). Inclusion and Democracy. Oxford: Oxford University Press. Available at: http://www.oxfordscholarship.com/oso/public/content/politicalscience/0198297556/toc.html

 

Mensing, D. (2011). “Realigning journalism education”. In Franklin, B. and Mensing, D. (Eds.), Journalism Education, Training and Employment. New York: Routledge, pp. 15-32.

 

Biggs, J. (1999). Teaching for quality learning at university. Great Britain: The Society for Research into Higher Education and Open University Press.

46. Information Fiduciary

  1. Information fiduciaries are entities entrusted with the management of the personal information of third parties. The concept, first proposed by Balkin and Zittrain (2016), evokes an analogy with professional figures assigned fiduciary duties due to their relationship with their clients, which leads to situations of asymmetrical power. For instance, doctors, lawyers and accountants are all in a fiduciary relationship with their clients due to their superior knowledge and skills, which requires the establishment of a relationship of trust. As a result, they are bound by duties of care, loyalty and confidentiality.
  1. Accordingly, the online platforms identified as information fiduciaries would owe their customers a duty of loyalty, that is, to act in the best interests of their customers, without regard to the interests of their own business. They would also owe a duty of care, that is, to act competently and diligently to avoid harm to their customers. This means, for example, that they would not be allowed to use data for different purposes from those stated at the time of collection, and they would be required to take reasonable steps to secure any information entrusted to them. Finally, they would be bound by a duty of confidentiality, meaning they could not disclose the information entrusted to them to third parties without their customers' consent.
  1. The proposal has had some traction in Internet governance circles, leading even to the introduction in the US Senate of a bill (The “Data Care Act”) that would further specify the duties, including: the obligation to notify data breaches concerning an individual; the duty not to use data in a way that is unexpected and highly offensive to a reasonable end user; the duty not to disclose or sell personal data to third parties that do not have the same level of fiduciary duties, and to take reasonable measures to ensure that such duties are fulfilled.
  1. At the same time, the proposal provoked criticism, for one because it does not contain limits on the collection of personal data, but especially because fiduciary obligations to customers are fundamentally incompatible with the nature of publicly listed corporations (where managers are under a fiduciary duty to maximize shareholder value) and the predominant business model on the Internet (where personal data are regularly used for advertising purposes).

 

References

 

Jack M. Balkin, Jonathan Zittrain, ‘A Grand Bargain to Make Tech Companies Trustworthy’ [3 October 2016]. Available at: https://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346/

 

Khan, Lina and Pozen, David E., ‘A Skeptical View of Information Fiduciaries’ [2019]. Harvard Law Review, Vol. 133, pp. 497-541

 

47. Infrastructure

  1. Infrastructure, according to the Cambridge Dictionary's definition, is “the basic structure of an organization or system which is necessary for its operation”. More specifically, the Internet Infrastructure Coalition (2019) defines Internet infrastructure as follows:
  1. “Internet infrastructure is the physical hardware, transmission media, and software used to interconnect computers and users on the Internet. Internet infrastructure is responsible for hosting, storing, processing, and serving the information that makes up websites, applications, and content.”
  1. Namely, the physical infrastructure comprises all the equipment that transmits data through the network, such as submarine and terrestrial cables, backbones, routers, satellites, antenna towers, and even smartphones (Constantinides et al., 2018), as well as all the equipment that stores internet data, such as data centers and database servers. As for the virtual infrastructure, we have another important set of foundations, for instance, open standards (e.g. IEEE 802.11s), the Internet protocol suite (TCP/IP), the Domain Name System (DNS), and the Hypertext Transfer Protocol (HTTP); a short illustration follows after this list.
  1. A relevant trend related to infrastructure is the growing investment of online platforms in physical infrastructure, such as submarine cables. This attention to infrastructure is due to the fact that the current foundations are not sufficient to support the traffic generated by big platforms like Google, Facebook and Microsoft (Burgess, 2018). In addition to the control that such platforms already exercise in the content layer, additional control in the infrastructure layer can exacerbate problems related to competition, privacy and net neutrality.
  1. Another important problem concerns attacks on critical infrastructure, targeting end users, devices, network services and web servers. Computer emergency response teams have done a great job of addressing this issue and combating these attacks over the last decades (Bada et al., 2014). However, one type of attack that these teams do not address is physical incidents, caused by both humans and animals (Arthur, 2013; Moss, 2020). The growing threat of a physical attack should not be underestimated, as it can cause huge damage to economies and national security (Starosielski, 2019).
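To ground the distinction between physical and virtual infrastructure, the short sketch below (standard-library Python, with a placeholder host name) shows two of the virtual-infrastructure building blocks named above in action: DNS resolution of a host name, and an HTTP request carried over TCP/IP across the physical network.

```python
# Minimal illustration of virtual infrastructure in use; "example.org" is
# a placeholder host used purely for demonstration.
import socket
import urllib.request

host = "example.org"

# DNS: translate the human-readable name into an IP address.
ip_address = socket.gethostbyname(host)
print(f"DNS resolved {host} to {ip_address}")

# HTTP over TCP/IP: fetch a page from the host just resolved.
with urllib.request.urlopen(f"http://{host}/", timeout=10) as response:
    print(f"HTTP status: {response.getcode()}")
```

Every step of this exchange ultimately travels over the physical layer (cables, routers, data centers) described above, which is why damage to that layer disrupts services that users experience only as "websites".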

 

References

 

Arthur, Charles (2013). ‘Undersea internet cables off Egypt disrupted as navy arrests three’. Available at: <https://www.theguardian.com/technology/2013/mar/28/egypt-undersea-cable-arrests>

 

Bada, M., Creese, S., Goldsmith, M., Mitchell, C., & Phillips, E. (2014). Computer Security Incident Response Teams (CSIRTs): An Overview. Global Cyber Security Capacity Centre, 1-23. Available at: <http://www.elizabethphillips.co.uk/Research/CSIRTs.pdf>

 

Burgess, Matt (2018). ‘Google and Facebook are gobbling up the internet's subsea cables’ Available at: <https://www.wired.co.uk/article/subsea-cables-google-facebook>

 

Cambridge Dictionary. ‘Infrastructure meaning’. Available at: <https://dictionary.cambridge.org/dictionary/english/infrastructure>

 

Constantinides, P., Henfridsson, O., & Parker, G. G. (2018). ‘Platforms and infrastructures in the digital age.’ Available at: <http://ide.mit.edu/sites/default/files/publications/ISR%202018%20Constantinides%20Henfridsson%20Parker%20Editorial.pdf>

 

Internet Infrastructure Coalition (2019). ‘What is the Internet’s Infrastructure?’ Available at: <https://www.i2coalition.com/what-is-the-internets-infrastructure-video/>

 

Moss, Sebastian (2020). ‘How cows caused a small Google network outage’ Available at: <https://www.datacenterdynamics.com/en/news/how-cows-caused-small-google-network-outage/>

 

Starosielski, Nicole (2019). ‘Strangling the Internet’ Available at: <https://limn.it/articles/strangling-the-internet/>

48. Intermediary Liability

  1. This entry first defines what an intermediary is, and then clarifies how intermediaries may be held liable.
  1. There is a broad spectrum of actors labeled as internet intermediaries, such as internet service providers (ISPs), web hosting providers, social networks, cloud service providers, domain name registrars and search engines. Certainly, this is a non-exhaustive list, since there are several types of internet-related services, and different organizations and national laws have their own definitions and categorizations of internet intermediaries (OECD, 2010; OAS, 2011; Article 19, 2013).
  1. "Intermediary liability" refers to the legal responsibility of intermediaries regarding both the actions taken and the content generated by users of their services (MacKinnon et al., 2015). That is, this type of liability does not concern the legal responsibility related to the platform own content or other ancillary issues (eg. tax payment, labor obligations).
  1. Despite not having a binding character, the Manila Principles on Intermediary Liability (EFF, 2015) remain an important reference, used by academics studying this topic and by countries as a model for implementing fair and democratic digital policies.
  1. There are different levels of calibration for intermediary liability (Bankston et al., 2012). First, there is the strict liability model, under which intermediaries are held responsible for user-generated content. Next, the safe harbour model is more flexible: the intermediary is exempt from liability for content published by third parties, but still needs to comply with certain requirements. Lastly, there is the broad immunity model, which protects intermediaries from liability for a great variety of content posted by users.
  1. China operates the most notorious strict liability model, under which intermediaries must closely monitor all content posted by their users or risk being penalized. Brazil offers a distinctive example of the safe harbour model: its Marco Civil da Internet provides that intermediaries will only be held responsible if they do not remove content after receiving a court order, except in matters regarding copyright and revenge porn (Zingales, 2015). Regarding the broad immunity model, Section 230 of the Communications Decency Act (CDA), a U.S. law, is the most cited example of expansive protection against liability, under which intermediaries are not responsible for third-party content.

 

References

 

Article 19 (2013). ‘Internet intermediaries: Dilemma of Liability’ Available at: <https://www.article19.org/wp-content/uploads/2018/02/Intermediaries_ENGLISH.pdf>

 

Bankston, Kevin, Sohn, David, & McDiarmid, Andrew. (2012). Shielding the Messengers– Protecting Platforms for Expression and Innovation. Center for Democracy and Technology, 12. Available at: <https://cdt.org/wp-content/uploads/pdfs/CDT-Intermediary-Liability-2012.pdf>

 

Electronic Frontier Foundation (EFF) (2015). ‘The Manila Principles on Intermediary Liability Background Paper’ Available at: <https://www.eff.org/files/2015/07/08/manila_principles_background_paper.pdf>

 

MacKinnon, Rebecca; Hickok, Elonnai; Bar, Allon; and Lim, Hai-in. (2015). Fostering Freedom Online: The Role of Internet Intermediaries. Other Publications from the Center for Global Communication Studies. Available at: <http://unesdoc.unesco.org/images/0023/002311/231162e.pdf>

 

OAS (2011). ‘Special Rapporteurship for Freedom of Expression’ Available at: <https://www.oas.org/en/iachr/expression/showarticle.asp?artID=848>

 

Organisation for Economic Co-operation and Development (OECD) (2010). The Economic and Social Role of Internet Intermediaries. March 2010. Available at: <www.oecd.org/internet/ieconomy/44949023.pdf>

 

Zingales, Nicolo (2015). ‘The Brazilian approach to internet intermediary liability: blueprint for a global regime?’. Internet Policy Review, 4(4). DOI: 10.14763/2015.4.395. Available at: <https://policyreview.info/articles/analysis/brazilian-approach-internet-intermediary-liability-blueprint-global-regime>

49. Internet Safety/Security

  1. The terms internet safety and internet security are closely connected. In the words of the European Commission (2020, 1), "Security is not only the basis for personal safety, it also provides the foundation for confidence and dynamism in our economy, our society and our democracy." Internet safety/security are therefore perceived as general policy issues, concerned with worldwide challenges such as crime as well as health and safety at the individual level.
  1. However, when the internet infrastructure is perceived as a critical environment, specific security challenges are dealt with through internet governance measures tailored to provide guarantees surrounding the internet’s technical functioning. Importantly, platforms and other service providers that are tasked with providing access to users play a general role in this respect.
  1. As introduced above, the concept of "internet safety/security" evokes other terms commonly understood as threats in the digital environment. For this reason, much national legislation regulating misconduct is relevant to this topic. This is reflected in the approach taken in the Convention on Cybercrime (Council of Europe, 2001), which aims to have its members maintain, update or introduce substantive criminal law measures to deal with the problem of cybercrime. Regarded as the first international treaty on this topic, the Convention is widely used as a reference for developing law and policy (see, for example, the site on EU Law on Cybercrime). In pursuit of such solutions, several soft-law measures have been taken in recent years, such as the "EU Code of conduct on countering illegal hate speech online" (European Commission 2016), through which the EU sought more proactive responses and accountability from major private internet companies. As pointed out by the European Commission (2020, 13): "The latest evaluation shows that companies assess 90% of flagged content within 24 hours and remove 71% of the content deemed to be illegal hate speech. However, the platforms need to improve further transparency and feedback to users and to ensure consistent evaluation of flagged content."
  1. With threats often described as “hybrid” in form, responses to the issues surrounding internet safety and security have in recent years led governments all over the world to regularly renew existing strategies or adopt new ones. As an example, the latest “National Cyber Strategy” (2018) in the US stated the intention to increase the imposition of “costs” on all kinds of different players in the internet environment, in order “to deter malicious cyber actors and prevent further escalation” (id, p. 2). Examples of the implementation of this strategy by the US government are the executive orders by President Trump seeking to restrict users’ ability to access several (social) media and mobile applications, such as WeChat and TikTok (see the orders of The White House, 6 August 2020). The US government’s orders against “WeChat” and “TikTok” were justified on national cybersecurity grounds. A detailed plan of specific prohibitions was announced in order to limit the availability of the targeted apps in US app stores. However, both executive orders have led to further administrative actions and judicial (counter-)measures limiting their execution and reopening access to the US market. Another approach, recently initiated by the Russian Government, takes the form of establishing a network that can operate alongside the WWW in case of an attack. This so-called RuNet is envisioned as an obligation for internet providers in Russia, implemented through new rules established by the Federal law No. 90-FZ on Amendments to Certain Legislative Acts of the Russian Federation (in terms of ensuring the safe and sustainable functioning of the Internet in the territory of the Russian Federation).
  1. It is likely that more such new technical safety measures, or even new protocols for the technical functioning of the internet, will be put forward as policy choices in response to safety and security threats, including at the international level, such as within the ITU. For example, such topics are high on the agenda of the World Telecommunication Standardization Assembly 2020 (such as the “New IP” protocol system persistently promoted by Huawei and the Chinese Government).
  1. Major policy areas related to this term can be found in connection with all the national legal interventions that are responsive to fundamental (human) rights (such as children's rights), as well as the general issues of cybercrime already pointed to above (such as racism, xenophobia, hate crime, theft, etc.). These areas receive regular attention in the form of proposals for common legal and regulatory responsibilities that would be applicable across the internet. In this regard, a notable legislative initiative is the UK's recently proposed "online safety laws", as put forward in the "Online Harms White Paper" (2019), which proposed a broad (statutory) "duty of care".

 

References

 

Huawei and the Chinese Government, ‘New IP protocol system’ [2019] Available at: https://www.itu.int/md/T17-TSAG-C-0083

 

World Telecommunication Standardization Assembly, [2020]. Available at: <https://www.internetsociety.org/resources/doc/2020/itu-wtsa-2020-background-paper/>

 

Russian Federation, ‘Federal law No. 90-FZ on Amendments to Certain Legislative Acts of the Russian Federation’ [2019] Available at: http://publication.pravo.gov.ru/Document/View/0001201905010025?index=0&rangeSize=1

 

ICNL, 'Russia' (ICNL 2020) <https://www.icnl.org/resources/civic-freedom-monitor/russia>

 

European Commission, ‘EU Law on Cybercrime’. Available at: https://ec.europa.eu/home-affairs/what-we-do/policies/cybercrime_en

 

White House, ‘National Cyber Strategy’, [2018]. Available at: https://www.whitehouse.gov/wp-content/uploads/2018/09/National-Cyber-Strategy.pdf

 

Convention on Cybercrime adopted 23 November 2001, entered into force 1 July 2004, ETS No.185, available at <www.coe.int/en/web/conventions/full-list/-/conventions/treaty/005 >

 

European Commission. 2016. "The EU Code of conduct on countering illegal hate speech online." Available at: <https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en>

 

European Commission. 2020. "Communication on the EU Security Union Strategy." Available at: <https://ec.europa.eu/info/sites/info/files/communication-eu-security-union-strategy.pdf>

 

UK Department for Digital, Culture, Media & Sport and Home Office. ‘Online Harms White Paper’, [2020] Available at: <https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper>

 

UK Government, ‘Press release’, [2019] <https://www.gov.uk/government/news/uk-to-introduce-world-first-online-safety-laws>

 

‘Symposium: Online Harms White Paper’, [2019] Journal of Media Law, Vol 11, issue 1. Available at: https://www.tandfonline.com/toc/rjml20/11/1?nav=tocList

 

The White House/US President Trump, Executive Order on Addressing the Threat Posed by WeChat, 6 August 2020. Available at: <https://www.whitehouse.gov/presidential-actions/executive-order-addressing-threat-posed-wechat/>.

 

The White House/US President Trump, Executive Order on Addressing the Threat Posed by TikTok, 6 August 2020. Available at: <https://www.whitehouse.gov/presidential-actions/executive-order-addressing-threat-posed-wechat/>.

50. Interoperability

  1. Interoperability is the ability to transfer and render useful data and other information across systems, applications, or components. The combination of transmission and analysis involves several layers of the so-called Open Systems Interconnection model (OSI model), requiring the achievement of various levels of interoperability. At a minimum, one should distinguish the lower and the upper layer, pointing to a division between infrastructural interoperability and data interoperability.
  1. At the infrastructure (lower) layer, interoperability is achieved through the use of common protocols for the conversion, identification and logical addressing of data to be transmitted over a network. The most common standards in this layer are Ethernet and TCP/IP. Protocols are also used for communication between computer programmes over telecommunications equipment, through common languages such as HTTP for web content, and SMTP, IMAP and POP3 for emails.
  1. At the application (upper) layer, interoperability is attained by reading and reproducing specific parts of computer programs, called interfaces, which contain the information necessary to “run” programs in a compatible format. However, different interfaces are needed depending on who actually “runs” the program: from the perspective of the user/consumer of the computer program, user interfaces are relevant to the extent that they enable him or her to visualize and deploy a specific set of commands or modes of interaction with the program, which can potentially be replicated in another (different) application. Importantly, although this kind of interoperability can increase a program’s utility to the user, it is not required for the purpose of its technical functioning. Most choices for user interfaces are indeed dictated not so much by functional elements of the program as by the pursuit of user friendliness, aesthetic appeal and the promotion of brand-specific features.
  1. In a data-driven economy, the importance of open technical standards can hardly be overstated: common technical and legal protocols for interconnection and data processing enable communication and portability, thereby stimulating innovation and promoting competition of services within a given technological paradigm.
  1. The degree to which such standards are truly open is likely to be a significant point of contention among different types of businesses. Granting automatic access to technology implementers can affect a technology provider’s ability to appropriate the value of its innovation in downstream markets; this in turn may lead important players in the industry to not only abstain from standard-setting efforts, but also implement strategies aimed at foreclosing interoperability with competitors’ technologies (horizontal interoperability) and preventing third parties from building on top of their technology (vertical interoperability).
  1. From the perspective of the developer of a computer program, the relevant interfaces for interoperability are the Application Programming Interfaces, i.e. any well-defined software interfaces which define the service that one component, module or application provides to other software elements. However, interoperable APIs do not necessarily imply the ability of either users or developers to meaningfully relate the outputs of interoperable computer programs, unless they are expressed in the same language (most commonly, JPEG for images, HTML for webpages, PDF for documents and MP3 for music). This can be achieved through the so-called “data interfaces”, which are responsible for storing and retrieving data in a specific format (the sketch following this list illustrates the distinction between an API and a shared data format).
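As a rough illustration of the distinction drawn above between application programming interfaces and data interfaces, the Python sketch below defines a small, hypothetical export API whose output follows an agreed data format (JSON here, standing in for formats such as HTML or PDF) that an independently written component can consume. All function and field names are invented for the example.

```python
import json

# --- Component A: exposes a small, documented API -------------------------
def export_contacts(contacts):
    """API: given a list of (name, email) tuples, return a JSON string.

    The JSON layout below acts as the 'data interface': any other program
    that knows this layout can consume the output, regardless of how
    Component A is implemented internally.
    """
    return json.dumps([{"name": n, "email": e} for n, e in contacts])

# --- Component B: written independently, relies only on the data format ---
def count_domains(exported_json):
    """Reads the agreed JSON layout and counts contacts per email domain."""
    counts = {}
    for record in json.loads(exported_json):
        domain = record["email"].split("@")[-1]
        counts[domain] = counts.get(domain, 0) + 1
    return counts

if __name__ == "__main__":
    payload = export_contacts([("Ada", "ada@example.org"),
                               ("Grace", "grace@example.org"),
                               ("Alan", "alan@example.com")])
    print(count_domains(payload))   # {'example.org': 2, 'example.com': 1}
```

Here the two components interoperate only through the documented function and the agreed JSON layout, not through any knowledge of each other's internal code.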

 

References

 

A. Van Rooijen, The Software Interface between Copyright and Competition Law: A Legal Analysis of Interoperability in Computer Programs (Kluwer Law, Alphen aan den Rijn 2010)

 

C. R. B. de Souza et al, “Sometimes You Need to See Through Walls - A Field Study of Application Programming Interfaces”, in Computer supported cooperative work, ACM Press 2004, p. 63-71

 

IEEE Guide to the POSIX Open System Environment (OSE), in IEEE Std 1003.0-1995 , vol., no., pp.0_3-, 1995, doi: 10.1109/IEEESTD.1995.81544.

51. Liability

  1. Liability refers to a legally enforceable responsibility for a harmful event. Liability can be civil or criminal, which are fundamentally different concepts in their origin and nature: the former implies a responsibility from a financial perspective, which can be explicitly foreseen by a statute but can also result from contractual arrangements; whereas the latter implies the commission of a criminal offence, and thus necessarily depends on the existence of a primary norm in the legal system establishing a prohibited conduct (either active or passive). The primary goal of these two forms of liability is also different: the former aims to ensure compensation, while the latter aims at deterrence.
  1. In the context of platforms and more generally of intermediaries, an important distinction should be made between primary and secondary liability: while the former requires the violation of a specific rule of conduct directed to the intermediary, the latter arises from duties that are triggered by the conduct of third parties. However, there is some confusion in the use of these terms across jurisdictions, as the dividing line between primary (also known as “direct”) and secondary (also known as “indirect”) liability is not always clear-cut: several statutes attribute primary liability to intermediaries for the failure to prevent, or the implicit authorization of, third-party conduct (for instance, the doctrine of “authorization” in Australian and UK copyright law).
  1. Terminological clarifications aside, two main justifications are used to impose secondary liability: participation and relationship. The latter is the one that can be most easily circumscribed, as it requires the existence of a specific relationship between the primary and secondary infringer, where the latter benefits from the harm and is sufficiently close in relationship to the primary wrongdoer. The best example of this is employment relationships (based on the principle of respondeat superior), but the same rationale has been extended under the doctrine of vicarious liability to a range of scenarios where the secondary infringer had the right and ability to control the conduct of the primary infringer and derives a financial benefit from it. It should be noted that the liability exemptions for hosts in the US Digital Millennium Copyright Act and in the European E-Commerce Directive specifically leave out circumstances where an intermediary had authority or control over a third-party activity, but the former also includes the additional requirement of deriving no financial advantage from such activity.
  1. Participatory liability depends on the existence of a requisite degree of participation, which can range from mere facilitation to purposeful combination. In the UK, for instance, three types of participatory liability have been recognized: combination, authorization and inducement liability. Combination is the most intuitive scenario, where two or more parties have a common design or enterprise, and the infringing acts are in pursuance or furtherance of that. Initially, this was interpreted strictly at common law to require an identity of concerted action to a common end; however, more recent cases adopted a more liberal approach requiring a combination (even tacit) to secure the doing of acts which eventually prove to be infringements. The doctrine has been used in Dramatico Entertainment Ltd v British Sky Broadcasting, [2012] EWHC 268 (Ch), [2012] 3 CLMR 14 to find that the Pirate Bay website facilitates its users’ infringement of copyright, on grounds that there were hardly any lawful uses of the site. Most recently, three limits to its expansion were identified in Fish & Fish Ltd v Sea Shepherd UK  [2013] EWCA Civ 544, [2013] 3 All ER 867. First, common design will not be assumed simply because a person sells a product to another knowing that it is going to be used to commit a tort; second, there needs to be something more than just a close relationship between the parties; and third, approval of a person’s plan will not be sufficient in itself to give rise to common design.
  1. Authorization liability implies a different form of participation consisting of (tacit or explicit) permission, or possibly an order, from a person having (or purporting to have) authority over the “immediate” wrongdoer. It requires sufficient knowledge of the relevant circumstances and the acts committed (or to be committed) by the primary infringer. In the UK, a court established in Newzbin (No. 1) [2010] FSR 21 [90] that its application depends on a number of factors, such as the nature of the relationship between the alleged authoriser and the primary infringer, whether the equipment or other material supplied constitutes the means used to infringe, whether it is inevitable it will be used to infringe, the degree of control which the supplier retains and whether he has taken any steps to prevent infringement.
  1. Finally, inducement liability requires a further degree of participation, including acts such as persuasion and encouragement, for an infringing purpose. For instance, it was recently the doctrine on the basis of which a bookmaker was found secondarily liable for copyright infringement for providing its customers with a link to a database of infringing information concerning live football matches. In US law, this is recognized in the field of patents and copyright, for those who distribute a device with the object of promoting an infringing use; however, such intent must be shown by clear expression or other affirmative steps taken to foster infringement, which is not always easy for a plaintiff. Famously, in Metro-Goldwyn-Mayer Studios Inc. v. Grokster 545 U.S. 913 (2005), the US Supreme Court found inducement liability for peer-to-peer software offered by Grokster on the basis of three key factors: (1) efforts to satisfy a known demand for infringing content; (2) an absence of design efforts to diminish infringement; and (3) Grokster’s financial benefit from the activity.
  1. Both in criminal and in civil liability cases, a defendant can be subjected to an injunction, i.e. a court order that imposes a given conduct – be it an action or an inaction. This category of liability should be distinguished because, although it may arise in connection with the existence of intermediary liability, the cause of action is independent: the liability is attached to the failure of complying with the judicial order, rather than the responsibility for a third-party conduct. It has thus been suggested that the appropriate term is one of accountability, as discussed in that definition.

 

References

 

See Paul Davies, Accessory Liability (Hart Publishing, 2015); Hazel Carty, ‘Joint Tortfeasance and Assistance Liability’, (1999) 19 Legal Studies 489.

 

Richard Arnold and Paul S Davies, ‘Accessory liability for intellectual property infringement: the case of authorization’, Law Quarterly Review 442 (2017)

 

Graeme Dinwoodie, ‘A Comparative Analysis of Secondary Liability of Online Service Providers’ in Graeme Dinwoodie (ed.), Secondary Liability of Internet Service Providers (Springer 2017).

 

Jaani Riordan, ‘A Taxonomy of Intermediary Liability’, in G. Frosio (ed.), The Oxford Handbook on Intermediary Liability (OUP, 2020).

 

Christina Angelopoulos, Harmonizing Intermediary Copyright Liability in the EU: A Summary, in G. Frosio (ed.), The Oxford Handbook on Intermediary Liability (OUP, 2020).

 

Husovec, Martin, Accountable, Not Liable: Injunctions Against Intermediaries (May 2, 2016). TILEC Discussion Paper No. 2016-012, Available at SSRN: https://ssrn.com/abstract=2773768 or http://dx.doi.org/10.2139/ssrn.2773768

 

 

52. Marketplace

  1. A marketplace is a web-based service enabling the sale of goods or the provision of services by third-party vendors. While the marketplace operator can also provide goods (of which it has full ownership) or services (through its own employees), this activity amounts merely to distance/online selling. Marketplace operators either process users’ payments for the good or service in favor of the third-party vendors themselves, or provide built-in tools to do so. The marketplace may or may not collect a fee for its intermediation service. Vendors can be businesses or consumers. Target users can likewise come from business or retail backgrounds.
  1. Within marketplaces, sharing economy platform operators facilitate transactions between providers and users. The transaction can relate to the temporary use of or access to a good, to intermediation services, or to any service between providers acting outside their professional activity and users. The scope of this notion is widely contested in the literature and should only encompass the narrowest sense (otherwise it would be equivalent to the notion of “marketplace”).
  1. Marketplaces act as “points of control”. Their influence on the underlying supply of goods or services is thus questioned under vertical restraints theories in competition law. Because they are points of control, policymakers can also use them to ensure compliance with certain economic policies, especially in tax matters (e.g. by imposing reporting duties).
  1. The third parties providing goods or listing offers for services on these marketplaces can be either businesses or individuals. This widens the opportunities for individuals to offer goods and services on a frequent basis without falling within the scope of consumer protection legislation. Indeed, consumer protection legislation often requires the service provider or the seller to be a business. Two situations qualified as unfair may thus arise, because the marketplace hides both the identity of the third parties and the frequency of the transactions carried out by a given seller/provider. On the one hand, businesses may try to pass themselves off as individuals selling goods or offering services without consumer protection warranties. On the other hand, individuals may grow an activity as large as a business and evade the legislation applicable to that business (in terms of consumer protection but also in terms of licencing), thus creating a situation of unfair competition. This is the crux of many judicial challenges aimed at ensuring parity between “brick-and-mortar” and “click-and-mortar” businesses (especially in the so-called sharing economy).
  1. Because goods and services are offered by third parties on marketplaces, the issue of the liability of the platform is often raised, notably in terms of intellectual property rights, where platforms have due diligence duties to ensure that trademark and copyright protections are not infringed (see notice-and-takedown). Additionally, policymakers, as well as incumbents who feel unduly harmed by the unfair competition of click-and-mortar businesses, also seek to hold platforms liable for other forms of illegal goods, services or activities (e.g. with regard to licence requirements). The success of this assertion varies largely from country to country.

 

References

 

COM 356 – ‘A European agenda for the collaborative economy’, [2016].

 

OECD. "The Role of Internet Intermediaries in Advancing Public Policy Objectives." In OECD Publishing, [2011].

 

Calo, Ryan, and Alex Rosenblat, ‘The Taking Economy: Uber, Information, and Power’, [2017] Columbia Law Review 117.

 

Edelman, Benjamin G, and Abbey Stemler. ‘From the Digital to the Physical: Federal Limitations on Regulating Online Marketplaces.’ [2019] Harvard Journal on Legislation, Forthcoming, 18-063.

 

Balfour, Abigail W. ‘Where One Marketplace Closes, (Hopefully) Another Won't Open: In Defense of Fosta.’ [2019] Boston College Law Review 60, no. 8, 2475.

 

Hoppner, Thomas, and Philipp Westerhoff. ‘The Eu’s Competition Investigation into Amazon’s Marketplace.’ [2018] Hausfeld Bulletin, no. Fall (2018).

 

Stemler, Abbey. ‘Feedback Loop Failure: Implications for the Self-Regulation of the Sharing Economy’, [2017] Minn. J. L. Sc. Tech. 18, 674, 712.

 

Lobel, Orly. ‘The Law of the Platform.’ [2016] Minn. L.R. 101, 87, 166.

 

Goldman, Eric. ‘An Overview of the United States’ Section 230 Internet Immunity.’ [2020] In Oxford Handbook of Online Intermediary Liability, edited by Giancarlo Frosio: Oxford, UK: Oxford University Press.

 

Goldman, Eric. "The Complicated Story of Fosta and Section 230." [In eng]. First Amendment Law Review 17 (2018-2019 2018): 279-93.

 

Chander, Anupam. "How Law Made Silicon Valley." Emory Law Journal 63 (2014): 639-94.

 

Hatzopoulos, V., & Roma, S. (2017). Caring for sharing? The collaborative economy under EU law. Common Market Law Review, 54(1)

 

Hatzopoulos, V. (2019). After Uber Spain: the EU’s approach on the sharing economy in need of review. Case Comment. European Law Review, 44.

53. Media Pluralism

  1. At its core, media pluralism refers to the diversity of media sources and opinions available to any given audience. More specifically, Reporters Without Borders (RSF, 2016) stresses that media pluralism can either refer to “a plurality of voices, of analyses, of expressed opinions and issues (internal pluralism), or a plurality of media outlets, of types of media (print, radio, TV, or digital), and coexistence of privately owned media and public-service media (external pluralism).” Media pluralism is imperative to a healthy, functioning democracy, as it fosters an information ecosystem that enables citizens to access a range of opinions, confront ideas, make informed choices, and conduct their life freely. Yet, consumption habits, changing economic models, and technical systems are threatening media pluralism around the world. Media consolidation and concentration (Wikipedia) are also key threats. As fewer individuals or organisations control increasing shares of mass media producers, editorial independence, narrative diversity, and public-interest reporting become much more limited and controlled.
  1. In the age of digital and technological convergence, both internal pluralism and external pluralism are relevant to Internet governance discussions. When taken together, they reflect myriad digital policy areas, specifically access to information and media sustainability. Diversity – ranging from gender perspectives to the voices of minorities and marginalised groups – is a crucial component of internal pluralism, for instance. Internal media pluralism is also inextricably linked to bridging the digital divide(s) as well as encouraging skill development via digital media literacy and local capacity development. New technologies pose a threat to internal plurality as well, specifically the phenomenon of artificial intelligence (AI) applications being used to replace editors and content curators. Digital platforms have an important responsibility to promote and ultimately preserve internal media pluralism in their role as a primary gatekeeper (Helberger et al., 2015) to information diversity. Key recommendations (Global Forum For Media Development, 2020) for safeguarding this role include remodeling platform algorithms and moderation practices, as well as reversing commercial incentives that discriminate against journalism and news media.
  1. On the other hand, external pluralism is intrinsically tied to discussions around digital markets and media market failure (Pickard, 2019), competition and innovation, media funding, and zero rating. Dominant Internet business models continue to place strain (Chicago Booth, 2019) on both legacy and new/digital media outlets, which, in turn, makes local and regional media ecosystems more fragile, more prone to closures and the creation of “news deserts” (UNC), and more susceptible to media capture (Center for International Media Assistance) – a form of governance failure that occurs when the news media advance the commercial or political concerns of state and/or non-state special interest groups controlling the media industry instead of holding those groups accountable and reporting in the public interest. Looking ahead, media plurality and platform governance go hand-in-hand. Recognising how vital media plurality is, and ultimately working to safeguard it, is a critical endeavour going forward.

 

References

 

Reporters Sans Frontieres, 'Contribution to the EU public consultation on media pluralism and democracy' (European Commission 2016) <https://ec.europa.eu/information_society/newsroom/image/document/2016-44/reporterssansfrontiers_18792.pdf>

 

Wikipedia, 'Concentration of media ownership' (Wikipedia) <https://en.wikipedia.org/wiki/Concentration_of_media_ownership>

 

Natali Helberger, Katharina Kleinen-von Königslöw and Rob van der Noll, 'Regulating the new information intermediaries as gatekeepers of information diversity' [2015] Vol. 17 No. 6, Emerald Group Publishing Limited, ISSN 1463-6697, 50-71

 

Global Forum For Media Development, 'Joint Emergency Appeal For Journalism And Media Support' (Global Forum For Media Development 2020) <https://gfmd.info/emergency-appeal-for-journalism-and-media-support-2/>

 

Victor Pickard, 'Public Investments for Global News' (Center for International Governance Innovation 2019) <https://www.cigionline.org/articles/public-investments-global-news>

 

Stigler Center News, 'Stigler Committee on Digital Platforms: Final Report' (Chicago Booth 2019) <https://www.chicagobooth.edu/research/stigler/news-and-media/committee-on-digital-platforms-final-report>

 

UNC, 'Do You Live In A News Desert?' (UNC) <https://www.usnewsdeserts.com/>

 

Center for International Media Assistance, 'What is Media Capture?' (Center for International Media Assistance) <https://www.cima.ned.org/resources/media-capture/>

 

Article 19, 'Media Freedom' (Article 19) <https://www.article19.org/issue/media-freedom/>

Center for International Media Assistance, <https://www.cima.ned.org/>

Centre for Media Pluralism and Media Freedom (CMPF)

 

CMPF, 'Media Pluralism Monitor' (CMPF) <https://cmpf.eui.eu/media-pluralism-monitor/>

 

Council of Europe, ‘Recommendation CM/Rec(2018)11 of the Committee of Ministers to Member States on Media Pluralism and Transparency of Media Ownership’. Available at: https://rm.coe.int/1680790e13

 

IGF, 'Dynamic Coalition on the Sustainability of Journalism and News Media' (IGF 2019) <https://groups.io/g/dc-sustainability>

 

GFMD, 'Internet Governance' (GFMD). Available at: https://gfmd.info/internet-governance/

 

Media Diversity Institute. Available at: https://www.media-diversity.org/

 

Government of the Netherlands, 'The concept of pluralism: media diversity' (Media Monitor) <https://www.mediamonitor.nl/english/the-concept-of-pluralism-media-diversity/>

 

UNESCO, 'Media Pluralism and Diversity' (UNESCO) <https://en.unesco.org/themes/media-pluralism-and-diversity>

 

UNESCO, World trends in freedom of expression and media development: global report 2017/2018 (1st, UNESCO, 2018). Available at: https://unesdoc.unesco.org/ark:/48223/pf0000261065

 

 

54. Microtargeting

(I) Targeting is a practice whereby online content, typically advertising content, is distributed towards particular audiences based on their personal data. In the words of William Gorton, microtargeting involves ‘creating finely honed messages targeted at narrow categories of voters’ based on data analysis ‘garnered from individuals’ demographic characteristics and consumer and lifestyle’ (Gorton, 2016). Targeting is often closely associated with personalization, and the terms are often used interchangeably. The prefix micro- is used to indicate that a highly specific audience is being targeted, although the precise criteria for this designation are rarely made explicit.

  1. The most popular and influential microtargeting services are those offered by major online platforms such as Google and Facebook, but the practice is not limited to these services. Indeed, microtargeting can also be done offline; many offline campaign activities, such as door-to-door canvassing, pamphleteering and telephone banking, can be targeted with the help of personal data, much in the same way as online advertising.
  1. When microtargeting relates to political advertisements, it is referred to as political microtargeting, which Ira Rubinstein describes as a form of ‘direct marketing in which political actors target personalized messages to individual voters by applying predictive modelling techniques to massive troves of voter data’ (Rubinstein, 2014). It is worth noting, however, that the concept of ‘political’ advertising is also ambiguous and continues to be contested, with some focusing on a narrower category of election campaign ads and others extending the term to cover all political ‘issues’ - which is itself a highly amorphous category (Leerssen et al, 2019). This same ambiguity about the boundaries of the political also arises in the context of microtargeting.
  1. The threshold where targeting becomes microtargeting is not always clear. One way to distinguish microtargeting is by reference to the size of the audience targeted. Another is to focus on the granularity of the personal data involved. Along these lines, Tom Dobber, Ronan Ó Fathaigh and Frederik Zuiderveen Borgesius propose, in the context of political advertising, that ‘micro-targeting differs from regular targeting not necessarily in the size of the target audience, but rather in the level of homogeneity, perceived by the political advertiser’ (Dobber, Ó Fathaigh and Zuiderveen Borgesius, 2019). In this reading, targeting an entire neighbourhood with a single message constitutes regular targeting, whereas tailoring different messages to different users within the neighbourhood, based on their personal data profiles, constitutes microtargeting. Overall, the available literature suggests a sliding scale between general and micro-targeting, rather than a strict binary. A rough illustrative sketch of this distinction follows below.
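As a purely illustrative sketch of the homogeneity-based distinction described above, the Python snippet below first addresses one message to everyone in a 'neighbourhood' (regular targeting) and then tailors different messages to narrower profile-based segments within it (microtargeting). The profile fields, segments and messages are all invented for the example and do not reflect any actual campaign practice.

```python
# Hypothetical user profiles; in practice these would come from large-scale data collection.
users = [
    {"id": 1, "neighbourhood": "Mitte", "age_band": "18-29", "interest": "climate"},
    {"id": 2, "neighbourhood": "Mitte", "age_band": "18-29", "interest": "housing"},
    {"id": 3, "neighbourhood": "Mitte", "age_band": "60+",   "interest": "pensions"},
]

# Regular targeting: one identical message for everyone in the same neighbourhood.
regular = {u["id"]: "Vote for us on Sunday!" for u in users
           if u["neighbourhood"] == "Mitte"}

# Microtargeting: different messages for narrower, more homogeneous segments
# built from additional personal data (age band and inferred interest).
tailored_copy = {
    ("18-29", "climate"):  "Our plan cuts emissions by 2030.",
    ("18-29", "housing"):  "We will cap your rent.",
    ("60+",   "pensions"): "We will protect your pension.",
}
micro = {u["id"]: tailored_copy[(u["age_band"], u["interest"])] for u in users}

print(regular)  # same message for all three users
print(micro)    # a different message per profile segment
```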

(II) ‘Microtargeting’ is a novel concept from communications science without any clearly defined legal meaning. Although various laws affect the practice of targeted advertising, including data protection laws and campaign finance laws (Dobber, O’Fathaigh and Zuiderveen Borgesius, 2019), these have not historically relied explicitly on the concept of ‘targeting’ or ‘microtargeting’ in doing so. Only recently has the concept made its first appearance in official policymaking.

  1. In its June 2020 resolution on competition policy, the European Parliament proposed a ban on micro-targeting performed by online platforms. The report “calls on the Commission to ban platforms from displaying micro-targeted advertisements and to increase transparency for users” (European Parliament 2020). A further operationalization of this concept has not (yet) been proposed, although accompanying statements from the amendment’s author, Paul Tang MEP, appear to use microtargeting interchangeably with ‘personalization’ – suggesting a relatively low threshold that could potentially cover most if not all targeting practices involving personal data (“European Parliament wants to forbid personalised advertisements” 2020).
  1. Occasionally, self-regulatory efforts by platforms and other online services also reference the concept of microtargeting. For instance, Google claimed that it had prohibited ‘microtargeting’ for political advertisements, by virtue of having restricted targeting options for these ads to a more limited selection: age, gender, and general location (Google 2019).
  1. In the context of political advertising and campaigning laws, microtargeting practices are now under intense scrutiny in multiple jurisdictions, with reforms recently completed or ongoing in, inter alia, Germany, France, the United Kingdom, the Netherlands, Sweden, Ireland, the United States, and Canada (For an overview, see Van Hoboken et al 2019). The European Commission has also singled out microtargeting as a point of attention in the ongoing Digital Services Act reforms. Until now, the majority of these laws and proposals have focused on issues such as campaign finance and transparency, and have not (yet) tackled the legality of microtargeting as such.

 

References

 

Dobber, Tom, Ronan Ó Fathaigh and Frederik Zuiderveen Borgesius. 2019. “The regulation of online political micro-targeting in Europe”. Internet Policy Review 8(4). Available at: https://policyreview.info/articles/analysis/regulation-online-political-micro-targeting-europe

 

European Parliament. 2020. Resolution of 18 June 2020 on competition policy – annual report 2019. 2019/2131(INI). Available at: https://www.europarl.europa.eu/doceo/document/TA-9-2020- 0158_EN.html

 

“European Parliament wants to forbid personalised advertisements”. 19 June 2020. Paultang.nl. Available at: https://paultang.nl/en/forbid-personalised-ads/

 

Google. 2019. “An Update on our political ads policy”. Google Official Blog. Available at: https://blog.google/technology/ads/update-our-political-ads-policy

 

Gorton, William. 2016. “Manipulating Citizens: How Political Campaigns’ Use Of Behavioral Science Harms Democracy”. New Political Science 38(1).

 

Rubinstein, Ira. 2014. “Voter Privacy in the Age of Big Data” Wisconsin Law Review 5, pp. 861- 936.

 

Leerssen et al, 2019. “Platform Ad Archives: Promises and Pitfalls”. Internet Policy Review 8(4). Available at: https://policyreview.info/articles/analysis/platform-ad-archives-promises-and-pitfalls

 

Van Hoboken, Joris, Naomi Appelman, Ronan Ó Fathaigh, Paddy Leerssen, Tarlach McGonagle, Nico van Eijk and Natali Helberger. 2019. The legal framework on the dissemination of disinformation through Internet services and the regulation of political advertising: A report for the Ministry of the Interior and Kingdom Relations. Institute for Information Law (IViR). Available at: https://www.ivir.nl/publicaties/download/Report_Disinformation_Dec2019-1.pdf

55. Moderation

  1. Content moderation can be described as the result of editorial decisions made by the actors who govern the space where information is published. Moderation has also been defined as “the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse.” (Grimmelmann, 2015).
  1. Content moderation is not a novelty in the media sector. As content providers, traditional media outlets like television and newspapers have always selected the information to broadcast or publish. This activity has also extended to the digital environment. Since the first online fora, communities have moderated digital spaces to decide which content reflects the values or interests of the group, without commercial purposes. In recent years, the commercial side of content moderation has evolved with online platforms, particularly social media, which have built a bureaucracy to moderate content (Klonick, 2019). This activity has been defined as “the screening, evaluation, categorization, approval or removal/hiding of online content according to relevant communications and publishing policies […] to support and enforce positive communications behaviour online, and to minimize aggression and anti-social behaviour.” (Flew et al., 2019). The content flowing through social media spaces is not left unmanaged but is subject to a wide range of practices applied by platforms to manage content posted by their users (Niva Elkin-Koren & Maayan Perel, 2019a). In the case of Facebook alone, the number of posts moderated across different areas of the world each week is on the order of billions.
  1. While some practices intend to optimize the matching of content with the users who view it and would potentially engage with it, other practices intend to ensure that content complies with appropriate norms (Niva Elkin-Koren & Maayan Perel, 2019b). Social media decide how to organize users’ news feeds or set their recommendation systems to target certain categories of users (i.e. soft moderation). Alongside such activities, social media make editorial decisions which can also lead to the removal of online content to ensure respect for, and enforcement of, community rules (i.e. hard moderation). Content moderation decisions can be entirely automated, made by humans, or a mix of the two (Gorwa et al. 2020). While pre-moderation activities like prioritisation, delisting and geo-blocking are usually automated, post-moderation is usually the result of combined automated and human moderation (a simplified sketch of such a pipeline appears after this entry, before the references). The massive amount of content to moderate explains why content moderation is usually performed by a mix of machines and human moderators, who decide whether to maintain or delete the vast amount of content flowing every day on social media (Roberts, 2019).
  1. Within this framework, social media platforms facilitate the global exchange of user-generated content at gigantic scale while governing information flows online (Kaye, 2019). However, these characteristics are just one part of the jigsaw explaining platforms’ ability, and their reasons, to establish at their discretion how content moderation is carried out. Content moderation is the constitutional activity of social media (Gillespie 2019). Moderating online content is an almost obligatory step for social media, not only to manage removal requests but also to prevent their digital spaces from turning into hostile environments for users due to the spread, for example, of incitement to hatred. Indeed, platforms’ interest lies not just in facilitating the spread of opinions and ideas across the globe but in establishing a digital environment where users feel free to share information and data that can feed commercial networks and channels and, especially, attract profits coming from advertising. In other words, content moderation is performed to attract revenues by ensuring a healthy online community, protecting the corporate image and showing commitment to ethical values. Within this business framework, users’ data are the central product of online platforms under a logic of accumulation (Zuboff, 2019).
  1. In this scenario, content moderation produces positive effects for freedom of expression and democratic values. The organization, filtering and removal of content increases the possibility for users to experience a safe digital environment without the interference of objectionable or harmful content. At the same time, content moderation negatively impacts the right to freedom of expression, since social media can select which information is maintained and which is deleted according to standards based on the interest in avoiding monetary penalties or reputational damage. Such a situation is usually defined as collateral censorship (Balkin, 2014). Scholars have observed that online platforms try to avoid regulatory burdens by relying on the protection recognised by the First Amendment while, at the same time, claiming immunities as passive conduits for third-party content (Pasquale, 2016). As underlined, immunity allows Internet intermediaries ‘to have their free speech and everyone else’s too’ (Tushnet, 2008). Moreover, extensive content moderation also affects the rights to privacy and data protection, since users may fear being subject to a regime of private surveillance over their information and data. It is worth observing that, in the latter case, even the right to free speech is involved, owing to users’ concern about being monitored through the information they publish.
  1. More broadly, content moderation also challenges democratic values, such as the principle of the rule of law, since social media autonomously determine how freedom of expression online is protected on a global scale without any public safeguard (Suzor, 2020). The immunity granted by these laws allows online platforms to freely choose which values they want to protect and promote, no matter whether democratic or anti-democratic and authoritarian. Since online platforms are private businesses, they naturally tend to focus on minimising economic risks rather than ensuring a fair balance between fundamental rights when moderating content (De Gregorio, 2019b). The international relevance of content moderation can also be understood by looking at how this activity has contributed to escalating violent conflict in countries like Myanmar or Sri Lanka, leading some States to shut down social media, as is increasingly happening in African countries.
  1. Addressing the challenges of content moderation without undermining its social relevance for the digital environment is one of the primary points from a policy perspective. In making decisions on online content, social media platforms apply a complex system of norms, driven by consumption, commercial interests, social norms, liability rules and regulatory duties, where each set of norms may interact with the others (Belli and Zingales, 2017). Scholars have mostly proposed to protect the system of immunity (Keller, 2018) or reinterpret its characteristics (Bridy, 2018), to build an administrative monitoring-and-compliance regime (Langvardt, 2017), or to introduce more safeguards in the process of moderation (De Gregorio 2020a; Bloch-Wehba 2020). In other words, the focus would move from liability to responsibility. To achieve this purpose, transparency and accountability safeguards could help to reveal how speech is governed behind the scenes without overwhelming platforms with disproportionate monitoring obligations.
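The combination of automated and human moderation described in this entry can be pictured with a minimal, assumption-laden Python sketch: an automated scoring step handles clear-cut cases, while uncertain cases are routed to a human review queue. The scoring heuristic, thresholds and placeholder terms are invented for illustration and do not correspond to any platform's actual system.

```python
from collections import deque

# Hypothetical automated classifier: returns a score in [0, 1] estimating how
# likely a post is to violate platform rules. A real system would use a trained
# model; here a crude keyword heuristic stands in for it.
def violation_score(text):
    flagged_terms = {"spamlink", "slur1", "threat1"}  # placeholder terms
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)

REMOVE_THRESHOLD = 0.8   # clearly violating content is removed automatically
ALLOW_THRESHOLD = 0.2    # clearly benign content is published without review

human_review_queue = deque()   # uncertain cases go to human moderators

def moderate(post):
    score = violation_score(post)
    if score >= REMOVE_THRESHOLD:
        return "removed (automated)"
    if score <= ALLOW_THRESHOLD:
        return "published (automated)"
    human_review_queue.append(post)   # post-moderation by humans
    return "queued for human review"

for post in ["Nice photo!", "borderline spamlink offer", "slur1 and threat1"]:
    print(moderate(post))
```

The point of the sketch is only the division of labour: automated scoring filters the bulk of clear cases, while ambiguous content is deferred to human judgment, mirroring the mix of machine and human moderation discussed above.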

 

References

 

Balkin, Jack M., ‘Old-School/New-School Speech Regulation’ (2014) 127 Harvard Law Review 2296;

 

Belli, Luca and Zingales, Nicolo (eds), Platform Regulations. How Platforms are Regulated and How They Regulate Us (FGV Rio 2017);

 

Bridy, Annemarie, ‘Remediating Social Media: A Layer-Conscious Approach’ (2018) 24 Boston University Journal of Science and Technology Law 193.

 

Bloch-Wehba, Hannah, ‘Automation in Moderation’ SSRN (29 January 2020) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3521619

 

De Gregorio, Giovanni (a), ‘Democratising Content Moderation: A Constitutional Perspective’ (2020) 36 Computer Law & Security Review.

 

De Gregorio, Giovanni (b), ‘From Constitutional Freedoms to the Power of the Platforms: Protecting Fundamental Rights Online in the Algorithmic Society’ (2019) 11(2) European Journal of Legal Studies 65:

 

Elkin-Koren, Niva and Perel, Maayan (a), ‘Algorithmic Governance by Online Intermediaries’ in Eric Brousseau et al. (eds), Oxford Handbook of Institutions of International Economic Governance and Market Regulation (Oxford University Press 2019).

 

Elkin-Koren, Niva and Perel, Maayan (b), ‘Separation of Functions for AI: Restraining Speech Regulation by Online Platforms’ SSRN (22 August 2019) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3439261;

 

Flew, Terry et al., ‘Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance’ (2019) 10(1) Journal of Digital Media & Policy 33;

 

Gorwa, Robert et al., ‘Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance’ (2020) 7(1) Big Data & Society 1;

 

Grimmelmann, James, ‘The Virtues of Moderation’ (2015) 17 Yale Journal of Law & Technology 42;

 

Gillespie, Tarleton, Custodians of the Internet. Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale University Press, 2018);

 

Kaye, David, Speech Police: The Global Struggle to Govern the Internet (Columbia Global Reports 2019).

 

Keller, Daphne, ‘Internet Platforms: Observations on Speech, Danger, and Money’ (2018) Hoover Institution's Aegis Paper Series, No. 1807;

 

Klonick, Kate, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2018) 131 Harvard Law Review 1598;

 

Langvardt, Kyle, ‘Regulating Online Content Moderation’ (2017) 106(5) Georgetown Law Journal 1353.

 

Pasquale, Frank, ‘Platform Neutrality: Enhancing Freedom of Expression in Spheres of Private Power’ (2016) 17 Theoretical Inquiries in Law 487;

 

Roberts, Sarah T., Behind the Screen. Content Moderation in the Shadows of Social Media (Yale University Press 2018);

 

Suzor, Nicolas, Lawless: The Secret Rules That Govern Our Digital Lives (Cambridge University Press 2019);

 

Tushnet, Rebecca, ‘Power Without Responsibility: Intermediaries and the First Amendment’ (2008) 76 The George Washington Law Review 986.

 

Zuboff, Shoshana, Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power (Public Affairs 2019).

 

 

56. Must-carry

  1. Must carry legislation is one of the instruments used in the regulation of cable TV to promote diversity and ensure the broadcast of channels that would not normally be included in the operators' bundle (Perry, n.d.).
  1. The must carry rules appeared in 1965, in the face of the expansion of the cable TV service, to prevent the growing power of cable TV providers from suppressing local broadcasters (Valente, 2013). Historically seen as a guarantee that broadcasters would be distributed by cable television providers, the must carry rules became more complex over time.
  1. In 1968, the U.S. Court of Appeals issued the first decision (Black Hills Video Corp. v. FCC, 1968) upholding the legitimacy of must carry, holding that its rules preserved local broadcasting without thereby violating the freedom of expression guaranteed by the First Amendment to the United States Constitution. From then on, a long-standing debate on the importance of must carry rules was launched (Pieranti & Festner, 2008).
  1. This discussion recalls the debates on net neutrality and the obligation for internet service providers to ensure that all information that runs over the network is treated equally (Ballard, 2011). Here we can draw a parallel between the power of cable TV services and the power of ISPs, where both have the technical capacity and economic incentives to restrict their competitors and/or favor their own business or partners (Belli, 2016). That is why the debates on freedom of expression and monopolistic tendencies are so central both to must carry discussions and to those on network neutrality (Patrick & Scharphorn, 2015).

 

References

 

Belli, Luca. (2016). ‘Net neutrality, zero rating and the Minitelisation of the internet’. Journal of Cyber Policy, 2(1), 96–122. doi:10.1080/23738871.2016.1238954

 

Pieranti, Octávio & Festner, Susana (2008). ‘Estudo comparativo de regras de Must Carry na TV por assinatura’. Brasília: ANATEL.

 

Valente, Jonas (2013). ‘Regulação democrática dos meios de comunicação’. São Paulo: Fundação Perseu Abramo.

 

Ballard, Tony (2011). ‘Public service broadcasting: Net neutrality and the ‘must carry’ rules’ Available at: <https://www.harbottle.com/title-215/>

 

Perry, Audrey (n.d.). ‘Must-Carry rules’. Available at: <https://www.mtsu.edu/first-amendment/article/1000/must-carry-rules>

 

Patrick, Andrew, & Scharphorn, Eric (2015). ‘Network Neutrality and the First Amendment’. Mich. Telecomm. & Tech. L. Rev., 22, 93. Available at: <https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1209&context=mttlr>

 

 

57. Non-discrimination

  1. The term “non-discrimination” has a very specific definition and understanding under international law; however, it is also used in a different sense in the context of online platforms. This entry first looks at the agreed international definitions of the term, as found in relevant legal instruments, before turning to other uses.
  1. “Non-discrimination” under international (human rights) law
  1. The principle of non-discrimination - and, specifically, the prohibition of discrimination - has been translated into a number of international human rights instruments, most notably the International Covenant on Civil and Political Rights (ICCPR). While the ICCPR and other international human rights instruments (such as the International Covenant on Economic, Social and Cultural Rights) often prohibit discrimination in the enjoyment of the rights that they set out, the ICCPR also provides a standalone prohibition of discrimination through Article 26 which provides that: “the law shall prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status”.
  1. While “discrimination” is not defined in Article 26 itself, the UN Human Rights Committee (HRC) has provided guidance on the scope of the term in its General Comment No. 18 (UN, 1989). There, the HRC states that “discrimination” should be understood as including “any distinction, exclusion, restriction or preference which is based on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, and which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise by all persons, on an equal footing, of all rights and freedoms”. In setting out this definition, the HRC drew upon other international human rights instruments which do define the term, such as the Convention on the Elimination of All Forms of Discrimination against Women and the International Convention on the Elimination of All Forms of Racial Discrimination. The HRC also clarified that Article 26 of the ICCPR prohibits discrimination “in law or in fact” and “in any field regulated and protected by public authorities”.
  1. In the context of online platforms, the right to non-discrimination could be breached, for example, as a result of content moderation policies which themselves treat different groups differently, or in their enforcement disproportionately affect users with a particular characteristic; or the use of algorithms which are biased on the basis of a user’s personal characteristics, leading to discriminatory outcomes.
  1. Other uses of the term
  1. Outside of international law, the term “non-discrimination” is most commonly used in the context of online platforms to refer to a legal obligation on an operator not to discriminate in the conditions or quality of the services and information that it provides. The European Commission, for example, defines the term “non-discrimination” as follows: “an obligation of non-discrimination ensures that an operator applies equivalent conditions in equivalent circumstances to other undertakings providing equivalent services, and provides services and information to others under the same conditions and of the same quality as it provides for its own services, or those of its subsidiaries or partners.” (European Commission). Examples of discrimination in this context would largely be for commercial reasons, and would include differential pricing arrangements or different treatment of traffic.

 

References

 

UN Human Rights Committee, ‘General Comment No. 18: Non-discrimination’, [1989].

 

European Commission, ‘Shaping Europe’s digital future, Glossary’. Available at: https://ec.europa.eu/digital-single-market/en/glossary

 

 

 

58. Notice

  1. Notice is a term that refers to the disclosure to a particular stakeholder of a relevant fact or situation. To be deemed a valid notice, such disclosure should contain a sufficient level of detail for the recipient to understand the information revealed and investigate the facts or situation. There are also certain formalities or general requirements to be complied with when it comes to what are deemed qualified notices for the purpose of specific legal processes. For instance, many contracts specify that a notice of termination must be sent in writing and within a specified period. Similarly, when it comes to the information that must be provided by data controllers to data subjects, article 12(1) of the GDPR requires it to be provided in a concise, transparent, intelligible and easily accessible form, using clear and plain language – especially when directed to a child.
  1. One important type of notice is the one regarding changes to a previously agreed contract. Here, cases involving the credit card and telecommunications industries provide helpful insight as to how courts evaluate modifications of standard terms, also known as “contracts of adhesion”, as there is no individual negotiation. For example, in Badie v. Bank of America, 67 Cal. App. 4th 779 (Cal. Ct. App. 1998), where a bank attempted to modify credit card terms by adding an arbitration procedure where one was not already part of the contract terms, a US court found that the offeree did not receive proper notice of the modification because the proposed change was printed on an insert with the monthly bill and nothing otherwise called the change to anyone’s attention. Other companies have found out the hard way that simply providing a complete set of the proposed revised terms, without any indication as to which terms had been changed, was not sufficient notice (see e.g. DIRECTV, Inc. v. Mattingly, 829 A.2d 626 (Md. 2003)). On the other hand, a company that prominently announced modified terms with its monthly bill, and provided an Internet address and telephone number where the customer could access the revised terms, was found to have successfully put the customer on notice of the changed terms. Ozormoor v. T-Mobile USA, Inc., 2008 U.S. Dist. LEXIS 58725 (E.D. Mich. June 19, 2008).
  1. The appearance and placement of the notice is also important. One company was unable to enforce a notice of a contract modification that was printed on its invoice where it was the fifth item on the second page of the invoice, in ordinary type. Manasher v. NECC Telecom, No. 06-10749, 2007 U.S. Dist. LEXIS 68795 (E.D. Mich. Sept. 18, 2007). On the other hand, in Briceno v. Sprint Spectrum, L.P., 911 So. 2d 176 (Fla. Dist. Ct. App. 2005), Sprint’s notice was enforceable where it printed “Important Notice Regarding Your PCS Service From Sprint” in bold letters immediately below the amount due on the invoice. The notice also prominently discussed the changes in the contract terms and provided both a telephone number and a website where the revised terms could be found.
  1. US Courts also have looked at whether the modification has been accepted by the offeree. For example, in Klocek v. Gateway, Inc., 104 F. Supp. 2d 1332 (D. Kan. 2000), the purchaser of a Gateway computer did not see Gateway’s standard terms (and was not provided notice about the terms) until the computer was shipped to the purchaser and she opened the box. Gateway’s standard terms contained a number of provisions, including an arbitration clause. When Gateway moved to dismiss a class action lawsuit in light of the Federal Arbitration Act, the court refused to enforce the arbitration clause. The court found that the plaintiff offered to purchase the computer and Gateway accepted. Gateway’s standard terms then became either an expression of acceptance or a confirmation of the offer under section 2-207 of the U.C.C. However, the court found that the rest of the provisions in Gateway’s standard terms, including the arbitration clause, were not part of the original purchase agreement and were not enforceable.
  1. More recently, in Knutson v. Sirius XM Radio, 771 F.3d 559 (9th Cir. 2014), the terms regarding an automobile’s trial subscription to a satellite radio service were sent to the owner a month after the purchase of the automobile in an envelope marked “Welcome Kit.” The Ninth Circuit refused to enforce the additional terms because there was no mutual assent to the terms. The Ninth Circuit found no evidence that the purchaser of the automobile knew that he had purchased anything from Sirius or was entering into a relationship with Sirius, let alone had agreed to the terms (which contained an arbitration clause). Therefore, continued use of the service by the purchaser did not manifest assent to the terms.
  1. In the online context, courts that have addressed modifications generally have respected these traditional contract principles and have held that attempted modifications are unenforceable when the person to whom the modification is offered has no reason to know of the proposed changes to the agreement. As a result, online contract modifications tend to fail for lack of adequate notice.
  1. In evaluating online contract modification, courts have paid close attention to the differences between electronic and face-to-face or paper communications. This is a refreshing development, given that this is not always the case in opinions addressing online contract formation in the first instance. The opinion in Campbell v. General Dynamics, 407 F.3d 546 (1st Cir. 2005), a dispute involving an attempted modification of an employment handbook, provides an example of judicial awareness that electronic messages can get lost in the electronic shuffle. In Campbell, an employer attempted to modify an employment handbook by sending a mass company-wide e-mail message containing hyperlinks to the proposed changes to its employees. One of the proposed modifications was a binding arbitration clause. In holding that the modification was not effective, the court focused on the expectations of the employee receiving the modification offer. Given that the mass e-mail message did nothing to communicate its importance and that employment changes at General Dynamics were usually communicated in person by means of a signed writing, the court held that the attempted modification was not binding.
  1. The communicative value of online interaction similarly influenced the Ninth Circuit in holding that the attempted modification in Douglas v. U.S. District Court, 495 F.3d 1062 (9th Cir. 2007), was ineffective. The dispute in that case arose when a phone service provider changed its online terms to add new service charges, a new arbitration clause, and a class action waiver. It did so without notifying its customers of the changes and simply posted the changes to its website. The plaintiff had agreed to automatic billing and therefore had little reason to visit the website on a regular basis. After the district court found the arbitration clause enforceable, the Ninth Circuit reversed, finding that the subscriber had not been given notice of the changes. The Ninth Circuit also felt strongly that parties to a contract have no obligation to check the terms on a periodic basis to learn whether they have been changed by the other side. This fact, plus the fact that the plaintiff would not have known where to find the changes to the terms of use even if he had visited the website, led the court to hold that the modifications were unenforceable.
  1. The court in Rodman v. Safeway, Inc., 2015 U.S. Dist. LEXIS 17523 (N.D. Cal. 2015), similarly refused to impose a duty on website users to continually check for changes to online terms. Rodman was another case in which the author of online terms of use posted changes to those terms on its website but made no attempt to notify its customers of the changes. The defendant attempted to justify its actions by highlighting a clause in its original terms of use that reserved the right to amend the terms at any time and imposed a duty on the customer to keep up with changes to the terms. Like the court in Douglas, the court in Rodman stressed that it is unreasonable to expect a customer to check a website regularly for changes to online terms. Moreover, the court, applying traditional contract doctrine, noted that a customer cannot assent to future changes that they have no reason to know will come.
  1. In the context of platforms, the term notice is typically (but not only) used concerning the alleged illegality of a particular type of content or behavior, following which the platform may remove or disable access in accordance with a notice and takedown, a notice and notice procedure or some other standardized process. These notices can contain allegations of either a violation of existing law or a violation of the platform’s terms of service. A civil society effort led by the Electronic Frontier Foundation (EFF) in 2014 established a number of minimum requirements for such notices, which they call “content restriction requests”, as part of the Manila Principles on Intermediary Liability. In particular, the Principles stipulate that a content restriction request pertaining to unlawful content must, at a minimum, contain the following:
    1. The legal basis for the assertion that the content is unlawful.
    2. The Internet identifier and description of the allegedly unlawful content.
    3. The consideration provided to limitations, exceptions, and defences available to the user content provider.
    4. Contact details of the issuing party or their agent, unless this is prohibited by law.
    5. Evidence sufficient to document legal standing to issue the request.
    6. A declaration of good faith that the information provided is accurate.
  1. By contrast, a content restriction request pertaining to an intermediary’s content restriction policies must, at a minimum, contain the following (a minimal sketch of both request types follows the list below):
  1. The reasons why the content at issue is in breach of the intermediary’s content restriction policies.
  2. The Internet identifier and description of the alleged violation of the content restriction policies.
  3. Contact details of the issuing party or their agent, unless this is prohibited by law.
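
For illustration only, the two types of request could be captured as structured records along the following lines. This is a minimal sketch in Python, and all field names are hypothetical rather than taken from the Manila Principles text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LegalRestrictionRequest:
    """Request alleging unlawful content (minimum fields listed above)."""
    legal_basis: str                 # legal basis for the assertion of unlawfulness
    content_identifier: str          # Internet identifier (e.g. URL) of the content
    content_description: str         # description of the allegedly unlawful content
    limitations_considered: str      # consideration of limitations, exceptions and defences
    standing_evidence: str           # evidence sufficient to document legal standing
    good_faith_declaration: bool     # declaration that the information provided is accurate
    issuer_contact: Optional[str] = None  # contact details, unless prohibited by law

@dataclass
class PolicyRestrictionRequest:
    """Request alleging a breach of the intermediary's own content policies."""
    policy_reasons: str              # why the content breaches the restriction policies
    content_identifier: str          # Internet identifier and description of the violation
    issuer_contact: Optional[str] = None  # contact details, unless prohibited by law
```

Structuring requests in this way would make it straightforward for an intermediary to reject notices that omit a mandatory element before any content is touched.
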
  1. Finally, notices in the context of online platforms may refer also to the disclosures made by platforms to users about the content that is prohibited, and the content from each specific user that is removed. This is a core principle of the Santa Clara Principles on transparency and accountability in content moderation, another civil society movement led by the EFF and a small group of organizations and advocates, establishing guidelines for content moderation that have been implemented by Reddit and endorsed by Apple, Github, Twitter, YouTube and other platforms (EFF, 2020).
  1. The Principles require companies to provide detailed guidance to the community about what content is prohibited, including examples of permissible and impermissible content and the guidelines used by reviewers, and an explanation of how automated detection is used across each category of content. They also require minimum information to be included in the notices sent to users about why their post has been removed or their account has been suspended (an illustrative example of such a notice follows the list below):
  • URL, content excerpt, and/or other information sufficient to allow identification of the content removed.
  • The specific clause of the guidelines that the content was found to violate.
  • How the content was detected and removed (flagged by other users, governments, trusted flaggers, automated detection, or external legal or other complaint). The identity of individual flaggers should generally not be revealed, however, content flagged by government should be identified as such, unless prohibited by law.
  • Explanation of the process through which the user can appeal the decision.
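
As an illustration only (the structure and field names below are hypothetical, not prescribed by the Santa Clara Principles), a notice to a user covering these minimum elements could be recorded as a simple structured object:

```python
# Hypothetical notice sent to a user whose post was removed;
# every value below is invented for illustration.
removal_notice = {
    "content_url": "https://example.com/posts/12345",
    "content_excerpt": "first few lines of the removed post ...",
    "violated_clause": "Community Guidelines, section 4.2 (harassment)",
    "detection_method": "user_flag",   # user_flag, trusted_flagger, government, automated, external_complaint
    "flagged_by_government": False,    # must be disclosed as such unless prohibited by law
    "appeal_url": "https://example.com/appeals/12345",
}
```
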
  1. In addition to these requirements as to the form of notices, two important elements of the Santa Clara Principles concern the records of such notices: their availability in durable form accessible even if a user’s account is suspended or terminated, and the presentation to users who flag content of a log of past content moderation requests they have submitted, along with the corresponding outcomes of the moderation processes. However, at the same time, it has been considered that the Principles fail to specifically address the peculiarities of certain practices, calling for an update (EFF, 2020). Some of the drafters of the Santa Clara Principles (Suzor, West, Quodling, and York, 2019) noted that the implementation of the principles is unsatisfactory in certain respects, in particular due to (1) the prevalence of confusion from users about the exact content or behavior that triggered a sanction from the platforms; (2) the systemic failure on the part of platforms to provide good reasons to explain the decisions they reach; (3) the failure to inform users of how (and especially by whom) the flagging of specific content for review by the platform’s moderation system was triggered; and (4) the confusion about who exactly makes content moderation decisions, and their possible biases. Accordingly, they call for the following disclosures in notices:
  • More general demographic information about the makeup of their moderation teams, with particular regard to age, nationality, race, and gender.
  • Detailed information about the training and guidelines associated with the moderation process, including what processes exist to support moderators to make consistent and well-informed decisions in the context of potential ambiguity.
  • The differential social impact of the inputs, outputs and algorithms of these systems, to understand bias in moderation decisions. Analysis of this type will require large-scale access to data on individual moderation decisions as well as deep qualitative analyses of the automated and human processes that platforms deploy internally.

 

References

 

American Bar Association, ‘Online Contracts: We May Modify These Terms at Any Time, Right?’ [20 May 2016]. Available at: https://www.americanbar.org/groups/business_law/publications/blt/2016/05/07_moringiello/

 

Manila Principles on Intermediary Liability. Available at: https://www.manilaprinciples.org/principles

 

EFF, ‘EFF Seeks Public Comment About Expanding and Improving Santa Clara Principles’ [14 April 2020]. Available at: https://www.eff.org/press/releases/eff-seeks-public-comment-about-expanding-and-improving-santa-clara-principles

 

Nicolas Suzor, Sarah Myers West, Andrew Quodling, and Jillian York, ‘What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation’ [2019] International Journal of Communication 13, 1526, 1543.

59. Notice-and-notice

  1. Notice-and-notice is a process for dealing with potentially infringing content, a regime that became more widely known after being established in Canada's Copyright Act. It is an alternative to the notice-and-takedown model and works as follows: after a decision by the platform to remove content following a private notification, the user is given the option to counter-notify and personally take responsibility for maintaining the content online, in which case the platform is exempt from liability (de Souza & Schirru, 2016). According to Valente (2019), it is a model that distributes responsibilities in order to try to accommodate both users and the (copy)right holders, who want a faster notification mechanism and the possibility of removing infringing content.
  1. In a notice-and-notice system, providers forward to users the notifications made by right holders regarding alleged violations of rights practiced by these users, without summary removal of the content (as in the case of notice-and-takedown). Proponents of the notice-and-notice system, like the Canada model, argue that this process would eliminate flaws in the notice-and-takedown system (Geist, 2011).
  1. ARTICLE 19 (2013) argues that this system would have good results when dealing with civil complaints regarding copyright, defamation, privacy, adult content and bullying (as opposed to harassment or threats of violence). In their view, this system, in the worst-case scenario, would give content providers the opportunity to respond to allegations of violations of the law before any action is taken; it would contribute to reducing the number of abusive requests, as it requires a minimum of information about the allegations; and it would provide an intermediary system for resolving disputes before matters reach the courts.

 

References

 

Article 19 (2013). ‘Internet intermediaries: Dilemma of Liability’. Available at: <https://www.article19.org/wp-content/uploads/2018/02/Intermediaries_ENGLISH.pdf>

 

de Souza, Allan, & Schirru, Luca (2016). ‘Os direitos autorais no marco civil da internet’ Liinc em Revista, 12(1). Available at: <http://revista.ibict.br/liinc/article/view/3712/3132>

 

Geist, Michael (2011). ‘Rogers Provides New Evidence on Effectiveness of Notice-and-Notice System’ Available at: <http://www.michaelgeist.ca/content/view/5703/125/>

 

Government of Canada (2017). ‘Notice and Notice Regime’. Available at: <https://www.canada.ca/en/news/archive/2014/06/notice-notice-regime.html>

 

Government of Canada (2018). ‘Notice and Notice Regime’. Available at: https://www.ic.gc.ca/eic/site/Oca-bc.nsf/eng/ca02920.html

 

Valente, Mariana (2019). ‘Direito autoral e plataformas de Internet: um assunto em aberto’. Available at: <https://www.internetlab.org.br/pt/especial/direito-autoral-e-plataformas-de-internet-um-assunto-em-aberto/>

60. Notice-and-takedown

  1. Notice-and-takedown is a process for dealing with potentially infringing content, based on the practice of sending an extrajudicial notification to the content provider and the immediate removal of the allegedly infringing content by the provider, without the need for a prior court order or the possibility of counter-notification before or after the content removal. Under such a mechanism, the provider may be liable if it does not remove the content immediately.
  1. This mechanism has become predominant on the internet thanks to the Digital Millennium Copyright Act of 1998 (DMCA), a U.S. law that limits the liability of online service providers for copyright infringement caused by their users if they promptly remove the offending content after being notified of an alleged infringement by copyright owners or their representatives. The enactment of this legislation immediately prompted similar adoptions in other countries, but even nations that did not legislate ended up being governed in practice by the DMCA, as the major platforms apply this legislation globally, disregarding local copyright laws. Online platforms frequently use the DMCA to establish a “three strikes” policy, a graduated response system that consists of three warnings to the user about posting copyrighted material, with a serious penalty after the last warning, such as account deletion.
  1. Explaining the relationship between this removal regime and the DMCA is relevant because notice-and-takedown mechanisms have serious problems; for instance, they:
  1. pose serious risks to freedom of expression online, encouraging the arbitrary removal of content;
  2. allow short-term censorship of material whose timing is crucial, such as during election periods.
  1. National legislation lays out several situations in which works protected by copyright can be used without the need for authorization by the copyright owner. However, it is not uncommon for copyright infringement to be the argument used for purposes of censorship, even when the use of the work would be potentially lawful - which is not always easy to determine.
  1. On big digital platforms, the entity responsible for assessing whether or not posted content complies with copyright rules is often an automated tool (e.g. YouTube’s Content ID). This mechanism was created to analyze user-generated content in search of excerpts of copyrighted works. Record companies and major film studios send copies of their original works and the system compares numerous excerpts with what is being shared on the network to find illegal copies on the platforms (a simplified sketch of such matching appears at the end of this entry). The discussion about the limits of these automated tools came to light after several accounts had their content blocked or deleted on the platforms. The problem lies in the so-called false positives, that is, when the filter allows a copyright claim even under lawful conditions, such as criticism and parody.
  1. Besides that, copyright holders can also commit abuses in the private notification procedure, as documented by the EFF's Takedown Hall of Shame project - and, without judicial review, the platform has incentives to remove content when it receives the notification, so as not to take the risk of being held responsible if it decides not to remove content that may later be considered unlawful in a lawsuit. Users can also file a lawsuit if they understand that the removal has harmed their rights, but they have little incentive to do so, and are usually the most economically fragile party.
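
To make the idea of automated matching more concrete, the sketch below shows a toy upload filter that compares hashes of fixed-size chunks of an upload against a reference index supplied by rightsholders. It is only an illustration of the general technique: Content ID and similar systems are proprietary and rely on perceptual fingerprints that survive re-encoding and editing, whereas the exact hashes used here would miss even trivially altered copies.

```python
import hashlib

def fingerprints(samples: bytes, chunk_size: int) -> set[str]:
    """Split raw media bytes into fixed-size chunks and hash each one.
    Real systems use perceptual fingerprints rather than exact hashes."""
    return {
        hashlib.sha256(samples[i:i + chunk_size]).hexdigest()
        for i in range(0, len(samples), chunk_size)
    }

# Reference index built from works submitted by rightsholders (illustrative).
reference_index: dict[str, str] = {}  # fingerprint -> work identifier

def register_work(work_id: str, samples: bytes, chunk_size: int = 4096) -> None:
    for fp in fingerprints(samples, chunk_size):
        reference_index[fp] = work_id

def check_upload(samples: bytes, chunk_size: int = 4096, threshold: int = 3) -> list[str]:
    """Return identifiers of reference works matched by at least
    `threshold` chunks of the upload; an empty list means no claim."""
    hits: dict[str, int] = {}
    for fp in fingerprints(samples, chunk_size):
        if fp in reference_index:
            work = reference_index[fp]
            hits[work] = hits.get(work, 0) + 1
    return [work for work, count in hits.items() if count >= threshold]
```

The sketch also illustrates why false positives arise: a lawful parody or critique that reuses excerpts of the original work still matches the reference index, which is why human review of contested claims remains important.
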

 

References

 

Article 19 (2013). ‘Internet intermediaries: Dilemma of Liability’. Available at: <https://www.article19.org/wp-content/uploads/2018/02/Intermediaries_ENGLISH.pdf>

 

Abranet (2011). ‘Contribuição para Aperfeiçoamento do Anteprojeto da Lei de Direitos Autorais’. Available at: <http://www2.cultura.gov.br/site/wp-content/uploads/2011/08/Abranet-Associa%C3%A7%C3%A3o-Brasileira-de-Internet.pdf>

 

de Souza, Allan, & Schirru, Luca (2016). ‘Os direitos autorais no marco civil da internet’ Liinc em Revista, 12(1). Available at: <http://revista.ibict.br/liinc/article/view/3712/3132>

 

EFF. ‘Takedown Hall of Shame’ Available at: <https://www.eff.org/pt-br/takedowns>

 

Madigan, Kevin (2016). ‘Despite what you hear, Notice and Takedown is Failing Creators and Copyright Owners’. Available at: <https://cpip.gmu.edu/2016/08/24/despite-what-you-hear-notice-and-takedown-is-failing-creators-and-copyright-owners/>

 

Graduated Response. ‘About Graduated Response’. Available at: <http://graduatedresponse.org/new/?page_id=5>

 

Valente, Mariana (2019). ‘Direito autoral e plataformas de Internet: um assunto em aberto’. Available at: <https://www.internetlab.org.br/pt/especial/direito-autoral-e-plataformas-de-internet-um-assunto-em-aberto/>

61. Notice-and-staydown

  1. Notice and staydown (NSD) refers to a system of intermediary liability where, following a qualified notice, the intermediary is required not only to remove or disable access to allegedly infringing content, but also to prevent further infringements by restricting the upload on the platform of the same or equivalent content. There is some ambiguity as to whether this model would require the prevention of uploads only of identical content or whether it would also extend to content with minor alterations (for instance a shorter version of a previously infringing video). The latter interpretation is favored at least in Europe, after the recent ruling of the European Court of Justice in Case C-18/18, Eva Glawischnig-Piesczek v Facebook, where the Court ruled that the prohibition of general monitoring obligations included in art. 15 of the E-commerce Directive “must be interpreted as meaning that it does not preclude a court of a Member State from:
  • ordering a host provider to remove information which it stores, the content of which is identical to the content of information which was previously declared to be unlawful, or to block access to that information, irrespective of who requested the storage of that information;
  • ordering a host provider to remove information which it stores, the content of which is equivalent to the content of information which was previously declared to be unlawful, or to block access to that information, provided that the monitoring of and search for the information concerned by such an injunction are limited to information conveying a message the content of which remains essentially unchanged compared with the content which gave rise to the finding of illegality and containing the elements specified in the injunction, and provided that the differences in the wording of that equivalent content, compared with the wording characterising the information which was previously declared to be illegal, are not such as to require the host provider to carry out an independent assessment of that content” (emphasis added).
  1. Even prior to this ruling, however, NSD was already present at least to some degree in Germany, where courts had established, under the doctrine of “Kern”, a duty of care for hosts to review all subsequent infringing acts of a similar nature that are easily recognizable. This has been used to impose, for instance, the following (Husovec, 2018): (1) employ word-filtering technology for the name of the notified work, including on existing uploads, (2) use better than basic fingerprinting technology that only detects identical files, such as MD5, as a supplementary tool, (3) manually check external websites for the infringing links associated with the notified name of a work on services like Google, Facebook and Twitter or (4) use web-crawlers to detect other links on their own service.
  1. The term NSD originates from a heated discussion around the scope of the safe harbor for hosting intermediaries, which depends upon knowledge of infringing activity. As discussed in the entries on red flag knowledge or willful blindness, knowledge is occasionally found by courts even outside the qualified notice process, in the presence of facts that are sufficient to impute a culpable intention on the part of the intermediaries. However, although the implications of these doctrines might be somewhat similar to NSD, the obligations are fundamentally different in nature: the one imposed by the NSD arises automatically with the reception of a valid notification, rather than following an inquiry into what is reasonable to have known considering the circumstances. This also means that NSD requires platforms to filter all uploads, in order to detect content previously identified as infringing (Kuczerawy, 2020), which presumably will be done in automated form, due to the sheer volume of uploads. Uploads on YouTube, for instance, amount to more than 500 hours of video per minute (as of May 2019), which is strong evidence of the need for YouTube to rely on automated content recognition technologies like Content ID (deployed since 2007). In the view of the ECJ, automated search tools and technologies allow providers to obtain the result without undertaking an independent assessment, in particular to the extent that the notice contains the name of the person concerned by the infringement determined previously, the circumstances in which that infringement was determined and equivalent content to that which was declared to be illegal. However, one could actually question this conclusion: if a sentence or word can be considered defamatory in one context, it is not necessarily so in a different context, therefore warranting human determination at some level. This carveout from the safe harbor is likely to be even more problematic if extended to infringements of copyright or trademark law, given the challenges involved in ensuring that a machine recognizes the existence of licenses or valid defences by the alleged infringer.
  1. In addition to the free speech concerns with the prior restraints imposed through NSD, there is substantial criticism of the economic effects of a NSD regime, in particular as it significantly raises costs for hosting platforms. While it may be convenient or even necessary for YouTube, Facebook or other large platforms to use content recognition technologies, it can be problematic to impose these requirements on smaller players, who might need in turn to obtain a license from the bigger player to fulfil their obligation. This was one of the major concerns with the proposal for a Directive on Copyright in the Digital Single Market, which required “information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users” to take measures, such as the use of effective content recognition technologies, to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. The final version of the Directive changed this by requiring (a) the use of best efforts to obtain licensing agreements; (b) best efforts, in accordance with high industry standards of professional diligence, to prevent the availability of the works in the sense explained above; and (c) the expeditious removal or disabling of content identified by notices and best efforts to prevent their future uploads in accordance with point (b). Furthermore, it removed the specific reference to content recognition technologies, while at the same time specifying that such obligations apply only to the extent that rightholders have provided the service providers with the relevant and necessary information (in the context of those technologies, this includes in primis the reference files to enable the content recognition). Even more importantly, the new version of the Directive provides guiding principles on the NSD regime, by: (1) explicitly requiring Member States implementing such a regime to preserve copyright exceptions and not impose general monitoring; (2) creating a three-tiered regime, where full NSD is only required for big and established players, while those who have been providing services in the EU for less than three years and which have an annual turnover below EUR 10 million would only need to comply with letter (a) above and to act upon notice for the removal of specific content, and those among them whose average number of monthly unique visitors exceeds 5 million would additionally have to demonstrate best efforts to prevent further uploads in the sense explained under letter (c); and (3) specifying that, to determine the scope of the obligations imposed under this regime, account must be taken of (a) the type, the audience and the size of the service and the type of works or other subject matter uploaded by the users of the service; and (b) the availability of suitable and effective means and their cost for service providers.

 

References

 

Aleksandra Kuczerawy, From ‘notice and take down’ to ‘notice and stay down’: risks and safeguards for freedom of expression, in Giancarlo Frosio (ed), The Oxford Handbook of Intermediary Liability Online (OUP, 2020)

 

Martin Husovec, The Promises of Algorithmic Copyright Enforcement: Takedown or Staydown? Which is Superior? And Why? 42 COLUM. J.L. & ARTS 53 (2018)

 

Christina Angelopoulos, Harmonizing Intermediary Copyright Liability in the EU: A Summary, in Giancarlo Frosio (ed), The Oxford Handbook of Intermediary Liability Online (OUP, 2020)

 

Giancarlo F. Frosio, Reforming Intermediary Liability in the Platform Economy: A European Digital Single Market Strategy, 112 Nw. U. L. Rev 19 (2017)

 

 

62. Nudging

1. Nudging refers to the use of choice architecture (the nudge) to influence the behavior of an individual or group of individuals (nudgees) without depriving them of the ability to choose a different course of action. The term was coined by Richard Thaler and Cass Sunstein with their book ‘Nudge: Improving Decisions About Health, Wealth, and Happiness’, published in 2008, which offered a first conceptualization of a theory of regulation based on positive reinforcement and indirect suggestions as ways to influence the behavior and decision making of groups or individuals. The book was very impactful, leading to the rise of nudging regulation and even to the creation in 2010 of a “nudge” unit (also called the Behavioural Insights Team) within the UK government in order to generate and apply behavioural insights to inform policy, improve public services, and deliver positive results for people and communities.

2. Thaler and Sunstein propose to formulate public policies in a way that addresses cognitive biases and helps improve decisions through what they call “choice architecture”, i.e. the set of constraints surrounding individuals’ choices. For instance, in one of their early papers, they map the letters of “NUDGES” onto six different types of design interventions:

    • iNcentives: leveraging the choosers’ incentives can be a powerful mechanism to direct people. Think, for instance, about providing more salience to information that is relevant to making a decision but that is otherwise underestimated, such as emphasizing the opportunity costs of buying a car.

    • Understand mappings: this evokes a similar concept to the above, but referring to specific situations where the information is complex and therefore it is helpful to provide to choosers a map of possible options (for instance, in choosing the best possible cure for a disease).
    • Defaults: this is probably the most commonly cited type of nudging, and refers to pre- selecting an option for the chooser while maintaining the possibility to reverse that choice.
    • Give feedback: sometimes simply informing whether something is going wrong (or well) helps people redirect.
    • Expect error: designing the choice architecture in a way that minimizes errors is also a nudge. The reason why this category is not subsumed within the notion of defaults is not apparent.
    • Structure complex choices: sometimes choices are difficult to make, if the choice set is too large. For this reason, giving choosers the ability to structure their choice process (for instance through a filtering system that helps identifying useful ranges) can be a powerful nudge.

3. In a separate paper, Sunstein lists the different tools that can be used to obtain nudging effects, which appears to be an expansion (and to some extent a correction) of the previous list:

    • Default Rules
    • Simplification
    • Use of social norms (e.g., illustrating examples of expected behaviour)
    • Increase in ease and convenience (e.g. making low-cost option for healthy food visible)
    • Disclosure
    • Warnings (graphic or otherwise)
    • Reminders
    • Precommitment strategies (by which people commit to a certain course of action)
    • Eliciting implementation intentions (e.g. “do you plan to vote?”)
    • Informing people of the nature and consequences of their own past choices (“smart disclosure”).

Although the above is a repetition of largely rehearsed concepts, it is important to revisit these for two reasons. First, a constant theme running through these lists is the formulation of design choices to “de-bias” human decision-makers. This is somewhat different from the work of the fast & frugal school of behavioral economics, which endeavoured to help decision-makers by offering the best heuristics; and there is no reason in principle why heuristics could not be used in nudging to reach desired outcomes, for example by framing options in a more visible and appealing fashion. However, the key point of criticism is that nudging tools may not always be used in a way that de-biases individuals: in fact, they can be used in a way that nurtures known biases and on that basis elicits choices that are not necessarily in the best interest of individuals. Second, the entire discussion by Thaler and Sunstein refers either explicitly or implicitly to nudging as a choice of public regulation, where the nudger can be trusted (or is at least assumed to be trusted) to pursue the general interest. But insofar as nudging is done by private entities that are not subject to the transparency and accountability safeguards that apply in the governmental context, a discussion about its boundaries and about ways in which compliance can be scrutinized becomes paramount.

4. It can also be noted that the definition provided by Thaler and Sunstein in their writings on the topic is not always consistent: in their book, they define nudging as “any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentive”. In doing so, they rule out the admissibility in this category of traditional regulatory tools such as bans, fines, taxes or other economic incentives (or disincentives). However, the line between a nudge and some of those categories is blurred, as the alternative course of action in all those situations remains available to the nudgee (in some instances, at the cost of violating the law) and the authors explicitly admit the possibility that nudges moderately alter one’s economic incentives. They also argue that nudges are omnipresent in society (as every design choice has potential effects on individuals’ behavior), and therefore the anti-nudging position is a “literal non-starter” - because at least deliberate nudges allow us to appreciate their rationale and operation. However, it is not clear that nudges can always be transparent and intelligible, and serendipity may be a value worth protecting, to let people determine their own path through random, or at least non-deliberate, nudges. Finally, Thaler and Sunstein only mention examples (such as the GPS, the retirement plan, the narrowing road design) where choice architects design, construct, or organize context without changing the original choice sets or fiddling with incentives. Yet it is clear that nudges will often have substantial impact on the range of choices or the incentives of the choosers, and it is not apparent how the nudger can abstain from the latter scenario. It is also not clear why Thaler and Sunstein separate economic incentives from other forms of incentives, including the prospect of pain and penalties, as that would more accurately incorporate the breadth of the endeavor of behavioural economics.

5. There is extensive philosophical discussion about the differences between nudging and manipulation. This discussion has been especially pronounced after the realization that the online world introduces a new type of nudge, one based on algorithmic real-time personalization and reconfiguration of choice architectures based on large aggregates of personal data: the so-called “hyper-nudging” (Yeung, 2016). In this context, where the transparency of nudges is hindered by their personalization, the risk of manipulative nudges, and hence the need for accountability, increases.

6. There are a range of definitions that can be used to identify manipulation, for instance:

    • an intentional act that successfully influences a person to belief or behavior by causing changes in the mental processes other than those involved in understanding (Faden & Beauchamp, 1986)
    • a kind of influence that bypasses or subverts the target’s rational capabilities, in a way that treats its objects as “tools and fools” (Wilkinson, 2013)
    • directly influencing someone’s beliefs, desires or emotions, such that she falls short of ideals for belief, desire or emotion in ways typically not in her self-interest or likely not in her self-interest in the present context (Barnhill, 2014)
    • A statement or action that does not sufficiently engage or appeal to people’s capacity for reflective and deliberative choice (Sunstein, 2015)
    • Non-rational influence (Noggle, 1996)
    • Pressure, but not irresistible pressure amounting to coercion (Raz 1985; Noggle 1996)
    • Trickery to induce behavior (Noggle, 1996)
    • Hidden influence: intentionally and covertly influencing decision-making, by targeting and exploiting one’s decision-making vulnerabilities (Susser et al 2019)

7. We can imagine clear cases of manipulation (subliminal advertising), cases that clearly fall outside of the category (for example, a warning about deer crossings in a remote area), and cases that can be taken as borderline (a vivid presentation about the advantages of a particular mortgage, or a redesign of a website to attract customers to the most expensive products).

8. In order to distinguish nudging from manipulation, various authors propose criteria to set limits on the acceptability of nudges. For instance, Sunstein requires them to be de-biasing (market failure correcting), educative and non-exploitative (Sunstein 2015).

9. Thaler, in turn (Thaler, 2015), uses the following criteria to distinguish acceptable nudging (or “nudging for good”):

    • First, the nudge is transparent and not misleading.
    • Second, it is as easy as possible to opt out.
    • Third, the nudge must increase the welfare of those being nudged.

10. Baldwin focuses on the proportionality of the nudge to the scale of the problem, considering evidence of effectiveness and the moral considerations at stake (Baldwin 2014). He then distinguishes between simple nudges that only engage System 1 thinking (1st degree nudges), more intrusive nudges that exploit behavioral or volitional limitations so as to bias a decision in the desired direction (2nd degree nudges), and yet more intrusive nudges (3rd degree) where there is no ability to appreciate the influence.

11. He concludes that nudges will have different effectiveness depending on who the targets are, where the following characteristics are relevant: whether the individual’s objective aligns with that of the nudger, and whether their capacity to absorb the nudging information is high or low.

 

References

 

Ruth Faden & Tom Beauchamp, A History of Informed Consent (OUP, 1986)

Karen Yeung, ‘“Hypernudge”: Big Data as a Mode of Regulation by Design’, Information, Communication & Society (2016) 1, 19

Anne Barnhill, What is Manipulation?, in MANIPULATION: THEORY AND PRACTICE 51, 65–72 (Christian Coons & Michael Weber eds., 2014)

 

Baldwin, Robert (2014) From regulation to behaviour change: giving nudge the third degree. The Modern Law Review, 77 (6). pp. 831-857

 

Pelle Guldborg Hansen and Andreas Maaløe Jespersen, ‘Nudge and the Manipulation of Choice A Framework for the Responsible Use of the Nudge Approach to Behaviour Change in Public Policy’ European Journal of Risk Regulation 1 (2013), p. 10

 

Cass Sunstein, The Ethics of Nudging, 32 Yale J. on Reg. (2015). Available at: http://digitalcommons.law.yale.edu/yjreg/vol32/iss2/6

 

T. M. Wilkinson, Nudging and Manipulation, 61 POL. STUDIES 341, 342 (2013).

 

Richard Thaler, ‘The Power of Nudges, for Good and Bad’ The New York Times (New York, 31 October 2015). Available at: https://www.nytimes.com/2015/11/01/upshot/the-power-of-nudges-for-good-and-bad.html

 

Richard Thaler and Cass Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press 2008)

Joseph Raz, The Morality of Freedom (Oxford: Oxford University Press, 1986).

 

Robert Noggle, “Manipulative Actions: A Conceptual and Moral Analysis,” American Philosophical Quarterly 33(1), 1996

 

Cass Sunstein, The Ethics of Influence: Government in the Age of Behavioral Science (Cambridge: Cambridge University Press, 2016)

63. Online Advertising

  1. Online advertising can be defined as the industry of Internet marketing/advertising, as well as the technologies (adtech) and practices characterizing this industry. Online advertising has two important features distinguishing it from traditional advertising: measurability and targetability (Goldfarb & Tucker, 2011). Among the main types of online advertising we can distinguish display advertising, search advertising and social media advertising (Goldfarb & Tucker, 2011).
  1. Display ads are mainly used on regular websites and entail the display of banners or audio-visual ads. Search advertising entails that ads are featured at the top of search results returned from a search engine query. Both types of advertising evolved to also include so-called ‘ad auctions’ and ‘real-time bidding’, which involve the buying and selling of advertising via programmatic instantaneous auctions (Information Commissioner’s Office, 2019; Google, 2020); a simplified auction sketch follows below. Social media advertising uses elements of display and search advertising, combined with native advertising models such as influencer marketing (see also the entries for ‘Content/Web monetization’ and ‘Influencers/content creators’).
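
As a toy illustration of how programmatic ad auctions commonly work, the sketch below ranks bids for a single ad slot by bid multiplied by estimated click-through rate and charges the winner a second-price amount; the figures and field names are invented and do not describe any particular advertising platform.

```python
# Simplified single-slot ad auction: rank ads by bid * estimated CTR,
# then charge the winner just enough to keep its rank above the runner-up.
# All values below are invented for illustration.
bids = [
    {"advertiser": "A", "bid": 2.00, "ctr": 0.01},
    {"advertiser": "B", "bid": 1.50, "ctr": 0.02},
    {"advertiser": "C", "bid": 3.00, "ctr": 0.005},
]

ranked = sorted(bids, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)
winner, runner_up = ranked[0], ranked[1]

# Second-price logic: the winner pays the minimum amount per click needed
# to outrank the runner-up, not its own bid.
price_per_click = runner_up["bid"] * runner_up["ctr"] / winner["ctr"]
print(winner["advertiser"], round(price_per_click, 2))  # prints: B 1.0
```
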

 

References

 

Avi Goldfarb & Catherine Tucker, ‘Online Advertising’, [2011] 81 Advances in Computers.

 

Information Commissioner’s Office, ‘Update report into adtech and real time bidding’ [20 June 2019]. Available at: <https://ico.org.uk/media/about-the-ico/documents/2615156/adtech-real-time-bidding-report-201906.pdf>.

 

Google, ‘How the Google Ads auction works’. Available at: <https://support.google.com/google-ads/answer/6366577?hl=en>.

 

 

64. Open Identity

  1. An online identity is a collection of personal information about a person, associated with credentials that allow the owner of the identity to control the information and to assert their identity towards other parties over the Internet.
  1. The identity may represent an actual person (real world identity), a fictitious person (pseudonymous identity) or an unknown set of one or more persons (anonymous identity).
  1. Frameworks for the management of online identities usually perform some or all of these functions:
  1. Authentication, i.e. the establishment and verification of credentials (passwords, biometric data etc.) to ensure that only the legitimate owner of the identity can use it;
  2. Authorization, i.e. the request and release of permission for an authenticated identity to access a specific resource or service;
  3. Signing, i.e. the creation of cryptographic attestations of a certain assertion by the owner of the identity;
  4. Information management, i.e. the entering, storing and controlled distribution of the personal information that the owner associates with the identity.
  1. An open identity is an online identity provided and managed through the use of open, federated standards that allow multiple identity providers to coexist, including the possibility for the identity owner to switch from one provider to another or to self-manage their identity without resorting to an external identity provider (this latter case is called self-sovereign identity).
  1. Currently, the most common identity frameworks are those provided by Internet platforms, especially by Google, Facebook and Apple. These systems are widely used for registration and login on online websites and services; while they are based on an open protocol (OpenID Connect), they are not open, as the user cannot choose a different provider; e.g. a Google identity can only be used within the Google ecosystem, and no other providers can supply identities for that ecosystem (a minimal sketch of an OpenID Connect login flow appears after this list).
  1. The European Union, through the eIDAS Regulation (EU Regulation, 2014), has established an identity framework that federates national identity systems and can be used for logging into online services, typically for real world identities and public administration websites. The openness of eIDAS implementations varies across European countries.
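
As a minimal sketch of how federated login typically works with OpenID Connect, the code below builds the authorization request URL to which a relying party (a website) redirects the user. The endpoint and client identifier are placeholders, and a real integration would also exchange the returned code for tokens and validate the ID token.

```python
from urllib.parse import urlencode

# Placeholder values: a real relying party obtains these when it
# registers with the identity provider of the user's choice.
ISSUER_AUTHORIZE_URL = "https://identity.example.org/authorize"
CLIENT_ID = "my-website-client-id"
REDIRECT_URI = "https://www.example.com/callback"

def build_login_url(state: str, nonce: str) -> str:
    """Build an OpenID Connect authorization-code request URL.
    In an open identity framework, the same flow works regardless of
    which provider the user selects."""
    params = {
        "response_type": "code",          # authorization code flow
        "scope": "openid profile email",  # request identity claims
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "state": state,                   # protects against request forgery
        "nonce": nonce,                   # binds the returned ID token to this request
    }
    return f"{ISSUER_AUTHORIZE_URL}?{urlencode(params)}"
```
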

 

References

 

(EC) Document 32014R0910, Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC [2014]. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2014.257.01.0073.01.ENG

65. Open Standard

1. An open standard is a standard that is developed through open processes and can be used and implemented by every interested party under non-discriminating conditions, and if possible for free.

 

2. Different standardization organizations adopt different definitions for the term, which generally agree about the fact that the standard must have been developed through an open consensus process that does not exclude or disadvantage any stakeholder, but disagree on the intellectual property licensing requirements. Under that aspect, the definitions and the resulting policies can be broadly grouped into two categories:

  1. Definitions that require the standard and the related essential intellectual property to be available for free, without requiring negotiations with intellectual property holders or the payment of royalties; this is for example the policy of the World Wide Web Consortium (Dardailler, 2007 and Weitzner, 2004);
  1. Definitions that require the standard and the related essential intellectual property to be available under “fair, reasonable and non-discriminatory” (FRAND) licensing terms, which may however include the payment of royalties and/or a discretionary negotiation with the rights holder; this is for example the policy of the ITU-T (ITU, 2005).

3. FRAND technologies can be a significant obstacle to projects that do not have any funding or the legal capabilities to deal with licensing negotiations, such as many open source projects.

4. The Internet Engineering Task Force “prefers” technologies which are not subject to patents or whose patents are royalty-free, but accepts FRAND technologies if necessary (Bradner and Contreras, 2017). A similar stance is taken by the European Union, whose definition of open standard can be found in Annex II to Regulation (EU) No 1025/2012 (EU Regulation, 2012); the European Commission has repeatedly addressed the problems connected to a fair interpretation of the FRAND concept (European Commission, 2017).

 

References

 

Daniel Dardailler, ‘Definition of Open Standards’ (World Wide Web Consortium 2007) <https://www.w3.org/2005/09/dd-osd.html>

 

Daniel J. Weitzner, ‘W3C Patent Policy’ (World Wide Web Consortium 2004) <https://www.w3.org/Consortium/Patent-Policy-20170801/>

 

IPR Ad Hoc Group, ‘Definition of “Open Standards”’ (ITU 2005) <https://www.itu.int/en/ITU-T/ipr/Pages/open.aspx>

 

S. Bradner, J. Contreras, ‘Intellectual Property Rights in IETF Technology’ (IETF 2017) <https://tools.ietf.org/html/rfc8179>

 

(EC) Regulation (EU) No 1025/2012 of the European Parliament and of the Council [25 October 2012]. Available at: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2012:316:0012:0033:EN:PDF

 

European Commission, Communication from the Commission to the European Parliament, the Council and the European Economic and Social Committee (Brussels, 2017). Available at: https://ec.europa.eu/docsroom/documents/26583/attachments/1/translations/en/renditions/native

66. Optimization

  1. Optimization refers to the practice or process of making something as effective, visible, functional, or efficient as possible. More specifically, search engine optimization refers to the practice of designing a website or page to increase the quantity, the quality, or both, of traffic from organic, unpaid search results. At the same time, optimization is a predominant organizing principle of digital systems that incorporate real-time feedback from users (Kulynych et al. 2018): for instance, ride sharing applications such as Uber, which rely on optimization to decide on the pricing of rides; navigation applications such as Waze, which rely on optimization to propose best routes; banks, which rely on optimization to decide whether to grant a loan; and advertising networks, which rely on optimization to decide what is the best advertisement to show to a user (a toy example of such a feedback-driven loop is sketched below). As a solution to the potentially adverse effects from manipulation of individual behavior caused by optimization systems, some have proposed the adoption of specific measures to reduce or eliminate the emergence of such effects from the design stage (see e.g. Amodei et al, 2016). As an alternative, the concept of Protective Optimization Technologies has been proposed, i.e., technological solutions that those outside of the optimization system deploy to protect users and environments from the negative effects of optimization (Kulynych et al. 2018).
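
As a toy illustration of optimization driven by real-time feedback, the sketch below implements an epsilon-greedy loop that mostly serves whichever option (an ad, a route, a price) has performed best so far while occasionally exploring alternatives. It is a deliberately simplified model, not the algorithm used by any of the services mentioned above.

```python
import random

def epsilon_greedy(options, get_feedback, rounds=1000, epsilon=0.1):
    """Repeatedly pick an option, observe user feedback (e.g. a click,
    a completed ride), and shift future choices toward what works."""
    totals = {o: 0.0 for o in options}   # cumulative reward per option
    counts = {o: 0 for o in options}     # times each option was shown
    def avg(o):
        return totals[o] / counts[o] if counts[o] else 0.0
    for _ in range(rounds):
        if random.random() < epsilon:    # explore a random option
            choice = random.choice(options)
        else:                            # exploit the best average so far
            choice = max(options, key=avg)
        reward = get_feedback(choice)    # real-time signal from the user
        totals[choice] += reward
        counts[choice] += 1
    return max(options, key=avg)

# Example: simulated users respond to option "B" 30% of the time and "A" 10%,
# so the loop converges on "B".
rates = {"A": 0.1, "B": 0.3}
best = epsilon_greedy(["A", "B"], lambda o: 1.0 if random.random() < rates[o] else 0.0)
print(best)  # almost always "B"
```
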

 

References

 

Dario Amodei et al. (2016). 'Concrete Problems in AI Safety'. Available at: https://arxiv.org/abs/1606.06565

 

Kulynych et al. (2018). 'POTs: Protective Optimization Technologies'. Available at: https://arxiv.org/abs/1806.02711

 

 

67. Platform

  1. According to the Merriam Webster dictionary, the concept of platform generally refers to a plan or design. The Historical Larousse dictionary suggests that the term first appeared in the French language in 1434, to define a “horizontal surface acting as a support.” This original meaning helps to construct a general characterisation of the platform as a structure on top of which something – be it a product or a service – may be built and operated.
  1. When the noun is qualified as “digital” or “online”, it may refer to a vast array of software applications that are frequently loosely defined. As pointed out by the European Commission (2016b), the term is frequently utilised to refer to "two-sided" or "multi-sided" markets (Rochet & Tirole, 2003; Evans, 2003). In such markets, users are brought together by a platform operator in order to facilitate an interaction (exchange of information, a commercial transaction, etc.). In the context of digital markets, depending on a platform's business model, users can be buyers of products or services, sellers, advertisers, software developers, etc.
  1. Conspicuously, multi-sided platforms are not a phenomenon exclusive to the online world. On the contrary, the history of business demonstrates that platforms emerge in a wide range of sectors, offline as well as online. In this sense, the European Commission (2016b) stresses that examples range from marketplaces to newspapers: both gather sellers and buyers in a common space, thereby facilitating contact between two sides that would otherwise be unlikely to interact. Nevertheless, 'real life' platforms were usually limited physically and geographically (merchandise had to be transported and stocked, a paper had limited circulation, advertisements had to be location-specific, etc.).
  1. Amrit Tiwana (2014) provides a useful definition of platforms as “the extensible codebase of a software-based system that provides core functionality shared by apps that interoperate with it, and the interfaces through which they interoperate.” Due to the heterogeneous nature of digital platforms, many studies provide examples to clarify what they refer to when discussing digital platforms. This is the case, for instance, in the European Commission (2016a) Communication on Online Platforms and the Digital Single Market, where the characteristics of digital platforms are listed and described, providing examples of platforms rather than defining them.
  1. Notably, the aforementioned document highlights the following characteristics of online platforms:
  • they have the ability to create and shape new markets, to challenge traditional ones, and to organise new forms of participation or conducting business based on collecting, processing, and editing large amounts of data;
  • they operate in multisided markets but with varying degrees of control over direct interactions between groups of users;
  • they benefit from ‘network effects’, where, broadly speaking, the value of the service increases with the number of users;
  • they often rely on information and communications technologies to reach their users, instantly and effortlessly;
  • they play a key role in digital value creation, notably by capturing significant value (including through data accumulation), facilitating new business ventures, and creating new strategic dependencies.
  1. To provide a very broad and comprehensive definition, the Recommendations on Terms of Service and Human Rights developed by the IGF Coalition on Platform Responsibility define a platform as “any applications allowing users to seek, impart and receive information or ideas according to the rules defined into a contractual agreement.”
  1. Overall, the majority of digital platforms share three main features: they are technologically mediated; they enable interactions between different types of users; and they allow those users to carry out specific activities (de Reuver, Sørensen & Basole, 2018). Existing literature points to three broad categories of online platforms:
  • Market-makers bring together two distinct groups that are interested in trading, increase the likelihood of a match, and reduce search costs.
  • Audience makers match advertisers to audiences.
  • Demand coordinators, such as software platforms, operating systems, and payment systems coordinate demand between different user groups (for example card holders and merchants, developers and smartphone users).
  1. Hence, platforms provide a medium where one type of platform users can deliver value both to the other type of users and the platform itself. In this context the European Commission (2016b) points out that the demand of the different types of users is related to the supply of other types, and several kinds of interdependencies may exist between the various types of platform users:
  • producers of complementary products (e.g. app developers) and end consumers (gamers),
  • advertisers and readers
  • shoppers and sellers
  • job seekers and recruiters
  • accommodation providers and accommodation seekers
  • transportation providers and passengers.
  1. Notably, major online platforms trigger significant network effects and generate revenue by recruiting one type of users (e.g. advertisers) and offering them access to another type of users (e.g. individual users of social networks).
  1. Critically, platforms define a private ordering – through their terms of service, their technical architectures and their practices – that directly impacts their users as well as the products and services built on top of them (Belli & Venturini 2016; Belli & Sappa 2017; Belli & Zingales 2017).

 

References

 

Belli L & Venturini J. Private ordering and the rise of terms of service as cyber-regulation. Internet Policy Review. Vol 5. N° 4. (2016). https://policyreview.info/articles/analysis/private-ordering-and-rise-terms-service-cyber-regulation

 

Belli L & Sappa C. The Intermediary Conundrum: Cyber-regulators, Cyber-police or both? JIPITEC. (2017). http://www.jipitec.eu/issues/jipitec-8-3-2017/4620

 

Belli L & Zingales N, Platform regulations: how platforms are regulated and how they regulate us. FGV: Rio de Janeiro. 2017

 

de Reuver, M., Sørensen, C., & Basole, R. C. (2018). The Digital Platform: A Research Agenda. Journal of Information Technology, 33(2), 124–135.

 

European Commission. Online Platforms and the Digital Single Market Opportunities and Challenges for Europe. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. COM/2016/0288 final. 2016a

 

European Commission. Online Platforms. Accompanying the document Communication on Online Platforms and the Digital Single Market. Commission Staff Working Document, COM(2016) 288 final. 2016b.

 

Evans, D. (2003), "Some Empirical Aspects of Multi-sided Platform Industries", Review of Network Economics, 2(3), 2194-5993

 

Rochet, J.-C., & Tirole, J. (2003), "Platform Competition in Two-sided Markets", Journal of the European Economic Association, 1(4), 990-1029

 

Amrit Tiwana, Platform Ecosystems: Aligning Architecture, Governance, and Strategy. 2014

 

van Eijk N. et al. Digital platforms: an analytical framework for identifying and evaluating policy options. 2015. TNO report | TNO 2015 R11271 | Final report. https://www.ivir.nl/publicaties/download/1703.pdf

68. Platform Governance

  1. The concept of Governance refers to a ‘decentred’ perspective on regulation, which does not emanate solely from the state but is instead carried out by (complex, interactive) constellations of public and private stakeholders. The term has found widespread usage in the context of platforms, which are often seen to play an influential role as overseers of complex social and commercial ecosystems (e.g. Van Dijck, Poell and De Waal 2018). The result is growing attention to “platform governance” amongst academics and policymakers (Gorwa 2019).
  1. In the words of Tarleton Gillespie, platforms are implicated in online governance in two ways: governance by platforms, and governance of platforms (Gillespie 2016). Governance by platforms describes their role in facilitating and policing online behaviour, whereas governance of platforms describes the actions of governments and other stakeholders who contest and control platform action.
  1. Governance by platforms can take many forms. Some of its most recognisable expressions are the drafting and enforcement of general rules and standards, such as Community Guidelines and Terms of Service, as well as the content moderation practices that purport to enforce these principles. But the concept of platform governance can be extended to countless other areas of platform policy, including their interactions with ad buyers such as political campaigners (Kreiss & McGregor 2019); engagement with, and donations to, civil society and academia (Bruns 2019); treatment of content providers and influencers (Caplan & Gillespie 2020; Goanta & Ranchordás 2020); and accommodations of government agencies and other public authorities (e.g. Benkler 2011). More fundamentally, the basic technical design of platform services can constitute a form of governance, to the extent that it structures and constrains the behaviour of users and other stakeholders.
  1. Governance of platforms is an equally broad and varied concept. Government regulation is typically the first point of reference, from legislation and regulatory oversight to judicial action. But the aforementioned private stakeholders can also play a role in governing platforms. For instance, civil society actors can investigate and criticise platforms, either independently or as members of self- or co-regulatory regimes (Gorwa 2019). Platform users, content providers and advertisers may also be able to pressure governments or platforms to change course, as can activists and mobilised user groups – a notable example being the recent advertiser boycott against Facebook. As Robert Gorwa highlights, these complex multi-stakeholder interactions play out across various geographical scales, with overlapping “local, national, and supranational mechanisms of governance” (Gorwa 2019).

 

References

 

Caplan, Robyn and Tarleton Gillespie. 2020. “Tiered Governance and Demonetization: The Shifting Terms of Labor and Compensation in the Platform Economy”. Social Media + Society 6(2).

 

Benkler, Yochai. 2011. “A Free, Irresponsible Press: Wikileaks and the Battle over the Soul of the Networked Fourth Estate”. Harvard Civil Rights-Civil Liberties Review 46-1. Available at: http://benkler.org/Benkler_Wikileaks_current.pdf

 

Bruns, Axel. 2019. After the ‘APIcalypse’: social media platforms and their fight against critical scholarly research. Information, Communication & Society 22(11). Available at: https://eprints.qut.edu.au/131676/

 

Goanta, Catalina and Sofia Ranchordás. 2020. The Regulation of Social Media Influencers. London: Edward Elgar.

 

Gillespie, Tarleton. 2016. “Governance of and by platforms”, in: Jean Burgess, Thomas Poell, and Alice Marwick (eds.), SAGE Handbook of Social Media. New York: SAGE Publishing. Available at: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/Gillespie-Regulation-ofby-Platforms-PREPRINT.pdf

 

Gorwa, Robert. 2019. “What is platform governance?”. Information, Communication & Society 22(6). Available at: https://gorwa.co.uk/files/platformgovernance.pdf

 

Gorwa, Robert. 2019. “The Platform Governance Triangle: Conceptualising the Informal Regulation of Online Content”. Internet Policy Review 8(2). Available at: https://policyreview.info/articles/analysis/platform-governance-triangle-conceptualising-informal-regulation-online-content

 

Kreiss, Daniel and Shannon McGregor. 2019. “The ‘Arbiters of What Our Voters See’: Facebook and Google’s Struggle with Policy, Process, and Enforcement around Political Advertising”. Political Communication 36(4).

 

Van Dijck, Jose, Thomas Poell and Martijn de Waal. 2018. The Platform Society: Public Values In A Connective World. New York: Oxford University Press.

 

69. Platform Neutrality

  1. Platform neutrality expresses the idea that products or services that function as platforms should not unreasonably discriminate against complements. Platform neutrality was popularized after a report issued by the French National Digital Council (Conseil National du Numérique) in 2014, which advanced numerous recommendations on the issue (CNNum 2014).
  1. Platform neutrality draws on principles developed for utilities regulation. Utilities, because of the fundamental services they offer and because they have traditionally been (public or private) monopolies, were regulated to hold themselves out to serve the public indiscriminately, meaning that they cannot make individualized decisions on whether and on what terms to deal with each customer. Modern digital platforms are sometimes seen as performing similar fundamental roles, such as providing the necessary functionality for app ecosystems to emerge (app neutrality) or discoverability through online search (search neutrality) (Frischmann 2004). If platforms guarantee equal conditions to all complements, then the complements can compete on the merits.
  1. Platform neutrality has been criticized on various grounds. The first is that digital platforms often do not exhibit the same characteristics as traditional utilities – they are neither monopolies nor indispensable, nor do they unequivocally offer the same kind of public-good services as utilities – and therefore should not be subject to the same kind of rules. Secondly, discriminatory behavior can be welfare-enhancing and is therefore generally not banned in the market, unless it is unreasonable and distorts market conditions (Yoo 2004). Thirdly, some platform services by definition have to be discriminatory (e.g. ranking of search results), which makes a neutrality principle impossible to implement and even counter-productive (Renda 2015).
  1. Because of the tension between the benefits and risks of platform neutrality, regulation in this domain has so far been limited to business-to-business (B2B) relations and to light-touch obligations that emphasize transparency and accountability rather than banning specific types of conduct (Regulation (EU) 2019/1150 on promoting fairness and transparency for business users of online intermediation services).

 

References

 

CNNum. “Platform Neutrality: Building an Open and Sustainable Digital Environment.” Opinion No. 2014-2 (2014).

 

Frischmann, Brett M. “An Economic Theory of Infrastructure and Commons Management.” Minnesota Law Review 89 (2004): 917.

 

Renda, Andrea. “Antitrust, Regulation and the Neutrality Trap: A Plea for a Smart, Evidence-based Internet Policy.” CEPS Special Report No. 104 (2015).

 

Yoo, Christopher S. “Would Mandating Broadband Network Neutrality Help or Hurt Competition? A Comment on the End-to-End Debate.” Journal on Telecommunications & High Technology Law 3 (2004): 23.

 

 

70. Pornography

1. This entry provides a brief literature review of pornography through the lens of feminist legal theory. Chamallas (2012) segments the feminist movements in legal scholarship into (1) the generation of equality (1970s), which is often associated with liberal feminism (Chamallas, 2012, p. 19) because of its claims against formal inequality and toward individual rights, such as access to male-dominated activities; and (2) the generation of difference (1980s), which was responsible for bringing substantive inequalities into the discussion, such as the feminization of poverty and the gender gap in politics. Feminist legal scholars and activists from the 1980s are classified into two further subcategories: dominance feminism and cultural feminism. The first group was the main actor responsible for advocating for legislation to protect women's bodily integrity (Chamallas, 2012, p. 22), with campaigns against pornography.

2. For MacKinnon (1987), for example, pornography and the culture that portrays women as sex objects are responsible for the maintenance of sexual violence and sex discrimination – briefly, for MacKinnon, sexuality is expropriated from women by the male-dominated State, just as, for Marxists, labour is expropriated from workers by the capitalist State. Her work – which defines pornography as "graphic sexually explicit subordination of women" (MacKinnon, 1991) – has inspired legislation and ordinances against it worldwide. Feminists influenced by this view tend to see pornography as promoting the dehumanization and objectification of women who are in a situation of inequality – since most of the women who work in this industry come from marginalized segments (poor, Black and Latina women).

3. Other feminists, however, fear that combating pornography in these terms may engender a worse situation for women and for freedom of speech – also arguing that the male-dominated state could use such ordinances against minorities. In 1984, for example, a feminist "anti-censorship" task force (the F.A.C.T.) was formed by women who opposed the anti-pornography movement. This position accommodates liberal concerns about individual rights and choices, and it also inspired the autonomy feminism movement that arose in the 1990s, focused on sex-positivity and women's own agency.

 

References

 

Chamallas, M. E. (2012). Aspen Treatise for Introduction to Feminist Legal Theory. Wolters Kluwer Law & Business.

 

MacKinnon, C. A. (1987). Feminism unmodified: Discourses on life and law. Harvard University Press.

 

71. Prioritization

  1. Prioritization can refer to a form of traffic management – the way traffic flows through the Internet and its connected networks – or it can refer to the ranking of results in a hierarchy.
  1. In the former sense, some network traffic is given precedence over other traffic. Prioritization of some types of information over others can be based on importance: users or edge providers can pay to optimize the transmission of their traffic, the platform can prioritize some types of traffic over others, or users can pay access providers to transmit some traffic before, or faster than, other traffic. Some see prioritization as contradicting net neutrality principles. A minimal sketch of priority-based scheduling follows this list.
  1. Prioritization can also refer to the ordering of indexed results, with higher quality, or more important, or more relevant results being given priority over other results.
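
As an illustration of the traffic-management sense, the following minimal Python sketch uses a priority queue to transmit higher-priority packets first, regardless of arrival order. The traffic classes and priority values are hypothetical assumptions for illustration, not any operator's actual scheduling policy.

```python
# Toy priority-based scheduler: packets tagged with a traffic class are sent in
# order of priority (lower number = sent first), not in order of arrival.
import heapq
import itertools

PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk": 3}   # hypothetical classes
arrival = itertools.count()   # tie-breaker preserving arrival order within a class

queue = []
for traffic_class, payload in [("bulk", "backup-1"), ("voip", "call-frame-1"),
                               ("web", "page.html"), ("voip", "call-frame-2")]:
    heapq.heappush(queue, (PRIORITY[traffic_class], next(arrival), traffic_class, payload))

while queue:
    _, _, traffic_class, payload = heapq.heappop(queue)
    print(f"transmit {traffic_class}: {payload}")

# Output order: voip, voip, web, bulk. "Paid prioritization" would amount to
# mapping a paying sender's traffic onto a lower (i.e. better) priority number.
```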

72. Proactive measures

  1. This entry provides an overview of the concept of proactive measures, where “measures” is a term of art covering a range of steps that can be taken as a form of governance or regulation, usually in relation to specific kinds of content or conduct. “Proactive" is a term frequently used to qualify the nature of these measures taken by platforms or other intermediaries with regard to third-party content. The two most common meanings are: (1) as an operational matter, acting on the platform's own initiative, not in response to a notice or other external source of information; (2) as a legal matter, acting voluntarily and without legal compulsion.
  1. Naturally, there is some overlap between (1) and (2), as the external source of information under (2) may be a judicial order or another form of notification that triggers a legal obligation for the platform to take the measures in question. In addition, legal obligations may arise independently of the existence of a specific notification, as platforms might be subject to a duty of care to prevent the dissemination of certain content in the first place: an example is the recently proposed Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act) of 2020, which would create an exception to the immunity of platforms under section 230 of the Communications Decency Act by allowing civil and state criminal suits against companies who do not adhere to certain recommended “best practices” with regard to Child Sexual Abuse Material (CSAM). The relevant EU legislation is the Audiovisual Media Services Directive (2018/1808), which among other things requires Member States, in its art. 28b, to ensure that video-sharing platform providers under their jurisdiction take appropriate measures to protect:
    • (a) minors from programmes, user-generated videos and audiovisual commercial communications which may impair their physical, mental or moral development in accordance with Article 6a(1);
    • (b) the general public from programmes, user-generated videos and audiovisual commercial communications containing incitement to violence or hatred directed against a group of persons or a member of a group based on any of the grounds referred to in Article 21 of the Charter of the Fundamental Rights of the European Union, or containing content the dissemination of which constitutes a criminal offence in the EU (namely child pornography or xenophobia)
  • (ba) the general public from programmes, user-generated videos and audiovisual commercial communications containing content the dissemination of which constitutes an activity which is a criminal offence under Union law, namely public provocation to commit a terrorist offence within the meaning of Article 5 of Dir. (EU) 2017/541, offences concerning child pornography within the meaning of Article 5(4) of Dir. 2011/93/EU and offences concerning racism and xenophobia
  1. Similar language can be found in the proposal for a Terrorism Regulation, which establishes duties of care and proactive measures for Hosting Service Providers (HSPs) to remove terrorist content, including, where appropriate, by deploying automated detection tools, acting in a “diligent, proportionate and non-discriminatory manner, and with due regard for due process”. It can also be found in the Christchurch Call made by several governments and online service providers to address terrorist and other violent extremist content online, which includes a commitment by providers to adopt specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, “including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms”.
  1. Platforms’ general concern with the adoption of proactive or “voluntary” measures is that they may lead to the establishment of knowledge that triggers an obligation to remove or disable access to content, failing which the platforms might lose the benefit of the safe harbor. For this reason, scholars have argued for the introduction of a general “Good Samaritan” provision, modeled upon Section 230(c) of the US Communications Decency Act, which would preserve the application of the safe harbor as long as the measures are taken in “good faith” against certain types of objectionable content (Kuczerawy, 2018; Barata, 2020).

 

References

 

Joan Barata, ‘Positive Intent Protections: Incorporating a Good Samaritan principle in the EU Digital Services Act’ (Center for Democracy and Technology 2020). Available at: https://cdt.org/insights/positive-intent-protections-incorporating-a-good-samaritan-principle-in-the-eu-digital-services-act/

 

Aleksandra Kuczerawy, 'The EU Commission on voluntary monitoring: Good Samaritan 2.0 or Good Samaritan 0.5?' (KU Leuven 2018) <https://www.law.kuleuven.be/citip/blog/the-eu-commission-on-voluntary-monitoring-good-samaritan-2-0-or-good-samaritan-0-5/>

 

 

73. Recommender systems

  1. Recommender systems are algorithms aimed at supporting users in their online decision making. More specifically, in the computer science literature, a recommender system is defined as: "a specific type of advice-giving or decision support system that guides users in a personalised way to interesting or useful objects in a large space of possible options or that produces such objects as output" (Felfernig et al. 2018).
  1. Examples of such systems are the Amazon recommender tool for products, the Netflix algorithm that suggests movies, the Facebook software that finds "friends" we might know.
  1. A key element of recommender systems is that their suggestions are personalised, i.e. based on users' preferences. Such information can be obtained directly from users (e.g. by asking specifically for their preferences) or can be generated by observation of their behaviour (Jannach et al. 2010). Most recommender systems rely on machine learning techniques, including deep neural networks (Goanta and Spanakis 2020).
  1. From a technical point of view, four main models of recommendation systems have been identified (Aggarwal 2016): 1) collaborative filtering systems; 2) content-based recommender systems; 3) knowledge-based recommender systems; 4) hybrid systems.
  1. Collaborative filtering systems perform the recommendation process based on the user-item interactions provided by several users. Let us assume A and B have similar tastes and that the algorithm has recorded such a similarity. If A rates the movie Titanic highly, the recommender system infers that B's rating of Titanic is likely to be similar. Hence, the algorithm formulates Titanic as a recommendation for B (a minimal sketch of this approach follows this list).
  1. Content-based recommender systems construct a predictive model from the attributes (descriptive features) of users or items. Continuing with the movie example: A rated Titanic highly. Titanic is described by keywords like "drama" and "love affair". Therefore, movies that are classified in the same way (Romeo+Juliet or Pearl Harbour) would be recommended to A.
  1. Knowledge-based recommender systems formulate recommendations based on the constraints specified by users, the item attributes and the domain knowledge. Such systems are common where items are not bought very often (so it is not efficient to rely on user-item interactions). Examples are tools for searching real estate, cars, tourist accommodation, etc.
  1. Finally, hybrid systems combine one or more of the previous aspects.
  1. Another classification proposed in the literature distinguishes recommender systems into three types, based on the role played by the platform in sourcing the content recommended (Cobbe and Singh 2019). In a so-called “open recommending” system, such as YouTube, the platform does not perform editorial control and the recommendation is drawn from user-generated content. By contrast, “curated recommending” is a system where the platform selects, curates or approves the content. Finally, in “closed recommending” systems the platform itself creates the content to be recommended. Such a classification can be relevant when intermediary liability is at stake. While for “curated” and “closed” recommending systems the safe harbour immunity regime will be out of the picture, questions remain for “open” recommenders. A case is currently pending before the Court of Justice of the EU to ascertain whether YouTube plays an active role by recommending videos and performing other ancillary activities (Case 500/19).
  1. The organisation of the recommender system and its intelligibility can also give rise to direct liability of the platform vis-à-vis the content creator, such as a social media influencer. Goanta and Spanakis (2020) argue that the rules against unfair commercial practices and competition law, both in Europe (the Unfair Commercial Practices Directive) and in the US (the Federal Trade Commission Act), can offer a first line of defence against the opaqueness of algorithmic decision-making and the discretionary power exercised by platforms to the detriment of content creators. Such a framework, however, does not provide fully fledged protection to the emerging actors involved in social media transactions and needs to be strengthened (Goanta and Spanakis 2020).
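
The collaborative filtering logic referred to in the list above can be illustrated with a minimal Python sketch using toy data and a simple cosine similarity. The users, ratings and helper functions are illustrative assumptions, not a production recommender.

```python
# User-based collaborative filtering, toy version: B's predicted rating for an
# unseen item is the similarity-weighted average of other users' ratings for it.
from math import sqrt

ratings = {
    "A": {"Titanic": 5, "Romeo+Juliet": 4, "Alien": 1},
    "B": {"Romeo+Juliet": 5, "Alien": 1},
    "C": {"Titanic": 1, "Alien": 5},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two users' rating vectors (0 if nothing in common)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def predict(user: str, item: str) -> float:
    """Similarity-weighted average of the ratings given to the item by other users."""
    num = den = 0.0
    for other, other_ratings in ratings.items():
        if other == user or item not in other_ratings:
            continue
        sim = cosine(ratings[user], other_ratings)
        num += sim * other_ratings[item]
        den += sim
    return num / den if den else 0.0

# B resembles A more than C, so the prediction sits close to A's rating of 5.
print(round(predict("B", "Titanic"), 2))
```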

Recent legislative initiatives in Europe

Ranking

  1. Knowledge-based recommender systems essentially work in response to a search query launched by a user. The output of such a model is likely to overlap with the legal definition of ranking. In Europe, the latter is intended as: "the relative prominence of the offers of traders or the relevance given to search results as presented, organised or communicated by providers of online search functionality, including resulting from the use of algorithmic sequencing, rating or review mechanisms, visual highlights, or other saliency tools, or combinations thereof" (recital 19, Directive (EU) 2019/2161; see also Art. 3(1)(b), Directive (EU) 2019/2161 and art. 2(8), Regulation (EU) 2019/1150). To increase transparency in online marketplaces, newly introduced provisions in the B2C and the P2B context impose an obligation to provide clear information about the main parameters and parameter weighting adopted to rank products, and to disclose any paid advertising or payment specifically made to achieve a higher ranking within the search results. It remains to be seen how these transparency requirements will be developed, considering not only the complexity and dynamism that ranking algorithms might reach through machine learning but also the possible limitations imposed by trade secrets (Twigg-Flesner 2018). The Commission is currently working on transparency guidelines (European Commission, 2020) to facilitate the compliance of platforms.
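
A hedged illustration of what "main parameters and parameter weighting" might look like in practice is sketched below in Python. The parameters, weights and the paid-placement boost are hypothetical and serve only to show the kind of information the transparency obligations require to be disclosed; they are not any marketplace's real formula.

```python
# Toy weighted ranking: offers are ordered by a disclosed weighted score, with a
# hypothetical "paid_boost" parameter standing in for paid placement.

WEIGHTS = {"relevance": 0.6, "seller_rating": 0.3, "paid_boost": 0.1}  # disclosed weighting

offers = [
    {"name": "Offer A", "relevance": 0.9, "seller_rating": 0.7, "paid_boost": 0.0},
    {"name": "Offer B", "relevance": 0.6, "seller_rating": 0.9, "paid_boost": 1.0},
]

def score(offer: dict) -> float:
    return sum(WEIGHTS[param] * offer[param] for param in WEIGHTS)

for offer in sorted(offers, key=score, reverse=True):
    print(offer["name"], round(score(offer), 2))   # Offer A 0.75, Offer B 0.73
```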

Ratings and reviews

  1. Both in collaborative filtering and in content-based recommender systems, the first input is given by users' ratings and reviews. They can be defined respectively as scores (in a numerical form) and feedback (in a textual form) generated by the platform's users to report their experience with a product, a buyer or a service provider in a supposedly impartial manner. Some platforms provide aggregate or consolidated ratings, which sum up the single ratings or reviews in an overall assessment (a brief aggregation sketch follows this list). Consolidated ratings can play an essential role in supporting the users' decision-making process, addressing some cognitive difficulties and the problem of information overload, i.e. the 'wall' of reviews (Busch 2016).
  1. Ratings and reviews are not only input for recommender systems. They also represent a private ordering mechanism widely used by online platforms, such as eBay, Amazon, Uber or Airbnb, to build and maintain trust within their community and to preserve the attractiveness of their services.
  1. Ratings and reviews can perform two main functions: (1) informative and (2) self-regulatory.
  1. First of all, they constitute a reputational mechanism that can help reduce information asymmetry between the parties and promote the overall transparency of the transaction (Smorto 2016; Busch 2016; Ranchordás 2018). They represent a source of information which, before the advent of e-commerce, could have been obtained through channels such as advertising, direct experience or recommendations of friends or acquaintances. In this sense, ratings and reviews have codified the 'word of mouth' in the business models, contracts and digital architectures of such platforms (Dellarocas 2003).
  1. Recent legislative interventions in Europe have been directed at ensuring the transparency of rating and review mechanisms. In the B2C context, Directive (EU) 2019/2161 introduced an explicit prohibition on submitting or commissioning false consumer reviews or endorsements, or manipulating them, in order to promote products. Furthermore, traders (including platforms) have to declare whether and how they ensure that the reviews of a product are genuine, i.e. submitted by consumers who have actually used or purchased the product.
  1. The second function of ratings and reviews can be the platform’s self-regulation. On many platforms, users (both service providers and end-users) assess each other. This bi-directional evaluation is an incentive for users to behave according to the rules of the community and maintain a high online reputation. A series of private sanctions usually completes the rating and review systems: if the user's overall score is below the threshold set by the platform, the personal account can be suspended or deactivated. In some cases, self-regulation is the only function pursued by the platform via the rating (Ducato, 2020). Considering that ratings and reviews can be considered personal data, relating both to the individual who receives the score and to the one who gives it, the data protection framework will apply to this form of automated decision-making (Ducato, 2020).
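
The aggregation referred to above, and the threshold-based private sanction described in the last item, can be illustrated with a short Python sketch. The threshold value is a purely hypothetical assumption, not any platform's actual policy.

```python
# Consolidated rating plus a threshold-based account sanction (toy example).
from statistics import mean

def consolidated_rating(scores):
    """Aggregate individual scores into a single overall assessment."""
    return round(mean(scores), 2)

def account_status(scores, threshold=4.0):
    """Suspend the account when the consolidated rating falls below the threshold."""
    return "active" if consolidated_rating(scores) >= threshold else "suspended"

driver_scores = [5, 4, 5, 3, 4]
print(consolidated_rating(driver_scores))   # 4.2
print(account_status(driver_scores))        # active
```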

 

References

 

Consultation Results, 'Ranking transparency guidelines in the framework of the EU regulation on platform-to-business relations – an explainer' (European Commission 2020). Available at: https://ec.europa.eu/digital-single-market/en/news/ranking-transparency-guidelines-framework-eu-regulation-platform-business-relations-explainer

 

Aggarwal, Charu C. 2016. Recommender Systems. Vol. 1. Springer.

 

Busch, Christoph. 2016. "Crowdsourcing Consumer Confidence. How to Regulate Online Rating and Review Systems in the Collaborative Economy." In European Contract Law and the Digital Single Market: The Implications of the Digital Revolution, edited by Alberto De Franceschi, 223–44. Intersentia. https://doi.org/10.1017/9781780685212.013.

 

Cobbe, Jennifer, and Jatinder Singh. 2019. "Regulating Recommending: Motivations, Considerations, and Principles." Forthcoming, European Journal of Law and Technology (10)3, https://ejlt.org/index.php/ejlt/article/view/686.

 

Dellarocas, Chrysanthos. 2003. "The Digitization of Word of Mouth: Promise and Challenges of Online Feedback Mechanisms." Management Science 49 (10): 1407–1424.

 

Ducato, Rossana. 2020. "Private Ordering of Online Platforms in Smart Urban Mobility: The Case of Uber's Rating System", CRIDES Working Paper Series no. 3/2020; forthcoming In Smart Urban Mobility. Law Regulation and Policy, edited by M. Finck, M. Lamping, V. Moscon, H. Reiko. Springer – MPI Studies on Intellectual Property and Competition Law.

 

Felfernig, Alexander, Ludovico Boratto, Martin Stettinger, and Marko Tkalčič. 2018. Group Recommender Systems: An Introduction. Springer.

 

Goanta, Catalina, and Gerasimos Spanakis. 2020 "Influencers and Social Media Recommender Systems: Unfair Commercial Practices in EU and US Law." TTLF Working Papers no. 54, https://ssrn.com/abstract=3592000.

 

Jannach, Dietmar, Markus Zanker, Alexander Felfernig, and Gerhard Friedrich. 2010. Recommender Systems: An Introduction. Cambridge University Press.

 

Ranchordás, Sofia. 2018. "Online Reputation and the Regulation of Information Asymmetries in the Platform Economy." Critical Analysis of Law (5): 127.

 

Smorto, Guido. 2016. "Reputazione, Fiducia e Mercati." Europa e diritto privato (1): 199.

 

Twigg-Flesner, Christian. 2018. “The EU’s Proposals for Regulating B2B Relationships on Online Platforms–Transparency, Fairness and Beyond.” EuCML (6): 222

 

 

74. Red Flag Knowledge

  1. Red flag knowledge is a term of art used in the copyright context to infer knowledge on the part of an online service provider, without it having received a specific notice (hence its qualification as “constructive knowledge”), about infringing activity which it enables. Although the US Congress did not explicitly include it in the Digital Millennium Copyright Act (DMCA), it referred to this term in the legislative history as equivalent to being “aware of facts or circumstances from which infringing activity is apparent”, which triggers an obligation of expeditious removal under the safe harbor for hosting, caching services and information location tools established in the Act. It clarified that the goal was to exclude from the safe harbor directories that “refer Internet users to other selected Internet sites where pirate software, books, movies, and music can be downloaded or transmitted” when infringement “would be apparent from even a brief and casual viewing.”
  1. With the DMCA in force, the term has appeared in a number of cases in US courts, leading to diverging interpretations. For instance, in UMG Recordings, Inc. v. Shelter Capital Partners LLC, the Ninth Circuit Court of Appeals held (citing important precedents such as Sony Betamax and Napster) that online service provider Veoh did not lose safe harbor protection on the ground of general knowledge that its platform could be used to share infringing material: such general knowledge is not enough to qualify as red flag knowledge. However, a year later, in Viacom International v. YouTube, the Second Circuit Court of Appeals explained that “The difference between actual and red flag knowledge is not between specific and generalized knowledge, but instead between a subjective and an objective standard. In other words, the actual knowledge provision turns on whether the provider actually or “subjectively” knew of specific infringement, while the red flag provision turns on whether the provider was subjectively aware of facts that would have made the specific infringement “objectively” obvious to a reasonable person. The red flag provision, because it incorporates an objective standard, is not swallowed up by the actual knowledge provision under our construction of the § 512(c) safe harbor. Both provisions do independent work, and both apply only to specific instances of infringement.”
  1. By contrast, in 2016, in Capitol Records v. Vimeo—a case in which Capitol Records asserted that Vimeo, a platform that allows its users to upload videos, was “not only aware of the copyright infringement taking place on its system, but [was] actively promot[ing] and induc[ing] that infringement … [and] refusing to filter or block videos by using copyrighted recordings”—the Second Circuit held that, even where a copyright owner provides evidence that an online service provider’s employee viewed “a video that plays all or virtually all of a recognizable copyrighted song,” that evidence is insufficient to establish red flag knowledge: the service provider must have actually known facts that would make the specific infringement claimed objectively obvious to a reasonable person. The same Second Circuit, however, has also recognized that a “time-limited, targeted duty” of inquiry to determine whether there is an “objectively obvious” infringement does not run afoul of the prohibition of general monitoring in section 512(m).
  1. Because of the confusion generated by the different articulation of red flag knowledge, the US Copyright Office has recently suggested a clarification in its report on proposed reforms to section 512 of the DMCA. In particular, it has advised clarifying the relationship between such knowledge and the prohibition of general monitoring, and called for a broader notion of knowledge which is not linked to “specific” infringing content.

 

References

 

Capitol Records, LLC v. Vimeo, LLC, 826 F.3d 788 (2d Cir. 2016).

 

TOTO, Carolyn, “When It Comes to the DMCA, a Red Flag Becomes Harder to Fly” (2016). Pillsbury - Internet and Social Media Law Blog. Available at https://www.internetandtechnologylaw.com/dmca-red-flag/

 

US Copyright Office, “Section 512 of title 17: A report of the register of copyrights” (May 2020), available at https://www.copyright.gov/policy/section512/section-512-full-report.pdf

 

UMG Recordings, Inc. v. Shelter Capital Partners LLC, 667 F.3d 1022 (9th Cir. 2011).

 

Terrica Carrington. Twenty Years of the DMCA: Notice and Takedown in Hindsight (Part II), Copyright Alliance Blog (28 October 2018), Available at https://copyrightalliance.org/ca_post/twenty-years-dmca-notice-and-takedown/

75. Regulation

  1. Regulation refers to a set of authoritative rules designed to control or govern conduct in a particular sector or domain by restricting or enabling specific activities. There are numerous definitions of this concept, influenced more broadly by ideology and disciplinary traditions. Generally understood as a form of ‘command and control’ imposed by the state ‘through the use of legal rules backed by (often criminal) sanctions’ (Black, 2002), regulation needs to be distinguished from the broader notion of “governance”. What most of these definitions have in common is the understanding of regulation as a form of state action designed to influence business or social behaviour (Baldwin et al., 2012), materialized in the promulgation of a binding set of rules. Back in the 19th century, John Stuart Mill defined regulation as a ‘governmental intervention in the affairs of society’ (Mill, 1848), and that understanding has prevailed also in relation to Internet platforms, a newer area of regulatory concern.
  1. The opposite move, known as ‘deregulation’, refers to the reduction or elimination of government power in a particular sector or across the economy in order to foster competition within the industry and reduce the inefficiencies of public regulation. Without signifying a complete withdrawal of the state from defining conditions for rule-making, deregulatory processes for the Internet economy in the early 1990s represented a balancing act between property rights regimes, existing governance structures and rules of exchange (Irion and Radu, 2014). They allowed the growth of Internet services, intermediaries and platforms in the absence of strong public regulation.
  1. An important tension in digital regulation has been the one between hard and soft instruments, between what is legally codified and various attempts to influence behaviour via institutional mandates and modelling (Radu, 2019). While the former exerts direct influence over the conduct of the addressee, soft forms of regulation are indirect means to shape intervention in the private domain, primarily by shaping a normative order (Kettemann, 2020), integrating norms at the domestic, regional and international level.
  1. Regulatory debates have long focused on the relationship between the regulator and the regulated entity. Over time, regulating digital technologies has grown in complexity due to the technical expertise required, the high levels of information asymmetry and the risk of ‘regulatory capture’ (the regulated industry being able to design public rules in its interest). Importantly, regulation happens through sets of practices and it is thus ‘collectively mediated and legitimized by the key communities whose buy-in is necessary’ (Radu 2019, p. 25).
  1. As technology evolved worldwide, the Internet has seen a regulatory shift, away from the application of general telecommunications rules towards the creation of Internet-specific regulation from the 1990s onwards, culminating in harmonized legislation in the European Union. Plans for state-mandated regulation addressing digital platforms are back in full swing, calling into question the efficacy of self- and co-regulation models (Marsden, 2011).
  1. See also: Co-regulation, Self-regulation

 

References

 

Baldwin, R., Cave, M., Lodge, M. (2012). Understanding Regulation. Theory, Strategy and Practice. Oxford, UK: Oxford University Press.

 

Black, J. (2002). Critical reflections on regulation. Australian Journal of Legal Philosophy, 27. Available at: http://www.austlii.edu.au/au/journals/AUJlLegPhil/2002/1.pdf

 

Irion, K., Radu, R. (2013). Delegation to independent regulatory authorities in the media sector: A paradigm shift through the lens of regulatory theory. In: Schulz, W., Valcke, P., Irion, K. (eds.), The Independence of the Media and Its Regulatory Agencies: Shedding New Light on Formal and Actual Independence Against the National Context (pp. 15–54). Polity Press.

 

Kettemann, M. (2020). The Normative Order of the Internet: A Theory of Rule and Regulation Online. Oxford: Oxford University Press.

 

Marsden, C. (2011). Internet Co-Regulation. European Law, Regulatory Governance and Legitimacy in Cyberspace. Cambridge: Cambridge University Press.

 

Mill, J. S. (1848). Principles of Political Economy. London: John W. Parker, West Strand.

 

Radu, R. (2019). Negotiating Internet Governance. Oxford: Oxford University Press.

76. Remedy

  1. A simple definition is given by legal dictionaries, emphasising the element of “recovery” or “repair”, thus referring to the end-result of a process described as an “effective grievance mechanism” (Le Docte Legal: Dictionary in Four Languages, 2011).
  1. The term "remedy" is formally embedded in many international treaties (eg article 13 of the European Convention on Human Rights [ECHR]; article 2(3)(a) of the International Covenant on Civil and Political Rights [ICCPR]; articles 12 and 23 of the Arab Charter on Human Rights [ACHR]) and national public laws.
  1. In human rights law, the provisions on the individual right to an effective remedy are directed at states (see eg the Committee of Ministers/Council of Europe, 2016 Recommendation on Internet Freedom; section 5), as a fundamental guarantee that provides individual persons a legal means of seeking redress in connection to interferences with any of the recognised substantive human rights. As suggested by the Council of Europe’s Guide to Human Rights for Internet Users (2014, 26), ‘The remedy must be effective in practice and in law and not conditional upon the certainty of a favourable outcome for the complainant. Although no single remedy may itself entirely satisfy the requirements of Article 13 [ECHR], the aggregate of remedies provided in law may do so.’ Thus, it includes the positive obligation for the states to effectively respond to human rights issues. As stated in one of the core platform law and policy sources, the "Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework" (also widely known as the Ruggie-principles; 2011, 27): "Remedy may include apologies, restitution, rehabilitation, financial or non-financial compensation and punitive sanctions (whether criminal or administrative, such as fines), as well as the prevention of harm through, for example, injunctions or guarantees of non-repetition."
  1. In recognition of the due process concerns that exist nowadays in the many relationships between online platform providers and the user, the multi-stakeholder recommendations of the Internet Governance Forum's Coalition on Platform Responsibility on the implementation of the Right to an Effective Remedy (see IGF-DCPR 2019 Outcome Document) provide the major best practices for the provision of remedies by the responsible actors. As a whole (see especially section D, with the relevant provisions on 'Safeguards relating to the implementation of the remedy'), these recommendations suggest that all online platforms should provide a detailed approach in their terms of service, including the offer of measures that are commensurate with the wide scope of internet technologies' impact and with the relevant human rights issues. These safeguards seek to enhance the remedial purpose of alternative dispute resolution mechanisms provided by the platforms' terms of service - with an additional value in comparison to the above-mentioned human rights frameworks. Thus, platforms are recommended to foresee the need for continuous accountability and transparency during the implementation of the remedy and to provide sufficient scaling of the geographical scope of the remedy, in order to contribute to tackling the challenge of effectively dealing with online harm.

 

References

 

See also other terms in this Glossary: no … “Accountability”; no … “Appeal”; no … “Dispute Resolution (online)”; no … “Harm (online harm)”; no … “Liability”; no … “Moderation”; no … “Pro-active Measures”; no … “Transparency”

 

Le Docte: Legal Dictionary in Four Languages. 2011. edited by Hans-Werner Zehnhoff AM, Hugues Timmermans, Erika Schmatz, Yvonne Salmon. Antwerpen: Intersentia.

 

Committee of Ministers/Council of Europe. 2016. Recommendation CM/Rec(2016)5[1] of the Committee of Ministers to member States on Internet freedom (Adopted by the Committee of Ministers on 13 April 2016 at the 1253rd meeting of the Ministers’ Deputies). Available at

<https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=09000016806415fa#_ftn1>.

 

Convention for the Protection of Human Rights and Fundamental Freedoms, adopted 4 November 1950, entered into force 3 September 1953, ETS No.005. Available at

<www.coe.int/en/web/conventions/full-list/-/conventions/treaty/005>.

 

Council of Europe (2014). ‘Guide to Human Rights for Internet Users.’ Available at

<https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016804d5b31>.

 

IGF-DCPR 2019 Outcome Document, 'Best Practices on Platforms’ Implementation of the Right to an Effective Remedy'. Available at <https://www.intgovforum.org/multilingual/index.php?q=filedepot_download/...> (also at <https://doi.org/10.1016/j.clsr.2019.105379>).

 

International Covenant on Civil and Political Rights, adopted by General Assembly resolution 2200A (XXI) of 16 December 1966, entered into force 23 March 1976. Available at <https://www.ohchr.org/EN/ProfessionalInterest/Pages/CCPR.aspx>.

 

League of Arab States, Arab Charter on Human Rights, 22 May 2004, reprinted in 12 Int'l Hum. Rts. Rep. 893 (2005), entered into force 15 March 2008. Available at <http://hrlibrary.umn.edu/instree/loas2005.html>.

 

‘Guiding Principles on Business and Human Rights: United Nations “Protect, Respect and Remedy” Framework’, UN, HR/PUB/11/04. Available at <www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf>. (See also Ruggie J. 2011, March 21. ‘Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises’ (UN Human Rights Council Document A/HRC/17/31). Available at <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G11/121/90/PDF/G1112190.pdf?OpenElement>.)

77. Repeat Infringer

  1. The concept of "repeat infringement" was established by the Digital Millennium Copyright Act (DMCA or the Act), passed by the US Congress in 1998. This piece of legislation originally aimed at protecting digital innovators while preserving the ability of copyright holders to prevent activities that may infringe upon their rights.
  1. Most relevantly, DMCA § 512 grants so-called “safe harbour” protections to intermediaries that act on actual or constructive knowledge of copyright infringement and “adopt and reasonably implement, and inform subscribers and account holders [...] of, a policy that provides for the termination in appropriate circumstances of subscribers and account holders [...] who are repeat infringers.”
  1. In this perspective, “repeat infringer” refers to a user who has been identified as an infringer more than once. According to DMCA § 512, the intermediary must adopt a written repeat infringer policy, consisting of a set of guidelines that detail when a user's infringing activity will result in termination of their account access. In this policy, the intermediary must make explicit how often a user must be successfully accused of copyright infringement before the user's account is terminated (Sawicki, 2006).
  1. Moreover, to take advantage of the safe harbour protections, an intermediary must “reasonably implement” the repeat infringement policy. As such, upon receiving a copyright infringement notice demanding takedown of specific content, both the complaint and its outcome must be recorded, so that it is possible to identify when an infringer repeats their infringement. These records allow the intermediary to identify users who accumulate a number of infringements deemed sufficient to trigger the repeat infringer policy, which may imply the closure of the user account and the blocking of his or her IP address (a minimal record-keeping sketch follows).
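
A minimal record-keeping sketch of such a policy is shown below in Python. The three-strikes threshold is a hypothetical assumption for illustration; the DMCA does not fix a number, and each intermediary sets its own in its written policy.

```python
# Toy repeat-infringer tracker: each upheld notice is recorded per user, and the
# account is terminated once the (hypothetical) strike threshold is reached.
from collections import defaultdict

STRIKE_THRESHOLD = 3          # illustrative value set by the intermediary's policy
strikes = defaultdict(int)
terminated = set()

def record_notice(user_id: str, upheld: bool) -> str:
    """Log the outcome of an infringement notice and apply the repeat infringer policy."""
    if user_id in terminated:
        return "account already terminated"
    if upheld:
        strikes[user_id] += 1
        if strikes[user_id] >= STRIKE_THRESHOLD:
            terminated.add(user_id)
            return "account terminated"
    return f"{strikes[user_id]} strike(s) on record"

for upheld in [True, True, False, True]:
    print(record_notice("user42", upheld))
```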

 

References

 

Sawicki, A. (2006) "Repeat Infringement in the Digital Millennium Copyright Act," University of Chicago Law Review: Vol. 73: Iss. 4, Article 7. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=5386&con...

 

 

78. Responsibility

  1. The concept of responsibility refers, in its simplest form, to a duty to undertake a particular action or set of actions. Such a duty can be legal, but also moral, social or ethical. If it is legally enforceable, failing to fulfil the duty gives rise to liability. However, even where that enforcement is not available, failing to fulfil one's responsibility can give rise to significant consequences from a legal, social and even financial standpoint. For instance, a boycott by advertisers (also known as the “adpocalypse”) took place in 2016 due to an alleged failure in YouTube's responsibility to prevent ads from being associated with terrorist content. A similar boycott, known as “Stop Hate for Profit”, occurred in 2020 due to Facebook's failure to take responsibility for incitement to violence against protesters fighting for racial justice in America in the wake of the deaths of George Floyd, Breonna Taylor, Tony McDade, Ahmaud Arbery, Rayshard Brooks and many others.
  1. Platform responsibility is the concept that brought together a variety of stakeholders leading to the establishment of the DCPR in 2014. As noted in the DCPR Outcome book in 2017, facing the proliferation of private ordering regimes in online platforms, stakeholders began to interrogate themselves about conceptual issues concerning the moral, social and human rights responsibility of the private entities that set up such regimes. The use of this notion of “responsibility” has not gone unnoticed, having been captured for example by the special report prepared by UNESCO in 2014, the study on self-regulation of the Institute for Information Law of the University of Amsterdam, the 2016 Report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, the Center for Law and Democracy’s Recommendations on Responsible Tech and the Council of Europe’s draft Recommendation on the roles and responsibilities of Internet intermediaries.
  1. The articulation of responsibilities for a particular stakeholder is typically linked to the “role” that can be attributed to it in a particular process or system: in Internet governance, this goes back to the 2005 Tunis Agenda for the Information Society, which established the attributions of different stakeholders in the management of the Internet, recognizing in particular that governments should have an equal role and responsibility for international Internet governance and for ensuring the stability, security and continuity of the Internet. Practically, this is a careful choice of wording, as it does not articulate a corresponding liability while still calling on governments to act in a particular domain. Although the notion of responsibility can be exemplified or explained with reference to specific forms of due diligence or even a duty of care, it typically remains the task of adjudicators to make those articulations in defining the scope of responsibility. Occasionally, this interpretation can be facilitated by authoritative guidelines. An example is the articulation of the due diligence process expected to be followed by businesses in relation to their human rights impact, in particular: (a) identifying and assessing actual or potential adverse human rights impacts that the enterprise may cause or contribute to through its own activities, or which may be directly linked to its operations, products or services by its business relationships; (b) integrating findings from impact assessments across relevant company processes and taking appropriate action according to its involvement in the impact; (c) tracking the effectiveness of measures and processes to address adverse human rights impacts in order to know if they are working; and (d) communicating on how impacts are being addressed and showing stakeholders – in particular affected stakeholders – that there are adequate policies and processes in place. This guidance is provided by the Guiding Principles on Business and Human Rights, unanimously endorsed by the UN Human Rights Council in 2011, which establish a clear separation between the duty of States to protect human rights, the responsibility of businesses to respect them, and the joint duty of both to provide effective remedies.

 

References

 

Stop Hate for Profit. Available at: < https://www.stophateforprofit.org/>

 

Luca Belli, Nicolo Zingales, ‘Online Platforms’ Roles and Responsibilities: A Call for Action’, in L. Belli & N. Zingales (eds.), Platform Regulations: How Platforms Are Regulated and How They Regulate Us (FGV Press, 2017), 21-32

 

Tunis Agenda for the Information Society (18 November 2005), WSIS-05/TUNIS/DOC/6(Rev. 1)-E. Available at https://www.itu.int/net/wsis/docs2/tunis/off/6rev1.html

 

Guiding Principles on Business and Human Rights: United Nations “Protect, Respect and Remedy” Framework, HR/PUB/11/04

 

Report of the Special Rapporteur to the Human Rights Council on Freedom of expression, states and the private sector in the digital age, A/HRC/32/38 (11 May 2016) <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G16/095/12/PDF/G1609512.pdf?OpenElement>

 

Center for Law & Democracy, ‘Recommendations for Responsible Tech’ <http://responsible-tech.org/wp-content/uploads/2016/06/Final-Recommendations.pdf>

 

Council of Europe, Recommendation CM/Rec(2017x)xx of the Committee of Ministers to member states on the roles and responsibilities of internet intermediaries. https://rm.coe.int/recommendation-cm-rec-2017x-xx-of-the-committee-of-ministers-to-member/1680731980

 

 

 

79.  Revenge pornography / Non-consensual intimate images

  1. NB: While the term “revenge pornography” is commonly used, many argue that it is inappropriate since it suggests a degree of complicity or consent on the part of the person in the images. Instead, terms such as “non-consensual intimate images” are now considered to describe the phenomenon more appropriately; “non-consensual intimate images” is the term used here.
  1. This entry: (i) sets out examples of definitions of the term and (ii) provides examples of existing regulatory responses to non-consensual intimate images.
  1. Examples of definitions of the term
  1. Non-consensual intimate imagery can be broadly understood as the distribution of sexually explicit images or video of an individual without their consent. Unlike other forms of intimate imagery, such as consensual pornography, which existed prior to the internet, the distribution of non-consensual intimate imagery is a more recent phenomenon, spawned by “changes in technology, including the advent of social media and websites that feature user-generated content, in conjunction with the ease of taking and sharing digital photographs and video (particularly on smartphones, tablets, and mobile devices)” (Magaldi et al., 2020).
  1. Legal definitions of the phenomenon, particularly those that criminalise such distribution, often include further elements and exceptions, for example, a requirement that there be an intention to cause harm or distress, that the individual in the image or video had a reasonable expectation of privacy, and that there be no legitimate purpose for the distribution (such as law enforcement purposes).
  1. Existing regulatory responses
  1. A number of states and subnational governments have prohibited distributing non-consensual intimate images through the creation of criminal offences (there are a number of databases of the criminal laws in place, for example those maintained by The Center for Internet and Society and InternetLab – see references). While the specific wording may vary, common to all such criminal offences are the core requirements that (i) a person disseminates an image or video of another person, (ii) that image or video is sexually explicit; and (iii) the dissemination is done without the consent of the other person.
  1. As noted above, some states have also included further elements and exceptions. In Canada, for example, the images or video must have been taken with a “reasonable expectation of privacy” (section 162.1 of the Criminal Code). In England and Wales, the person distributing the images or videos must have done so with “an intention of causing [the] individual distress” (section 33 of the Criminal Justice and Courts Act 2015), and in South Africa, there must have been an “intent to cause them harm” (section 18F of the Films and Publications Act).
  1. Beyond outright criminalisation, there are few examples of regulatory responses, particularly when it comes to the liability of online platforms which are used to share non-consensual intimate images. One example is Article 21 of Brazil's Marco Civil da Internet, which provides for a specific liability regime in cases involving “the breach of privacy arising from the disclosure of images, videos and other materials containing nudity or sexual activities of a private nature, without the authorisation of the participants”. In such instances, a platform can be held liable for such material where it fails to remove the content, in a diligent manner and within its own technical limitations, when notified of it by the individual involved or their legal representative (i.e. a “notice and takedown” regime).

 

References

 

Magaldi, Jessica A. and Sales, Jonathan S. and Paul, John, ‘Revenge Porn: The Name Doesn't Do Nonconsensual Pornography Justice and the Remedies Don't Offer the Victims Enough Justice’ [January 29, 2020]. Oregon Law Review, Vol. 98, No. 1, 2020. Available at: https://ssrn.com/abstract=3527819

 

The Center for Internet and Society, ‘Revenge Porn Laws across the World’, available at: https://cis-india.org/internet-governance/blog/revenge-porn-laws-across-the-world

 

InternetLab, ‘How do countries fight the non-consensual dissemination of intimate images?’ [2018]. Available at: https://www.internetlab.org.br/en/inequalities-and-identities/how-do-countries-fight-the-non-consensual-dissemination-of-intimate-images/

80. Right to explanation

  1. The concept of a right to explanation refers to the informational duties owed by a data controller to a data subject in relation to automated decisions based on profiling. The basic provision underpinning the construct of the “right to explanation” is Article 22 of the General Data Protection Regulation (GDPR), which establishes a right for any data subject “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. This provision is the evolution of article 15 of the Data Protection Directive (DPD), in turn finding its historical root in the French law of 1978 “on computing, files and freedoms”, which provided a broader right not to be subject to any decision involving an appraisal of human behaviour based solely on the automated processing of data describing the profile or personality of the individual. The same law also granted the right to know and challenge the information and reasoning used in such processing in case the data subject opposed the results.
  1. While the scope of this right is much narrower in both art 15 DPD and art 22 GDPR, one can discern from the Directive’s Travaux Préparatoires the same concern for human dignity: specifically, that humans maintain the primary role in ‘constituting’ themselves instead of relying entirely on (possibly erroneous) mechanical determinations based on their “data shadow”. Arguably, that concern underlies art. 22 GDPR despite the more specific focus in its Travaux Préparatoires on the risks of decisions based on profiling, which is defined in art. 4 (4) as “any form of automated processing of personal data consisting of using those data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”. Accordingly, the explicit mention of profiling could be interpreted simply as illustrative of one of the possible risks involved in automated processing.
  1. Another important difference of art 22 GDPR from the text of art 15 DPD is the requirement of “suitable” safeguards in case of application of any derogations from the right established in art 22 (1), which are admitted only where provided by a law to which the controller is subject. “Suitable measures” to safeguard the data subject’s rights and freedoms and legitimate interests are also required when applying one of the exceptions to the rule laid down in art 22 (1), i.e. necessity for entering into a contract or performance thereof, and the data subject’s explicit consent – a novelty introduced by the GDPR. Detailing the application of these exceptions, but arguably also informing the application of derogations, article 22 (3) specifies that “suitable measures” consist at least of the right of the data subject “to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”.
  1. This is an aspect that has triggered significant discussion on the existence of a right to explanation, as the suitable safeguards listed in Recital 71 of the GDPR include the right “to obtain an explanation of the decision reached after such assessment”, but this wording is conspicuously absent from the text of art. 22 (3). Regardless of the qualification of the right provided by art 22 as one to “an explanation” (as we will call it here for the sake of simplicity), to “information” or to “legibility”, it is indisputable that the article seeks to put data subjects in the position to appreciate, at least to a certain degree, the logic of any algorithm relied upon to take measures which significantly affect them. Besides the obvious question of what is the requisite degree of transparency and granularity of an explanation, the provision leaves ambiguous two important issues: (1) whether art 22 implies a prohibition of processing personal data without fulfilling the relevant criteria, or rather a right for the data subject to actively object to such processing; and (2) the requisite degree of transparency and granularity for requested data, as well as, last but not least, the possibility for data controllers to condition access requests on the payment of a fee.
  1. Even admitting that consumers were able to use and keep perfect track of all the revealed data, they would still largely be unaware of predictions and decisions made on the basis of such data, at least in the absence of proactive measures taken by data controllers to that effect. EU data protection law specifically addresses this problem by establishing the right for individuals to obtain basic knowledge of the logic of any automated decision that produces a legal effect or otherwise significantly affects them, and a right not to be subjected to such decisions outside a narrow set of circumstances. Regrettably, the right to obtain such knowledge has historically been under-enforced, mainly due to the ambiguity of article 15 of the Data Protection Directive concerning its scope of application. However, there is reason to think that this situation will change in light of the GDPR, which details the minimum safeguards that should be offered when such decisions are made.

 

References

 

S. Wachter, B. Mittelstadt and L. Floridi, ‘Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation’ (2017) International Data Privacy Law.

 

L. Edwards and M. Veale, ‘Slave to the algorithm? Why a “right to an explanation” is probably not the remedy you are looking for’ 16 Duke Law and Technology Review (2017) 18; Bygrave, supra n 35.

 

B. Goodman and S. Flaxman, ‘European Union regulations on algorithmic decision-making and a “right to explanation”’, ICML Workshop on Human Interpretability in Machine Learning, preprint, arXiv:1606.08813 (v3) (2016); AI Magazine (2017).

 

Malgieri, Gianclaudio and Comandé, Giovanni, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (November 13, 2017). International Data Privacy Law, vol. 7, Issue 3, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3088976

 

A. Selbst and J. Powles, ‘Meaningful information and the right to explanation’, 7 (4) International Data Privacy Law, 233

 

I. Mendoza and L. A. Bygrave, ‘The Right not to be Subject to Automated Decisions based on Profiling’, in T. Synodinou, P. Jougleux, C. Markou, T. Prastitou (eds.), EU Internet Law: Regulation and Enforcement (Springer, 2017); University of Oslo Faculty of Law Research Paper No. 2017-. Available at SSRN: https://ssrn.com/abstract=2964855, p. 6.

 

Malgieri, Gianclaudio, Automated Decision-Making in the EU Member States: The Right to Explanation and Other 'Suitable Safeguards' for Algorithmic Decisions in the EU National Legislations (August 17, 2018). Computer Law & Security Review, 2019 Forthcoming, Available at SSRN: https://ssrn.com/abstract=3233611 or http://dx.doi.org/10.2139/ssrn.3233611

81. Right to be forgotten

  1. The right to be forgotten is a legal right that exists in some jurisdictions and that is closely related to the right to privacy and the protection of personal data. This right generally enables people to require the removal of information about them that is stored or otherwise processed by others. It can be argued that the right existed before the internet and the rise of online platforms, and that it could be read into pre-internet norms and case law. However, the right to be forgotten has gained particular prominence with the development of digital information technologies. The ability to store and search great amounts of information has shifted the ‘default of forgetting’ information towards a ‘default of remembering’ (Mayer-Schönberger 2009). This required a rethinking of how we deal with private information and personal data in computer-mediated interactions and spurred the development of the right to be forgotten.
  1. The right to be forgotten gained a lot of attention when it was read into the European Union Data Protection Directive – the predecessor of the General Data Protection Regulation (GDPR) – by the European Court of Justice in the Google Spain case (InfoCuria, 2014). In its decision, the Court acknowledged that people have the right to have search results linking to irrelevant or outdated information about them delisted. Not all references to such information need to be removed under the ruling, as the Court explained that the right to be forgotten does not apply in cases in which there is a public interest in accessing the information in question. That may be the case if the person in question plays a role in public life, for instance because that person is a politician or celebrity. The right to be forgotten thus requires that a balance be struck between a person’s right to privacy and other people’s rights to share and access information (Kulk and Zuiderveen Borgesius 2017). After the Google Spain case, search engines implemented forms that enable people to submit removal requests, which are then processed by the search engines. As of July 2020, Google had received almost 950,000 requests concerning a total of 3,700,000 URLs.
  1. The right to be forgotten (or ‘right to erasure’) is enshrined in the European Union by means of the GDPR. It applies broadly to any kind of processing of ‘personal data’ – not just by search engines – in cases where such processing is unlawful or no longer necessary (Ausloos 2020). The California Consumer Privacy Act of 2018 established a ‘right to deletion’ as well. In both laws, freedom of expression is recognized as an exception to these rights.

 

References

 

Case C-131/12, Google Spain and Google (InfoCuria 2014) <http://curia.europa.eu/juris/liste.jsf?num=C-131/12>

 

J. Ausloos, ‘The Right to Erasure in EU Data Protection Law’, [2020] Oxford University Press.

 

V. Mayer-Schönberger, ‘Delete: the virtue of forgetting in the digital age’, [2009] Princeton University Press.

S. Kulk & F.J. Zuiderveen Borgesius, Privacy, ‘Freedom of Expression, and the Right to Be Forgotten in Europe’, [2017] Cambridge Handbook of Consumer Privacy (eds. Jules Polonetsky, Omer Tene, and Evan Selinger), Cambridge University Press.

 

 

82. Safe harbor

  1. A safe harbor is an area of exemption from liability guaranteed by the legal system, a “comfort zone” that enables the pursuit of what would otherwise be considered legally risky activities. Safe harbors can be important not only to enable experimentation and innovation, but also, and particularly in the context of speech platforms, to allow the free flow of information.
  1. The scope, limits and conditions of safe harbors for digital intermediaries are, indeed, a crucial topic of Internet law and governance. There are several models with different characteristics, but a distinction should at least be made between vertical safe harbors, whose exemption applies only to a particular type of liability (for example, liability for copyright infringement, as in the US Digital Millennium Copyright Act, or DMCA), and horizontal safe harbors, which establish an exemption applicable across different types of liability. However, it is worth noting that very rarely is a safe harbor entirely horizontal, as there are typically several areas carved out of the exemption: for instance, the safe harbor established in section 230 of the US Communications Decency Act (CDA) explicitly excludes from its scope federal criminal law, intellectual property law, communications privacy law, sex trafficking law, and more generally U.S. state law. Similarly, the EU E-Commerce Directive (ECD) excludes from its scope taxation, data protection law, cartel law, the activities of notaries or equivalent professions to the extent that they involve a direct and specific connection with the exercise of public authority, the representation of a client and defence of his interests before the courts, and gambling activities which involve wagering a stake with monetary value in games of chance. In addition, the Directive explicitly preserves the ability of Member States to impose reasonable duties of care to detect and prevent certain types of illegal activities, and there are also differences in the type of knowledge of illegal activity that is sufficient to fall outside the safe harbor (actual knowledge with regard to criminal matters versus constructive knowledge with regard to civil matters).
  1. We can identify two different models regarding the conditions to be satisfied for the enjoyment of safe harbors for intermediary liability: one that applies broadly to a range of intermediaries, and another that identifies specific safe harbors for different types of hosts. For instance, the CDA safe harbor applies to any provider or user of an interactive computer service, defined as an information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions. Similarly, Act No. 137 of 2001 on the Limitation of Liability of Intermediaries in Japan covers providers of services whose purpose is to communicate third-party information to other parties, and the Indian IT Act defines an intermediary (which can benefit from the safe harbor) as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes”.
  1. By contrast, the ECD provides much more specific protections to communications conduits, content hosts, and caching services, while the US DMCA adds to these the categories of information location tools and non-profit educational institutions.
  1. Another crucial point is what conditions must be fulfilled in order for the intermediary to enjoy the liability exemption. Broadly, two different models can be distinguished: one that is premised on good faith, as in the CDA, or other forms of context-dependent knowledge; and another which is premised upon a qualified notice. Safe harbors can also combine the two: for instance, the US DMCA introduces a notice and takedown framework while also carving out from the liability exemption cases of both actual and constructive (and thus more context-dependent) knowledge of illegality. Similarly, the Finnish, Chilean and Brazilian systems are based on judicial notice but contain exceptions for situations where constructive knowledge is admitted.

 

83. Self injury

  1. Self injury or self harm is often described as the act of purposely hurting oneself physically or psychologically as an emotional coping mechanism (Skegg, 2005; Daine et al., 2013). These behaviours are not only related to suicide: the definition also covers the practice of violent activities and challenges posing a high risk to life and self-harm, as well as eating disorders and self-deprecation.
  1. Currently, cyberbullying represents a great source of suffering, and many people live with this type of violence because they are afraid, think that no one will help, or fear that talking about it can make the situation worse (Lindert, 2017). In addition, prejudicial beliefs about mental health are quite common, which not only increases stigma and taboo but also hinders the search for help in situations of intense suffering (Scavacini, 2019).
  1. The concern with self injury and the internet is not new. A survey published in 2006 in the scientific journal of the American Academy of Pediatrics already pointed out the spread of this practice on internet forums (Whitlock et al., 2006). In Brazil, a survey published in 2019 showed that about 15% of internet users aged 11 to 17 had accessed content on suicide or self-harm (Cetic.br, 2019).
  1. One attempt to address this problem, despite not being proven effective, has been to provide through criminal law for specific penalties for those who, via the internet (through social media or live broadcasts), encourage someone to commit suicide or self-harm, or who provide material assistance to do so.
  1. More recently, livestreamed suicides have revived discussions about the alleged lack of control over what is posted on online platforms (Ribeiro, 2020). In this context, several platforms have consolidated strategies for dealing with self injury content. In addition to user support and artificial intelligence filters, the platforms have teams focused on collaborating with local authorities.
  1. Finally, human moderation of this type of content is an unhealthy job (Newton, 2019; Rocha, 2019), and the growing number of reports about workers who have developed post-traumatic stress disorder as a result of this activity should not go unnoticed (Cardoso, 2019).

 

References

 

Cardoso, Paula (2019). Precariado algorítmico: o trabalho humano fantasma nas maquinarias da inteligência artificial. Media Lab UFRJ. Available at: <http://medialabufrj.net/blog/2019/09/dobras-38-precariado-algoritmico-o-trabalho-humano-fantasma-nas-maquinarias-da-inteligencia-artificial/>

 

Centro de Estudos sobre as Tecnologias da Informação e da Comunicação (CETIC.br) (2020). Pesquisa sobre o uso da internet por crianças e adolescentes no Brasil: TIC Kids Online Brasil. Available at: <https://cetic.br/media/analises/tic_kids_online_brasil_2019_coletiva_imprensa.pdf>

 

Daine, K., Hawton, K., Singaravelu, V., Stewart, A., Simkin, S., & Montgomery, P. (2013). The Power of the Web: A Systematic Review of Studies of the Influence of the Internet on Self-Harm and Suicide in Young People. PLoS ONE, 8(10), e77555. doi:10.1371/journal.pone.0077555

 

Facebook. ‘Community Standards’. Available at: <https://www.facebook.com/communitystandards/suicide_self_injury_violence>

 

Instagram. ‘Self-Injury’. Available at: <https://help.instagram.com/553490068054878>

 

Lindert, Jutta (2017). Cyber-bullying and its impact on mental health. European Journal of Public Health, Volume 27, Issue suppl_3, November, ckx187.581. Available at: <https://doi.org/10.1093/eurpub/ckx187.581>

 

Newton, Casey (2019). The Trauma Floor: The secret lives of Facebook moderators in America. The Verge. Available at: <https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona>

 

Ribeiro, Paulo Victor (2020). After Livestreamed Suicide, TikTok Waited to Call Police. The Intercept. Available at: <https://theintercept.com/2020/02/06/tiktok-suicide-brazil/>

 

Rocha, Camilo (2019). O trabalho humano escondido atrás da inteligência artificial. Nexo. Available at: <https://www.nexojornal.com.br/expresso/2019/06/18/O-trabalho-humano-escondido-atr%C3%A1s-da-intelig%C3%AAncia-artificial>

 

Scavacini, Karen (2019). Prevenção do suicídio na internet: pais e educadores. Available at: <http://www.safernet.org.br/site/themes/sn/sid2017/resources/AF_cartilha_Pais.pdf>

 

Skegg, Keren (2005). Self-harm. The Lancet, 366(9495), 1471–1483. doi:10.1016/s0140-6736(05)67600-3

 

Twitter. ‘Suicide and Self-harm Policy’. Available at: <https://help.twitter.com/en/rules-and-policies/glorifying-self-harm>

 

Whitlock, Janis; Eckenrode, John & Silverman, Daniel (2006). Self-injurious Behaviors in a College Population. Available at: <doi:10.1542/peds.2005-2543>

84. Self-preferencing

  1. Self-preferencing is the practice of giving preferential treatment to one’s own complementary products or services when they are in competition with products and services provided by other entities using the platform (Crémer 2019). The term rose to prominence after the Google Shopping case (2017) where the European Commission accused Google of self-preferencing its own shopping results over those of other competing shopping comparison services. However, self-preferencing is thought to encompass various practices, many of which are not new, such as refusal to deal or tying.
  1. The central concern around self-preferencing is that a dominant platform will leverage its power in the platform market either to expand its power in neighboring markets or to protect its dominant position in the home platform market (Graef 2019). The former is more common when the platform is vertically integrated and wants to establish itself or protect its position in the neighboring market, whereas the latter consideration occurs when the platform wants to protect itself from competitive entry or expansion in the home market.
  1. Self-preferencing can manifest itself in different ways, some well-known and some newer and less well-understood. Among the traditional practices that can result in non-affiliated products and services being in a disadvantageous position compared to the platform’s own products or services (or those affiliated with it) are refusal to supply, tying, abusive discrimination, and margin squeeze (Ahlborn 2020). For these practices there are well-developed legal standards.
  1. Newer practices have included the manipulation of the ranking of affiliated products and services compared to those of non-affiliated competitors, and the use of data from competitors who rely on the platform to improve the platform’s own products and services. The proper tests for when such practices should be considered problematic have yet to be developed. Among the relevant parameters that can be considered are whether the platform is indispensable to the non-affiliated products and services, whether the platform is dominant, the rationale and design of the self-preferencing, the extent of the negative effects of the self-preferencing, and whether it has any pro-competitive justifications (Ahlborn 2020, Zingales 2019).

 

References

 

Ahlborn, Christian, Will Leslie & Eoin O’Reilly. “Self-Preferencing: Between a Rock and a Hard Place.” CPI Antitrust Chronicle (June 2020).

 

Crémer, Jacques, Yves-Alexandre de Montjoye & Heike Schweitzer. “Competition Policy for the Digital Era.” Final Report for the European Commission (2019).

 

Case AT.39740, Google Search (Shopping), 27 June 2017.

 

Graef, Inge. “Differentiated Treatment in Platform-to-Business Relations: EU Competition Law and Economic Dependence.” Yearbook of European Law 38 (2019): 448-499.

 

Zingales, Nicolo. “Google Shopping: Beware of ‘Self-favouring’ in a World of Algorithmic Nudging.” CPI Europe (February 2018).

 

85. Self-regulation

  1. Self-regulation refers to rules that are autonomously developed and implemented by the persons concerned, independently from any structured form of rule-making. The legitimacy of self-regulation is based on the fact that private incentives lead to a need-driven rule-setting process. Self-regulation is responsive to changes in the environment and can establish rules without regard to the territoriality principle. It is justified (i) if its application leads to a higher efficiency than that provided by governmental law and (ii) if compliance with the community rules is less likely than compliance with private rules (Weber 2014: 23; Gibbons 1997: 509; Black 1996: 32 seq.).
  1. Depending on the given circumstances, the persons concerned can belong to a single market level or to different market levels. Guidelines imposing specific obligations on Internet Service Providers (ISPs), for example in respect of notice and takedown, only influence this professional category. In the context of some types of speech, user groups of a technology and platform “owners” might develop different soft law regimes; user groups could agree on certain expressions to be avoided, and platform “owners” on control measures regarding uploaded content, thereby replacing governmental regulation (see also the contributions in Belli and Zingales, 2017: 41 et seq).
  1. A universally accepted theory as to the “legal quality” of self-regulation has not (yet) been formulated. Since self-regulation is not enforceable through public action, such rules do not have the quality of law in the traditional sense (Weber 2002: 81-83; Guzman and Meyer 2010: 179-183; Abbott and Snidal 2000: 429). The often-used terms “contract” and “social contract” also do not fit. Self-regulation can be understood as a social control model or as a gentlemen’s agreement (Gibbons 1997: 519 et seq.). But compliance with its rules is usually more than only an ethical undertaking, even in the absence of direct sanctions. Self-regulatory provisions correspond to standards that reflect the common-sense behaviour expected to be observed by the persons concerned, i.e. – in overcoming the dichotomy with “hard law” – they represent due diligence and good faith standards that are legally enforceable (Weber 2014: 25).
  1. The strengths of self-regulation encompass the following elements, which are also relevant for platforms (Weber 2002: 83/84 with further references): (i) The rules created by the participants of a specific community are efficient, since they respond to real needs and mirror the technology. (ii) Meaningful self-regulation provides the opportunity to adapt the regulatory framework to changing technology. (iii) Self-regulation can usually be implemented at reduced cost. (iv) Due to the private initiatives, the chances are high that the rules contain incentives for compliance. (v) Effective self-regulation induces the persons concerned to be open to a permanent consultation process related to the development and implementation of the rules (i.e. a mechanism that helps to accurately reflect real needs).
  1. The weaknesses of self-regulation include the following aspects (Weber 2002: 84/85; Brown and Marsden 2013: 2; Tambini, Leonardi and Marsden 2008: 269-282): (i) Self-regulation is not generally binding in legal terms; the provisions are only applicable to those persons that have accepted the regulatory regime. (ii) Self-regulation tends to be based on a case-driven approach rather than general rules. (iii) If the number of “outsiders” or “black sheep” not acknowledging the self-regulation is large, the legitimacy of the respective rules becomes doubtful. Conversely, “outsiders” not involved in the preparation and implementation of the rules can benefit from them free of charge (“free riders”). (iv) Self-regulatory mechanisms are not always stable and can depend on the concerned (market) segment; consequently, the applicable standards risk remaining at a low level. (v) The main problem of self-regulation lies in the lack of enforcement proceedings; non-compliance with private rules does not necessarily lead to sanctions.
  1. In order to overcome the described weaknesses of self-regulation, new models have been developed in different forms of co-operative rule-making or co-regulation (see no. 15). In reality, self-regulation already exists in many fields, traditionally in the media and Internet environment as well as in the banking markets; however, the regulation of platforms can also be suitably done by way of private rule-making. In this field, self-regulation is usually based on Codes of Conduct and Terms of Use (mostly drafted by the providers of the platforms).

 

References

 

Abbott Kenneth W. and Duncan Snidal (2000). ‘Hard and Soft Law in International Governance’, in International Organization 54 (2000), pp. 421-456. Available at https://www.cambridge.org/core/journals/international-organization

 

Belli Luca and Nicolo Zingales (eds.) (2017), ‘Platform Regulations. How Platforms are Regulated and How They Regulate Us’, Geneva 2017

 

Black Julia (1996). ‘Constitutionalizing Self-Regulation’, The Modern Law Review 59 (1996), pp. 24-55. Available at https://www.modernlawreview.co.uk

 

Brown Ian and Christopher T. Marsden (2013). ‘Regulating Code: Good Governance and Better Regulation in the Information Age’, Cambridge MA/London 2013

 

Gibbons Llewellyn J. (1997). ‘No Regulation, Government Regulation, or Self-Regulation: Social Enforcement or Social Contracting for Governance in Cyberspace’, Cornell Journal of Law and Public Policy 6 (1997), pp. 475-551. Available at http://scholarship.law.cornell.edu/cjlpp

 

Guzman Andrew T. and Timothy L. Meyer (2010). ‘International Soft Law’, Journal of Legal Analysis 2 (2010), pp. 171-225. Available at https://academic.oup.com/jla

 

Tambini Damian, Danilo Leonardi and Chris Marsden (2008). ‘Codifying Cyberspace: Communications self-regulation in the age of Internet convergence’, London 2008

 

Weber Rolf H. (2002). ‘Regulatory Models for the Online World’, Zurich 2002

 

Weber Rolf H. (2012), ‘Overcoming the Hard Law/Soft Law Dichotomy in Times of (Financial) Crisis’, Journal of Governance and Regulation 1 (2012), pp. 8-14. Available at https://virtusinterpress.org/-Journal-of-Governance-and-Regulation-15-.html

 

Weber Rolf H. (2014), ‘Realizing a New Global Cyberspace Framework’, Zurich 2014

86. Shadowban / Shadow banning

  1. A shadowban refers to a relatively common moderation practice of lowering a user's visibility, content or ability to interact without their knowledge, so that they can continue to use the platform normally while their content is not visible to anyone else. It can refer to user-driven or platform-driven blocking, and its meaning has expanded to include visibility in algorithmically determined platform features. Shadowbanning is often a response to toxic or undesirable interaction related to a specific user. Its existence is typically inferred from the real or perceived visibility of an account.
  1. Shadowbans can be implemented by individuals or the tech platform itself.
  1. Some tech platforms allow individual users to restrict the visibility and engagement of other users with one’s own account, often as an anti-harassment tool. Instagram and Twitter allow users to block other users without them knowing. In 2019, Instagram launched a “Restrict” feature (Grothaus, 2019) that enables its users to limit individuals' interactions with their account without those individuals being notified or made aware.
  1. Platform-instigated shadowbans reduce the visibility of content from the affected user, both in terms of the posts themselves and through algorithmic restrictions on amplification. For example, a shadowbanned account may no longer appear in algorithmically determined content recommendations, in search terms or autofill recommendations, or in lists of suggested accounts. Shadowbans can reduce search visibility, prevent an account from being indexed, or make an account's content visible only to that specific user.
  1. Users who are shadowbanned can continue using their accounts as usual. Accounts that are shadowbanned typically appear to be functioning normally to the affected user, but their content is undiscoverable and they may be unable to engage in certain ways. For example, on Reddit the votes of shadowbanned accounts do not count, while on Twitter their engagement may be visible only to themselves and not to other account holders or the public.
  1. The concept of restricting content from a user without their awareness has a long history in internet culture as an approach for reducing or deterring undesirable content or communication, such as spam, trolling, or harassment. The approach has been used as a moderation technique for as long as there have been public discussion spaces, though the term did not come into use until the mid-2000s. Reddit (Reddit, 2015) used shadow banning until 2015 to “punish” (Shu, 2015) users who broke the rules and to deter spam. In 2016, the right-wing media outlet Breitbart published what it called an exposé claiming that Twitter used shadowbanning to silence conservative voices (MILO, 2016), and later made a similar claim about Facebook (Bokhari, 2018). A 2018 Vice article (Thompson, 2018), which was later discredited (Stack, 2018), claimed that Republican figures had been intentionally removed from Twitter's auto-populated drop-down search menu. The term shadowban entered the popular lexicon as a highly politicized term when U.S. President Trump used it to amplify unfounded claims that social media firms were shadowbanning conservative users and threatened (Molina, 2018) to investigate social media companies.

 

References

 

Michael Grothaus, ‘Here’s how to shadow ban your Instagram bullies with the new ‘Restrict’ feature’ (Fast Company 2019). Available at: <https://www.fastcompany.com/90412178/instagram-rolls-out-restrict-feature-allowing-users-to-shadow-ban-bullies>

 

Reddit, ‘On shadowbans.’ (Reddit 2015) <https://www.reddit.com/r/self/comments/3ey0fv/on_shadowbans/>

 

Catherine Shu, ‘Reddit Replaces Its Confusing Shadowban System With Account Suspensions’ (TechCrunch 2015) <https://techcrunch.com/2015/11/11/reddit-account-suspensions/>

 

MILO, ‘EXCLUSIVE: Twitter Shadowbanning ‘Real And Happening Every Day’ Says Inside Source’ (Breitbart 2016) <https://www.breitbart.com/tech/2016/02/16/exclusive-twitter-shadowbanning-is-real-say-inside-sources/>

 

Allum Bokhari, ‘Facebook Admits to Shadowbanning News It Considers ‘Fake’’ (Breitbart 2018) <https://www.breitbart.com/tech/2018/07/13/facebook-admits-to-shadowbanning-news-it-considers-fake/>

 

Alex Thompson, ‘Twitter appears to have fixed “shadow ban” of prominent Republicans like the RNC chair and Trump Jr.’s spokesman’ (Vice 2018) <https://www.vice.com/en/article/43paqq/twitter-is-shadow-banning-prominent-republicans-like-the-rnc-chair-and-trump-jrs-spokesman>

 

Liam Stack, ‘What Is a ‘Shadow Ban,’ and Is Twitter Doing It to Republican Accounts?’ (The New York Times 2018) <https://www.nytimes.com/2018/07/26/us/politics/twitter-shadowbanning.html>

 

Brett Molina, ‘Shadow banning: What is it, and why is Trump talking about it on Twitter?’ (USA TODAY 2018) <https://www.usatoday.com/story/tech/nation-now/2018/07/26/shadow-banning-what-and-why-trump-talking-twitter/842368002/>

 

 

88. Social network

  1. A social network is a platform-based service that enables users to share and exchange information with other individuals also using the network. Its purpose can be the sharing and exchange of political, commercial or personal information, or a mix of these. Each user has a profile, which can be added to other users' personal networks.
  1. A social network has features enabling users to exchange information with each other publicly or semi-publicly.
  1. “Public” means that information sharing is unrestrained.
  1. “Semi-public” means that a user is able to disclose (private) information at a single time to a large set of users that have been authorized to be part of the user's network.
  1. “Purely private” exchanges of information, which allow individuals to share and exchange information through direct messages or mails (even in groups) and which require knowledge of a private identifier of an individual, typically fall outside the scope of the definition of a social network (examples of identifiers not publicly shared: e-mail address, phone number, private pseudonym; this contrasts with a public pseudonym or an official first name and surname).
  1. Within social networks, the platforms that enable the publishing of content and information through multiple sources and devices are often called “social media”.

 

89. Spam

  1. The concept of spam refers to any kind of unsolicited digital communication that is usually dispatched in bulk. Hence, spam is any message sent to multiple recipients who did not ask for it.
  1. As pointed out by the Internet Society (2017), the main problems caused by spam are due to the combination of the unsolicited and bulk aspects; the quantity of unwanted messages swamps messaging systems and drowns out the messages that recipients do want.
  1. The first example of spam was sent via the ARPANET in 1978. It was an advertisement for a new model of computer from Digital Equipment Corporation, and it set a strong precedent, as the communication succeeded in increasing sales of the advertised equipment.
  1. Some legislation, such as EU Directive 2000/31/EC on electronic commerce, utilises the concept of “unsolicited commercial communications”, which covers most sorts of 'spam' but leaves out all politically motivated unsolicited communications.
  1. Importantly, the use of “commercial communications” is intended to cover not only traditional SMTP-based 'e-mail' but also SMS and any form of electronic message for which the simultaneous participation of the sender and the recipient is not required.

 

References

 

Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce')

 

Internet Society. ‘What is spam?’ [2018]. Available at: https://www.internetsociety.org/wp-content/uploads/2017/08/What20Is20Spam_0.pdf

90. Technological protection measures

  1. Due to the easy duplicability of information transmitted in digital form, copyright holders regularly resort to technological protection measures (TPMs), such as encryption-based paywalls, to prevent acts which are not authorized by the holder of any copyright or any right related to copyright. The legal system reinforces this type of protection by explicitly outlawing not only acts of circumvention of TPMs, but also the manufacturing and sale of devices which have the primary purpose or effect of enabling such circumvention. In the EU, in particular, Member States are required under the EU Copyright Directive to provide adequate legal protection against the knowing circumvention of any “effective technological measures”, meaning any technology, device or component that, in the normal course of its operation, is designed to prevent or restrict acts, in respect of works or other subject-matter, which are not authorised by the rightholder. According to the Directive, technological measures shall be deemed "effective" where the use of a protected work or other subject-matter is controlled by the rightholders through application of an access control or protection process, such as encryption, scrambling or other transformation of the work or other subject-matter, or a copy control mechanism, which achieves the protection objective. Furthermore, adequate legal protection is required against the manufacture or sale of any device which (a) has the purpose of circumventing a TPM; (b) has only a limited commercially significant purpose other than circumvention; or (c) is primarily designed to enable circumvention.
  1. There are two critical aspects of this type of legal protection. First, although it is intrinsically related to copyright, it is also independent from it: specifically, any unlawful circumvention constitutes a breach regardless of whether it led to an actual copyright infringement. Second, it may interfere with the ability of users of copyrighted works to benefit from exceptions and limitations that are specifically provided in copyright legislation. In principle, the continued application of these exceptions and limitations is guaranteed through article 6 (4) of the Directive, which requires Member States to provide adequate measures to that end. However, Member States are only required to act in the absence of voluntary measures taken by rightholders, including the conclusion and implementation of agreements between rightholders and other parties.

 

References

 

Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society. OJ L 167 , 22/06/2001 p. 10

 

U. Gasser, ‘Legal Frameworks and Technological Protection of Digital Content: Moving Forward Towards a Best Practice Model’, [2006] 17 Fordham Intellectual Property & Media Law Journal 39, 60.

 

U. Gasser & M. Girsberger, ‘Transposing the Copyright Directive: Legal Protection of Technological Measures in EU-Member States, A Genie Stuck in a Bottle?’ Available at http://cyber.law.harvard.edu/media/files/eucd.pdf

 

Zingales, Nicolo, ‘Of Coffee Pods, Videogames, and Missed Interoperability: Reflections for EU Governance of the Internet of Things’ [December 1, 2015]. TILEC Discussion Paper No. 2015-026. Available at SSRN: https://ssrn.com/abstract=2707570 or http://dx.doi.org/10.2139/ssrn.2707570

 

91. Terms of service

  1. Internet intermediaries in general, and platforms in particular, typically regulate the services they provide through standard contracts, commonly referred to as terms of service or terms of use. Terms of Service are standardized contracts, defined unilaterally and offered indiscriminately on equal terms to any user. Since users do not have the choice to negotiate, but can only accept or reject these terms, Terms of Service belong to the legal category of adhesion agreements or “boilerplate contracts”.
  1. The main feature of standard contracts is that the text is not the product of a negotiation (Prausnitz, 1937). On the contrary, the conditions are pre-determined by, and express the one-sided control of, a single party. Over the past few years, this type of contract has become the object of numerous critiques, targeting the unilateral definition of the provisions, the almost complete absence of negotiation between the parties, and the near-inexistent bargaining power of the party that is required to adhere to the terms. (Radin, 2012; Kim, 2013; Belli & Sappa, 2017)
  1. Internet users’ mere adherence to the terms imposed by the intermediaries gives rise to a situation where consumers mechanically ‘assent’ to pre-established contractual regulation. Furthermore, terms of service usually foresee that the intermediaries may continue to modify the conditions of the contractual agreement unilaterally. Hence, except for the possibility to “take it or leave it”, users have no meaningful say about the contractual regulation they are forced to abide by. This context of “contractual authoritarianism” (Ghosh, 2014) is further exacerbated in the Internet environment. Besides having the power to unilaterally dictate the terms of use, intermediaries also enjoy the capability to unilaterally implement, through technical means, the private ordering crafted by the contractual provisions. (Belli & Venturini 2016; Belli & Zingales 2017)
  1. As noted by the Recommendations on Terms of Service and Human Rights developed by the IGF Coalition on Platform Responsibility, the concept of “terms of service” utilised here covers not only the contractual document available under the traditional heading of “terms of service” or “terms of use”, but also any other platform’s policy document (e.g. privacy policy, community guidelines, etc.) that is linked or referred to therein.

 

References

 

Belli L, & Venturini J. ‘Private ordering and the rise of terms of service as cyber-regulation’, [2016]. Internet Policy Review. Vol 5. N° 4. Available at: https://policyreview.info/articles/analysis/private- ordering-and-rise-terms-service-cyber-regulation

 

Belli L & Sappa C. ‘The Intermediary Conundrum: Cyber-regulators, Cyber-police or both?’, [2017] JIPITEC. Available at: http://www.jipitec.eu/issues/jipitec-8-3-2017/4620

 

Belli L & Zingales N, ‘Platform regulations: how platforms are regulated and how they regulate us’. [2017] FGV: Rio de Janeiro.

 

Prausnitz, O. ‘The standardization of commercial contracts in English and continental law’, [1937] Sweet & Maxwell, London.

 

Radin, M.J. ‘Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law’, [2012] Princeton University Press

 

Kim, N.S. ‘Wrap Contracts: Foundations and Ramifications’, [2013] Oxford University Press.

 

Ghosh S. ‘Against Contractual Authoritarianism’, [2014] Southwestern Law School Review. Vol 44.

 

 

92. Terrorist content

  1. The concept of cyberterrorism initially referred to virtual attacks on infrastructure aimed at disrupting a nation or region in the same way that traditional terrorist attacks do, with the same visibility and clear intent of spreading fear. Cyberterrorism now includes the use of social media by terrorist organizations, a phenomenon that researchers