New Technologies and Risks to Online Security
Round Table - 60 Min
Increasingly, proposals across jurisdictions are pushing for content-scanning or detection mechanisms in end-to-end encrypted environments. Many of these proposals are premised on the claim that artificial intelligence and machine learning tools can be used to scan content "without breaking encryption". This claim is contentious and has received pushback from security experts, who hold that AI and ML tools have serious limitations, are not a silver bullet for the difficult questions around content moderation on encrypted platforms, and would in fact undermine encryption, an essential tool for online privacy and security. However, these conversations are at a relatively nascent stage, presenting an opportunity to bridge the gap between AI/emerging-tech discussions and encryption debates, and ultimately to contribute to policy development with a more nuanced understanding of what is at stake. This panel aims to build on this opportunity by bringing together experts on encryption and experts on AI, and creating a platform for dialogue. The issues for discussion will include:
-- The viability of content-scanning/detection proposals that rely, implicitly or explicitly, on AI/ML tools for content moderation on end-to-end encrypted platforms
-- The inherent limitations of AI/ML tools in assessing edge cases and nuance when detecting illegal or unpermitted content, as well as the fragility of such tools and their susceptibility to adversarial attacks
-- The impact of such proposals on end-to-end encryption
-- The rights and freedoms that would be affected by government or private-sector policy decisions in this area
-- A realistic and holistic assessment: questioning the purported promises of emerging tech; could over-reliance on a future technology jeopardise existing, reliable technology like end-to-end encryption?
-- How policy dialogues on these subjects can be approached so that the false binary of "privacy v. safety" is not perpetuated, and so that a trust-based framework that prioritises both equally can be developed
We will ensure that at least 25 minutes of the session are reserved for open discussion and Q&A. The online moderator will ensure that comments and questions from virtual participants are included in the conversation. We will also aim to facilitate diverse participation.
Dr. Sarah Myers West, AI Now; Udbhav Tiwari, Mozilla; Riana Pfefferkorn, Stanford Internet Observatory; Eliska Pirkova, Access Now
Namrata Maheshwari, Access Now
Daniel Leufer, Access Now
Akhil Thomas, Access Now
Targets: "9.1 Develop quality, reliable, sustainable and resilient infrastructure, including regional and transborder infrastructure, to support economic development and human well-being, with a focus on affordable and equitable access for all" Encryption is a critical component of a resilient cybersecurity infrastructure that benefits individuals, the economy, institutions, and governments. Proposals to use machine learning and AI to moderate content on end-to-end encrypted platforms would undermine encryption and therefore risk weakening our overall cybersecurity infrastructure.
There is agreement that encryption is a crucial tool and that AI has its pitfalls, but disagreement continues over whether it is possible to scan content without breaking encryption. Privacy and security experts, platforms, and others believe this is not possible and that attempting it would increase vulnerability online; child safety groups and others believe it can be done. AI is not a solution. More engagement on this intersection is required.
Robust, multi-stakeholder engagement on the role of encryption online is necessary, given the far-reaching consequences of content-scanning proposals for the internet landscape and fundamental rights, so that consensus can be reached in a way that ensures both privacy and safety, which are mutually reinforcing. Equally, the accelerating deployment of AI for various purposes needs to be interrogated, with a push for adequate impact assessments.