Digital policy and human rights frameworks: What is the relationship between digital policy and development and the established international frameworks for civil and political rights, as set out in the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights, and the further interpretation of these in the online context provided by various resolutions of the Human Rights Council? How do policymakers and other stakeholders effectively connect these global instruments and interpretations to national contexts? What is the role of different local, national, regional, and international stakeholders in achieving digital inclusion that meets the requirements of users in all communities?
Promoting equitable development and preventing harm: How can we make use of digital technologies to promote more equitable and peaceful societies that are inclusive, resilient and sustainable? How can we make sure that digital technologies are not developed and used for harmful purposes? What values and norms should guide the development and use of technologies to enable this?
Round Table - U-shape - 60 Min
The regulation of Artificial Intelligence (AI) systems has been intensively discussed over the last decade. In particular, concerns about the threats that certain AI applications may pose to people’s privacy, autonomy, and welfare have been raised and addressed by practitioners, civil society, and policy-makers. To a large extent, such concerns have shaped the new regulatory framework on AI proposed by the European Commission. However, despite incorporating sound ethical principles, the proposal faces many challenges regarding international law, applicability, and adoption among AI practitioners. This panel offers an overview of the new proposal for a “Regulation laying down harmonised rules on artificial intelligence” and its potential opportunities and challenges under international law. In particular, the panel will focus on the implications of including deceptive/subliminal techniques and remote biometric identification systems as prohibited categories within the new EU proposal, and on the socio-technical and ethical implications surrounding them. Furthermore, the panel will assess the implications of this new legal framework on AI within the international (human rights) legal framework. The topic represents an initiative to study a very recent regulation that attempts to limit a new technology created by humans but performed by non-humans (robots). The proposed research questions are the following: 1. What is the novel approach of the EU regulation on artificial intelligence systems? 2. What are the practical and socio-technical implications of including subliminal techniques and remote biometric identification systems as prohibited categories within the EU regulation proposal on AI? 3. What are the main opportunities and challenges of regulating these subliminal techniques and remote biometric identification systems within the rules of international law? 4.
What are the main opportunities and challenges for strengthening human rights law and its protection of fundamental rights? What is the impact on human rights? How can meaningful, timely, and transparent multi-stakeholder participation in assessing human rights impacts be ensured? AI systems may break the rules for creating and regulating human activity as we know it. Therefore, any potential regulation on the matter, particularly when it contains provisions that may harm humans, must be analysed from a regional and international point of view.
This project is a 60-minute panel group session. The onsite moderator will open the session with a five-minute introduction of the subject. After the introduction, each speaker will give a ten-minute presentation drawing on his/her own experience and area of work. At the end of the presentations, there will be 15 minutes for questions from the audience (onsite and online). We expect the speakers to present concrete case studies that address the most controversial aspects of the proposed European regulation on AI. For online participation, we plan to use the official virtual platform of the IGF to conduct part of the panel debate online. Additionally, we expect to use the social networks that stream the IGF live, most commonly Facebook, YouTube, and Twitter. The online moderator in charge of this panel has previous experience collecting questions arriving simultaneously from different platforms and unifying them so that they can be directed to the appropriate speaker. Moreover, the organizers intend for the speakers to answer all (or the majority) of the questions from the virtual audience. Prior to the IGF, the organizers will publicize this panel (to be watched on these different platforms) within each of the organizations where they work or are involved.
Fletcher School-Tufts University / ISP Yale Law School
Organizer: Patricia Vargas-León, The Fletcher School-Tufts University/ISP Yale Law School, Academia/Civil Society, GRULAC
Organizer: Nicolás Díaz Ferreyra, User-Centred Social Media Research Training Group, University of Duisburg-Essen, Academia/Civil Society, WEOG
Onsite Moderator: Imane Bello, Sciences Po Paris, Lawyer/Civil Society, WEOG
Online Moderator: Monica Trochez, Nucleo TIC, Technical Community, GRULAC
Rapporteur: Manuel Zambrano Aquino, MGM Corporate Resources, Technical Community, GRULAC
- Patricia Vargas-León, The Fletcher School, Tufts University / ISP Yale Law School, Academia/Civil Society, GRULAC
- Nicolás Díaz Ferreyra, User-Centred Social Media Research Training Group University of Duisburg-Essen, Academia/Civil Society, WEOG
- Cornelia Kutterer, Rule of Law and Responsible Tech, EU Government Affairs, Microsoft, WEOG
- Daniel Leufer, Access Now, WEOG
Manuel Zambrano Aquino
Targets: As with any human creation, the outcome of using AI technologies may have a positive or negative impact on human life. AI has been both questioned and defended by multiple academics, policymakers, and activists. The reasons include the challenge posed to the current rules of the market, the burden of bias, the potential use of AI as a deceptive technique, the discrimination factor, and even the possibility of replacing human judges in the court system. Hence, considering all these controversial AI applications and the consequences they may have in the developed and developing world, potential regulations, and even suggestions for regulation, should be the subject of an extensive research agenda. Moreover, the current difference between the developed and the developing world is that the latter is testing initiatives to implement AI programs, while the former is testing regulations for the technology. Overall, the technology is already in place, even though companies use it without any regulation.
A global center that monitors the use of AI is needed, just as the internet has international regulatory bodies. This global center should establish clear human rights guidelines.
The technical community must be trained in human rights so that these rights are considered in the development of AI-based systems.
A limitation of the risk-based approach adopted by the new European Regulation on AI is that some risks cannot be mitigated. Therefore, any regulation on AI should allow for bans. Additionally, there is a risk that some systems that pose threats are left out; if the mechanism for including new systems is slow, this could be a problem.
The rollback effects of AI systems pose a high risk that the community is not yet ready to deal with.
Engineers face a great challenge when developing this technology, since they lack a regulation that serves as a guiding instrument for assessing the risks created by the products they design.
In terms of human responsibility, the international human rights framework could determine the responsibilities of the actors involved in the algorithmic life cycle and thus define harm precisely.
International law is difficult to modify because it is based on the model of nation-states. One of the main problems is establishing the limits of human control over autonomous systems. Another point to consider is the creation of an entity responsible for artificial intelligence, because the entities recognized by international law are human beings, nation-states, and international organizations. Under the terms of international law, there is no conceptualization of what an AI entity is. Academics agree that AI entities need to be examined by national bodies. The dichotomy between decision-support systems and decision-making systems persists, and it is difficult to identify who is accountable when autonomous systems support human decisions.
There is a risk-based approach to systems implemented with AI. Not only are there prohibited systems, but there is also an issue of liability and damages. Facial recognition is under question due to privacy concerns.
In the United States, NIST is developing a proposal oriented toward analyzing variables similar to those contained in the new European regulation, but this potential regulatory framework is not yet ready.
The proposed EU AI regulation has a title constructed around the prohibitions. In many of the legislative initiatives worldwide, there are no express prohibitions. Today, we still do not understand the impact, the full scope of technologies like AI, and the red lines that should not be crossed. Some risks cannot be mitigated.
The problem is that no one has developed a framework to assess the risks created by systems that surround essential aspects of our lives.
Risk assessment has become the most appropriate tool for confronting new technologies as they enter the digital field. Nevertheless, we have been dealing with these technologies for only a short time.
The Council of Europe has mapped 506 regulatory/governance initiatives related to AI governance, and even that mapping is not exhaustive.
Transparency must be considered within a technical context: transparency about what we are using and about the things we can clearly describe in human language. Transparency is also the basis for any legitimate and healthy use of data.
We are trying to find a more global and meaningful way to assess risks, one that makes the technology usable. A robust and universal framework can regulate, manage, or govern AI within a given jurisdiction. This is the lesson of the European regulatory framework.
How do we situate the impact that AI could have in terms of jurisdiction?
Among the main conclusions of this panel, we can mention the following:
- A big issue is the lack of training on the ethics of AI, the basic principles of human rights, and risk evaluation.
- The technical community should be trained in human rights so that these rights are considered in the development of AI-based systems.
- The risks involved in applying AI should be mitigated. AI regulation should include bans.
- Creating a global center that monitors the use of AI is recommended. This global center would create clear human rights guidelines to follow.
- A geopolitical consensus is needed for the governance of AI; global standards like the GDPR are necessary. Then other jurisdictions could impose similar bans.
- It is possible to have a robust universal framework to regulate, manage, or govern AI within a given jurisdiction and to help assess risks for users.
- A broad definition of AI systems is necessary to adopt a risk-based approach. Risks need deeper evaluation to avoid harmful impacts in specific use cases, such as migration and access to education, among others. What is important is the impact that AI systems can have on fundamental rights.