What policy sector(s) does this fall under? (leave blank if not sure):
Overarching governance issues
Issue and Recommendation:
Background:
Health is considered an important field for the use of AI, but it has also stirred many human rights debates. Medical data and online apps can support improved health outcomes, but they might also exacerbate inequalities and erode privacy. Such concerns became even more visible amid discussions around the use of online data to combat the COVID-19 pandemic.
Session report:
This session addressed the questions of how AI can best be utilised in the area of health and how governments should respond to the human rights challenges linked to its use.
Speakers highlighted that the use of AI can bring benefits but also carries risks, so that the two need to be balanced. They raised particular challenges in the area, including data security and privacy concerns, potential inequality faced by middle- and low-income countries, bias and discrimination in the use of AI, as well as the lack of an existing binding legal framework.
The speakers agreed that there is a strong need to protect health data given its sensitive nature: genetic data, for example, can reveal very personal information about people.
Moreover, it was pointed out that the use of AI in middle- and low-income countries entails particular challenges that need to be addressed and resolved, including: the lack or low quality of data, which cannot or should not be used for developing AI applications; the use of data for commercial purposes without any human rights protection or prior informed consent (‘data colonialism’); and the risk that AI technologies are designed primarily for users in the global north rather than for lower-income countries, which makes discrimination a significant concern. Efforts to improve data should not divert resources from the daily care of patients.
Bias and discrimination were also considered a major issue that cannot easily be addressed, partly because of the lack of information about protected attributes. The absence of data on protected attributes (e.g. data on gender or ethnic origin) does not guarantee non-discriminatory use of AI, because other data may be highly correlated with those attributes and act as proxies for them (proxy data). It was also noted that AI built on data from one area of the world might not necessarily be usable in other areas, which, again, can lead to discriminatory outcomes.
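To illustrate the proxy-data point (this sketch is not from the session; all data and values are synthetic and hypothetical): a minimal Python example showing that a model which is never given a protected attribute can still recover it from a correlated proxy feature, so simply deleting the attribute from a dataset does not by itself prevent discriminatory use.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic protected attribute (0/1 for two groups) -- illustrative only.
protected = rng.integers(0, 2, size=n)

# A "proxy" feature strongly correlated with the protected attribute
# (think of postcode or occupation correlating with ethnic origin or gender).
proxy = protected + rng.normal(0, 0.3, size=n)

# An unrelated clinical feature.
clinical = rng.normal(0, 1, size=n)

# The protected attribute itself is NOT included in the model's inputs ...
X = np.column_stack([proxy, clinical])

# ... yet a simple classifier can reconstruct it from the proxy with high accuracy.
clf = LogisticRegression().fit(X, protected)
print("Protected attribute recoverable from proxy data, accuracy:", clf.score(X, protected))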
Considering the impacts of AI use, speakers further emphasized that ethics alone cannot be sufficient and that, at EU level, binding legal regulation can help address human rights challenges beyond the existing data protection and privacy framework. While existing laws in Europe already protect against some potential misuses of AI in the area (e.g. Article 22 of the GDPR on automated decision making), other issues, such as AI applications that do not process personal data or do not make decisions about individuals, might not be well covered by existing laws. Understanding the way AI systems work is considered a crucial aspect of making them compliant with human rights.
Recommendations and potential solutions:
Human rights standards already mark red lines for the use of AI. Existing data protection standards and the Oviedo Convention of the Council of Europe, which enshrine principles that apply to the use of AI in the area of health, already help address the above-mentioned challenges of AI use.
Several recommendations came up in the presentations and discussions, including:
• more effective data protection and security;
• mandatory human rights impact assessments;
• extending the standards of the GDPR to, and living up to them in, other parts of the world;
• avoidance of the use of discriminatory and ineffective AI tools;
• having ethical audits;
• considering independent enforcement and monitoring systems; as well as
• the adoption of a binding legal framework guided by the principle “the greater the impact and harm to be expected, the greater the regulation”.
Speakers highlighted that there is no easy solution with regard to bias and discrimination in AI tools, since the challenges depend on the context of use.
According to some speakers, ethical considerations should be taken into account during the development of AI tools.
The need for explainability and interpretability of AI tools was highlighted: they should not be used without a full understanding of how they work and without proof that they do not violate human rights such as dignity and non-discrimination.