Applying human rights and ethics in responsible data governance and AI

This session will bring ethical and human rights perspectives to AI, a technology that holds many promises and raises many concerns. Like any other technology, it all depends on how we use it. AI can contribute to addressing some of the world's most pressing problems, but it can also lead to more inequality. AI can make people's lives easier, but it can also generate discrimination and bias. So how do we develop and use AI in a human-centric and trustworthy manner? How do we make sure that the data used by AI is reliable, accurate, and complete enough so as not to generate discrimination? How do we avoid privacy and data protection breaches when accessing and processing the large amounts of data that are at the core of AI? And how do we make sure that there is transparency and accountability in how algorithms function and AI is used?

At IGF 2018, we started a discussion on the role of ethical frameworks and human rights legal instruments in addressing these issues. One conclusion was that ethics differs across cultures and geographies, and, as such, we might be more efficient and effective in building trustworthy AI if we focus on applying existing human rights principles. Meanwhile, several new ethical frameworks have been developed by different entities (OECD, ITU, EU Commission, IEEE), and renewed calls have been made to ensure that human rights are always preserved in AI contexts, including in design, use and deployment. So where are we now, and what next? We have multiple human rights and ethical frameworks, but is this enough? How do we apply them consistently in governing data, developing algorithms and actually using AI systems? Who bears this responsibility? And are there (or should there be) mechanisms for enforcement and monitoring in place?

More specifically, the session will address three policy questions:

a) What is, in fact, a trustworthy and responsible AI, especially with regard to data governance?

b) What is the role of human rights legal instruments and ethical frameworks in ensuring trustworthy and responsible data governance and AI? Are there any lessons learnt from existing frameworks?

c) How to bridge the gap between defining human rights and ethical frameworks and implementing them in AI systems and the SDGs? What is the role of different stakeholders and how can they work together to achieve the best results?

Messages from last year's Main Session on Emerging Technologies

As a basis for the discussion, we will use the messages from last year's Main Session on Emerging Technologies and the main principles of current ethical and human rights oriented AI initiatives.

  • Code is a reflection of the people who have coded and influenced it. Ethics changes over time and across cultures, so is there such a thing as global ethics? Nevertheless, ethics might be needed to guide a responsible approach to AI when regulation and laws cannot keep up with the speed of technological development.

  • When we talk about ethics, we are not talking about the regulations in different countries, but about the common values shared in a society: something that precedes law, runs parallel to law, and is additional to law. But if we build technologies with a global approach, developers may need a single set of guidelines.

  • Professional ethics is very much needed: we have professional ethics for law, for psychology, for medicine, but we do not have professional ethics in marketing or in engineering, and this is something we need to start thinking about more and more.

  • Fairness is the first step in building an ethical approach. Most people agree that algorithms should be fair, meaning they should treat different groups of people equally (they should not discriminate; see the sketch after this list). We need to be more people-oriented in the development process itself.

  • Interpretability of algorithms is needed. Is there human responsibility and accountability for any decision taken by AI?

  • Instead of using ethics as something we need to enshrine in technologies, would it be better to use the principles defined in the Universal Declaration of Human Rights, which is less dependent on national, cultural and other differences and has a broader consensus among countries worldwide?
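
As an illustration of the fairness point above, here is a minimal sketch of one common fairness check, demographic parity: whether a model's positive decisions are distributed equally across groups. The data, group labels and decisions are hypothetical placeholders, not a prescribed method.

    # Minimal sketch of a demographic parity check for a binary classifier.
    # All names and numbers are hypothetical, for illustration only.
    from collections import defaultdict

    def positive_rates(predictions, groups):
        # Share of positive decisions (1 = approved) per demographic group.
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    # Hypothetical model decisions and the group each subject belongs to.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    rates = positive_rates(preds, groups)
    gap = max(rates.values()) - min(rates.values())
    print(rates, "parity gap:", gap)  # a large gap signals possible discrimination

Demographic parity is only one of several competing definitions of algorithmic fairness (equalized odds, calibration, and others), and in general they cannot all be satisfied at once; choosing among them is an ethical decision, not merely a technical one.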

Main clusters of AI principles

Drawing on current AI policies and initiatives, we have clustered the main principles into the following categories:

Principles for a human-centred AI for social good

  • Serving people, society and the planet

  • Supporting social and economic growth that benefits society

  • Trustworthy, safe and secure AI

  • Fair, unbiased and nondiscriminatory AI

  • Respect for human dignity and choice

  • Respect for ethical values and principles

  • Respect for fundamental freedoms and rights (privacy, freedom of expression, etc.)

  • Lawfulness 

Principles for responsible AI

  • AI as a solution to existing problems, not a solution in search of a problem

  • Transparency in how AI is developed and works

  • Algorithmic explainability (see the sketch after this list)

  • Accountability and responsibility in the development and use of AI
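
To make the explainability principle above more concrete, here is a minimal sketch of permutation importance, one simple way to probe which inputs drive a black-box model's decisions: shuffle one feature at a time and observe how much performance drops. The toy model, features and data are hypothetical placeholders, intended only to illustrate the idea.

    # Minimal sketch: permutation importance as a basic explainability probe.
    # The "model" and data are hypothetical stand-ins, for illustration only.
    import random

    def model(row):
        # Toy black-box scorer: income matters, shoe size should not.
        income, shoe_size = row
        return 1 if income > 50 else 0

    def accuracy(rows, labels):
        return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

    rows = [(30, 41), (60, 38), (80, 44), (45, 39), (70, 42), (20, 43)]
    labels = [0, 1, 1, 0, 1, 0]
    baseline = accuracy(rows, labels)

    for i, name in enumerate(["income", "shoe_size"]):
        shuffled = [r[i] for r in rows]
        random.shuffle(shuffled)
        # Replace feature i with its shuffled values and re-score the model.
        permuted = [tuple(s if j == i else v for j, v in enumerate(r))
                    for r, s in zip(rows, shuffled)]
        print(name, "importance ~", baseline - accuracy(permuted, labels))

A larger accuracy drop indicates a more influential feature. In practice, explainability also covers documentation, audit trails and the right to an explanation; a numerical probe like this is only one ingredient.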

Mechanisms

  • Ensuring humans always remain in control of AI

  • Embedding privacy, safety, and security in the development stage of AI

  • Supporting the development of AI that benefits society, by making relevant data and infrastructures available 

  • Developing policies, guidelines, standards, principles and best practices regarding the development and use of AI for social good

  • Advancing the public’s understanding of AI and its implications

  • Preparing the workforce for an increasingly AI-driven society

  • Facilitating multidisciplinary dialogues on AI, its implications, and future

  • Supporting synergies and cooperation between existing initiatives focused on human-centred AI

1. Key Policy Questions and Expectations

Policy questions that will be discussed during the session are:

I: What is, in fact, a trustworthy and responsible AI, especially with regard to data governance?

II: What is the role of human rights legal instruments and ethical frameworks in ensuring trustworthy and responsible data governance and AI? Are there any lessons learnt from existing frameworks?

III: How to bridge the gap between defining human rights and ethical frameworks and implementing them in AI systems and the SDGs? What is the role of different stakeholders and how can they work together to achieve the best results?

Possible output of the session:

An outline or roadmap on how to move from designing human rights and ethical frameworks for data governance and AI to actually mainstreaming and implementing them.

2. Summary of Issues Discussed

- Human-centred approach to technology development

- Main principles of current AI initiatives 

- Responsibilities and accountability of all stakeholders in addressing and taking into consideration ethical and human rights principles in dealing with AI

- Multidisciplinary approach

- Professional code of ethics and potential standards

- Who should fund the role of civil society organisations and journalists in training and research processes?

3. Policy Recommendations or Suggestions for the Way Forward

- A call to enhance full compliance with the UN Charter and the UN Guiding Principles, in order to assess the potential need for a further normative framework.

- Current human rights normative and legal frameworks should be the basis for the further development of regulatory mechanisms.

- Consider the development of national frameworks that mirror global legally binding instruments and other standards.

- Provide incentives for industry to exercise due diligence in the development and deployment of AI

- AI governance should foster sustainable and inclusive development

4. Other Initiatives Addressing the Session Issues

- Partnership on AI

- IEEE standards

- OECD principles 

- European Council and European Commission principles

- European Economic and Social Committee (EESC)

- UN High-level Panel on Digital Cooperation (HLPDC)

- UN ecosystem initiatives

- Labour organizations

- Other

5. Making Progress for Tackled Issues

The dialogue could be enhanced by bringing a broader community to the table, breaking silos, and fostering collaborative efforts. Advancements in AI should benefit all: data producers, users, governments, and vulnerable communities (women, children, youth, persons with disabilities, minorities, LGBTI persons, etc.). AI developers, investors and consumers must respect and comply with human rights and ethical considerations and principles. AI development must be responsible, trustworthy, transparent, accountable and understandable.

6. Estimated Participation

170 on-site participants and 15 remote participants

7. Reflection to Gender Issues

Biased data is a problem. Among CEOs in North America, 80% are men and 20% are women; 84% of developers in Silicon Valley are white males. This creates a tendency to feed biases into algorithms. It is important to consider whether we want to feed data as it is into AI development, or to “correct/adjust” the data towards a more balanced and inclusive perspective (ideally 50-50%), as illustrated in the sketch below.
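
A minimal sketch of the “correct/adjust” idea above: oversampling the under-represented group so that each group contributes equally (50-50) to a training set. The records and group labels are hypothetical, and whether and how to rebalance data is itself a policy choice, not a purely technical one.

    # Minimal sketch: oversampling an under-represented group towards 50-50.
    # Records and group labels are hypothetical, for illustration only.
    import random

    records = [{"group": "men"}] * 8 + [{"group": "women"}] * 2
    by_group = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r)

    # Resample each smaller group with replacement up to the largest group's size.
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        balanced.extend(random.choices(rows, k=target - len(rows)))

    counts = {g: sum(r["group"] == g for r in balanced) for g in by_group}
    print(counts)  # {'men': 8, 'women': 8} -> a 50-50 training mix

Rebalancing can also be done by reweighting rather than duplication, and over-correcting real-world data has trade-offs of its own.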

8. Session Outputs

a) What is, in fact, a trustworthy and responsible AI, especially with regard to data governance?

AI in design, deployment and use must be honest, trustworthy, transparent, accountable, inclusive, responsible and understandable. AI relies on huge amounts of data, so the technical aspect must encompass the human aspect. The emphasis on 'artificial' is questionable; the focus should be on the human being. In other words, data must be curated and algorithms must be designed taking ethical and human rights considerations into account. Thus, we can and must develop AI for good.

In terms of geopolitical issues, how can we address those? From a governmental and/or business perspective, AI should benefit the new ecosystem of new technologies for economic and social prosperity. We must consider mainstreaming training and developing skills, including analytical thinking, empathy and problem solving. This education starts at an early stage (including pre-school). The benefits and the technology must be shared, so that AI really becomes inclusive. We should not be contributing to a division between data “owners” and data “slaves”. Who owns the data, and how do we ensure it is not monopolized by big companies? How can AI benefit less powerful groups? AI is replacing jobs and changing the labour market and its dynamics. Digital inclusion must extend to women and marginalized communities.

b) What is the role of human rights legal instruments and ethical frameworks in ensuring trustworthy and responsible data governance and AI? Are there any lessons learnt from existing frameworks?

The question was raised of whether a broader global instrument to regulate AI is needed, so that AI applications are accountable, transparent, responsible, safe and easy to understand. A multidisciplinary dialogue on AI must be encouraged to elaborate on these aspects. AI design should be based on a human-centred approach.

Human rights are global and legally binding. What impact are those principles having on the ground? Human rights can thus serve as a foundation for the development of AI regulation, in real time, for real people.

Because the questions raised are so novel and unanticipated vis-à-vis the UN Charter, it is valid to consider the potential gaps. At the same time, we have not yet done the hard work of fully applying the existing framework, and should do so before we come up with new regulations. The UN Guiding Principles are fundamental in terms of respecting human dignity and human rights, and these principles should be applied to data gathering and analysis. The OECD principles for the stewardship of trustworthy AI have now been adopted by OECD and non-OECD countries alike.

c) How to bridge the gap between defining human rights and ethical frameworks and implementing them in AI systems and the SDGs? What is the role of different stakeholders and how can they work together to achieve the best results? (This policy question could serve as a starting point from which the desired output of the session, a roadmap on how to bridge that gap, could be built.)

It may be valuable to consider whether the UN Charter is being implemented, and whether there is room for enhanced respect and compliance.

We need to identify the current mechanisms and processes for implementing these systems and legal frameworks at all levels. The development of national advisory offices may help with this, allowing countries to customize their regulation so as to apply human rights and ethical frameworks more effectively. We also need to include accountability and responsibility issues in that approach, to ensure government and industry compliance, and then scale that platform, including principles and best practices, up to the multilateral arena. There are many initiatives out there that can feed the dialogue, for instance the HLPDC and the Initiative for AI, or the OECD principles, among others.

Is there a need for an ethical code for tech developers, akin to the Hippocratic oath?

There is a lack of understanding of how technology impacts society; we must consider enhancing technical training with ethical and human rights considerations.

Leadership, journalists and the people funding the system are key actors, and they need to lead by example and promote a multi-stakeholder approach at all levels for a trustworthy and responsible AI. We need an industry-wide approach to AI governance, including content moderation. It is not sufficient for companies to regulate themselves individually; we need to strive for coherence and common standards. Consider the role of a Social Media Council. We need a multi-stakeholder approach but also a multidisciplinary one (engineers, sociologists, etc.); social constructs are context-dependent, and this needs to be taken into account.