a) What is, in fact, a trustworthy and responsible AI, especially with regard to data governance?
AI must be honest, trustworthy, transparent, accountable, inclusive, responsible and understandable in its design, deployment and use. AI relies on huge amounts of data, so the technical aspect must encompass the human aspect. The emphasis on ‘artificial’ is questionable; the focus should be on the human being. In other words, data must be curated and algorithms must be designed taking ethical and human rights considerations into account. Thus, we can and must develop AI for good.
In terms of geopolitical issues, how can we address those? From a governmental and/or business perspective, AI should benefit the emerging ecosystem of new technologies for economic and social prosperity. We must consider mainstreaming training and developing skills, including analytical thinking, empathy and problem solving. This education starts at an early stage (including pre-school). The benefits and the technology must be shared, so that AI truly becomes inclusive. We must not contribute to a division between data “owners” and data “slaves”. Who owns the data? It cannot be monopolized by big companies. How can AI benefit less powerful groups? AI is replacing jobs and changing the labour market and its dynamics. Digital inclusion must extend to women and marginalized communities.
b) What is the role of human rights legal instruments and ethical frameworks in ensuring trustworthy and responsible data governance and AI? Are there any lessons learnt from existing frameworks?
The question of the need for a broader global instrument to regulate AI was raised, so that AI applications are accountable, transparent, responsible, safe, easy to understand, etc. A multidisciplinary dialogue on AI must be encouraged to elaborate on these aspects. AI design should be based on a human-centered approach.
Human rights are global and legally binding. What impact are those principles having on the ground? Human rights can serve as a foundation for the development of AI regulation for the people: in real time, for real people.
Because the questions raised by AI are so novel and unanticipated vis-à-vis the UN Charter, it is valid to consider potential gaps. At the same time, we have not yet done the hard work of fully applying the existing framework, and should do so before coming up with new regulations. The UN Guiding Principles are fundamental in terms of respecting human dignity and human rights, and these principles should be applied to data gathering and analysis. The OECD Principles for the stewardship of trustworthy AI have now been adopted by both OECD and non-OECD countries.
c) How to cross the bridge between defining human rights and ethical frameworks and implementing them in AI systems and the SDGs? What is the role of different stakeholders and how can they work together to achieve the best results? (This policy question could serve as a starting point from which the desired output of the session, a roadmap on how to bridge that gap, could be built.)
It may be valuable to consider whether the UN Charter is being implemented, and whether there is room to enhance respect and compliance.
We need to identify the current mechanisms and processes for implementing these systems and legal frameworks at all levels. The development of national advisory offices may help with that, enabling countries to customize their regulation to apply human rights and ethical frameworks more effectively. That approach must include accountability and responsibility issues, for both government and industry compliance. The platform, including its principles and best practices, can then be scaled up to the multilateral arena. There are many initiatives that can feed the dialogue, for instance the HLPDC and the Initiative for AI, the OECD principles, among others.
Is there a need for an ethical code for tech developers, akin to the Hippocratic Oath?
There is a lack of understanding of how technology impacts society; we must consider enhancing technical training with ethical and human rights considerations.
Leadership, journalists and the people funding the system are key actors, and they need to lead by example and promote a multi-stakeholder approach at all levels for trustworthy and responsible AI. We need an industry-wide approach to AI governance, including content moderation. It is not sufficient for companies to regulate themselves individually; we must strive for coherence and common standards. Consider the role of a Social Media Council. We need not only a multi-stakeholder approach but also a multidisciplinary one (engineers, sociologists, etc.); social constructs are context-dependent and need to be taken into account.