Session Title: Making artificial intelligence (AI) work for equity and social justice
Date: 20 December 2017
Time: 3:00 PM to 4:30 PM
Session Organizer: Parminder Jeet Singh - Just Net Coalition, Roberto Bissio - Social Watch, Valeria Betancourt - Association for Progressive Communications, Mishi Choudhary - Software Freedom Law Centre India
Chair/Moderator: Parminder Jeet Singh - Just Net Coalition
Rapporteur/Notetaker: Anita Gurumurthy and Nandini Chami - IT for Change
List of Speakers and their institutional affiliations: Juan Carlos Lara - Derechos Digitales, Norbert Bollow - Just Net Coalition, Mishi Choudhary - Software Freedom Law Centre India, Preetam Maloor - International Telecommunication Union, Malavika Jayaram - Digital Asia Hub
Key Issues raised (1 sentence per issue):
1. When we discuss the implications of artificial intelligence (AI) for equity and social justice, it is important to move beyond viewing AI as a set of discrete technologies and to acknowledge it as a social force that is reorganising society and the economy.
2. For AI to be an effective and efficient tool in addressing the roots of social problems, we must avoid the trap of tech-solutionism, which leads us to find AI solutions to problems that were not important for society to begin with.
3. AI must be accountable. Transparency is a necessary first step towards this, and the encoding of inclusivity and diversity in these technologies is an equally important next step.
4. The UN system has acknowledged the new inequality that the use of AI can produce as a frontier issue requiring urgent attention, and is determined to promote global cooperation to ensure that AI is used for human dignity and the global good.
5. Developing countries are looking to AI to fast-track development without paying heed to traditional values of equity and social justice.
6. The centralisation of power in AI systems is a challenge for equity and social justice.
If there were presentations during the session, please provide a 1-paragraph summary for each presentation:
Parminder Jeet Singh kick-started the workshop by stating that the interest in AI stems from its reorganisation of social relationships, governance functions, and the economy. To distinguish the subject matter of the current discussion from other discussions on the different technological components of AI, he suggested that a more accurate nomenclature would be digital/data-based intelligence and its re-constitutive impacts on society. Having established the conceptual understanding of AI driving the workshop, he stressed that social justice is not a mere add-on; it must be viewed as an integral part of the evolution of AI itself.
Juan Carlos Lara pointed out that in the Latin American context, AI is often a marketing buzzword used to push technologies whose design has no elements of data-based intelligence. Another problem, when dealing with tools that can actually process data and make decisions on the basis of that data, is that the governments and companies trying to adapt these tools for contextual problem-solving have not concerned themselves with the peculiarities of local contexts. Developing nations are not only using technology that is not adapted to local contexts; they are also using it to provide solutions to social problems that society may not find important. Social issues cannot have technological solutions alone. But even as we move away from tech-solutionism, we need to acknowledge AI as an effective and efficient tool in tracing the roots of social problems.
Norbert Bollow’s presentation focused on the technical definition of AI. He started by contrasting AI with the natural intelligence that humans and animals possess, which works on the inputs provided by sense organs. A simple definition of AI acknowledges it as a technology that can identify patterns and generate human-readable output by processing data. It has two main components: the algorithmic component devised by the programmer’s reasoning, and neural networks that recognise patterns even when these patterns are not expressly articulated by the programmer. In the second component, the output/goal is identified by the programmer and the AI figures out the means to that end. Though such independent pattern recognition is very powerful, its success is predicated upon the availability of large datasets: one needs a great deal of data to train computers to recognise patterns, and to recognise which patterns are important for generating the desired outputs. A key limitation of AI is that you need a precise definition of what you want to optimise. Since social good cannot be reduced to a number, AI systems cannot generate it as an output.
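To make this limitation concrete, the minimal sketch below (purely illustrative, not presented at the session) shows a toy learning loop in Python: the system improves only against a single, precisely defined number it is told to minimise, here the mean squared error on hypothetical training data. Goals such as "social good", which cannot be reduced to such a number, fall outside what a loop of this kind can optimise.

```python
# Illustrative sketch only: a toy learning loop that can optimise one thing --
# a single, precisely defined number (here, mean squared error on made-up data).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: many examples are needed before patterns emerge.
X = rng.normal(size=(1000, 3))            # 1000 examples, 3 input features
true_pattern = np.array([2.0, -1.0, 0.5])  # the pattern hidden in the data
y = X @ true_pattern + rng.normal(scale=0.1, size=1000)

weights = np.zeros(3)
learning_rate = 0.01

for step in range(500):
    predictions = X @ weights
    error = predictions - y
    loss = np.mean(error ** 2)             # the precise quantity being optimised
    gradient = 2 * X.T @ error / len(y)    # how to adjust weights to reduce loss
    weights -= learning_rate * gradient

print("learned weights:", weights)          # approaches the underlying pattern
print("final loss:", loss)
```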
Mishi Choudhary’s presentation reflected on the implications of applying AI to social settings. She pointed out that we already have examples of how big data and machine learning are impacting us socially. Human society is going to shift permanently because of the power we are investing in robots and in automated decision-making. AI has become the buzzword not just for digital corporations but also for companies in traditional sectors such as agriculture and food. Researchers have understood that it is not the algorithm but the data sets that produce optimal results; the data sets that we create every day are fuelling the functioning of these systems. While AI is helpful, it must meet certain demands for accountability, the first of which is transparency. Closed data sets are fed into closed proprietary software, leading to an absence of trust. Currently, we see companies self-regulating AI; this may be a precursor to other developments that result in codified law. Developing countries also place primacy on innovation, hoping that it can fast-track progress, but without emphasising the traditional values of social justice, equity and democracy, which have taken generations to build, we may not go far. “If skin color, if culture, if gender, if various other things which we think is what makes the human race are not even being reflected in the data sets, you are really far away from what the machine will actually learn and then spit it out.”
Preetam Maloor explained that the UN system’s approach to AI is that it has the potential to improve life around the world in fundamental ways and could play a major role in achieving the Sustainable Development Goals. Admittedly, there are also many challenges: ethical (bias and discrimination), safety and security, technical (algorithmic transparency), data challenges (privacy) and socio-economic challenges (the need to ensure that developing countries and their vulnerable, unconnected populations do not become marginalised in the move towards data-based decision-making). AI is leading to the emergence of a new, sophisticated digital divide, and the UN has taken heed of the matter. Its approach to AI is threefold. First, it recognises that the expertise needed to understand the technology may lie outside the UN itself, and is therefore seeking to establish a multistakeholder platform and facilitate inter-agency coordination. Second, a research and review system is needed to study the impact of AI on current UN frameworks and to develop evidence-based research on its social impacts. Third, capacity building is required to ensure equitable distribution of the benefits of AI.
Malavika Jayaram pointed to the fact that despite the various kinds of good AI can do, popular discourse has remained overwhelmingly critical of it. Such critical perspectives are very useful, as they force us to pay attention to what can go wrong. For example, the robot Sophia being granted citizenship triggered a conversation about demographics/populations who are denied their human rights. Another conversation that technological diffusion is triggering is about who should bear the burden of adapting to an AI-driven world; oftentimes, the emotional and physical labour of adapting to these changes is visited on the people who are least able to bear it. A further pertinent issue is determining what is algorithmically solvable and how software code can account for multiple definitions of the same idea. For example, students in a Princeton class asked to define fairness came up with 21 different definitions; how can this be translated effectively into code? All these issues have implications for equity. Malavika concluded with an appeal to recognise AI as our present, part of the ecosystem we currently inhabit, and not as a futuristic development.
Please describe the discussions that took place during the workshop session (3 paragraphs):
Comments from the floor:
- We need to understand AI systems better. Even if the data is unbiased, we cannot be certain that a system to which no rules are specified will produce correct outputs.
- While AI may bring efficiency gains in production, with respect to distribution it is equity, not efficiency, that should be the guiding factor. In such cases, we may decide that AI should not be used at all.
- Just as there are technical solutions to social problems, there are social solutions to technical problems. Social problems and technical solutions are not to be seen as separate from one another. As the panel suggested, if AI is a social construct, then issues of agency and inclusion are integral parts of the system.
Observations from the panel:
- Parminder Jeet Singh underlined that just because a social good indicator is not measurable, it should not be seen as less important than one which can be measured. This is important to acknowledge in the push towards data-based decision-making.
- Article 25(a) of the ICCPR states that every citizen shall have the right and opportunity to take part in the conduct of public affairs. Defining what an AI algorithm should do, Norbert Bollow pointed out, is becoming a matter of public affairs that citizens must demand to be a part of.
- Mishi Choudhary elucidated that citizens place their trust in technology because it is seen as predictable and hence trustworthy, whereas human decision-making is corruptible. Citizens are now starting to become aware of, and concerned about, the biases in AI and the immense power that digital corporations wield in shaping social interaction. Parminder Jeet Singh added that apart from biases, corporate interests are written into the code, which is problematic.
- Malavika Jayaram questioned whether we want technology to reflect a reality riddled with problems such as bias and discrimination, or whether we want it to better reality in some way. It is important to realise that in that endeavour, just as in any endeavour to bring about social justice, there is bound to be some backlash.
Please describe any participant suggestions regarding the way forward/potential next steps/key takeaways (3 paragraphs):
- Programmers should build AI that records the sequence of steps towards an outcome in human-readable form, so that AI systems can be audited to weed out socially unjust steps (a minimal illustrative sketch follows this list).
- Algorithmic transparency is a necessary first step for accountable AI.
- AI must be trained on inclusive datasets to prevent it from reproducing discriminatory outcomes.
- Equity and social justice should not be an afterthought to the new digital systems around artificial intelligence that are being set up. They must be integral to the design of such systems and social values have to be encoded into AI at the outset.
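As a purely hypothetical illustration of the first takeaway above (the rules, names and thresholds below are invented and not drawn from the session), a decision routine can append a plain-language note for every step it applies, so that the full path to an outcome can later be reviewed for socially unjust criteria:

```python
# Hypothetical sketch of a human-readable audit trail for an automated decision.
# All rules and thresholds here are invented for illustration only.

def assess_application(income, employment_years):
    """Return (outcome, trail), where trail records every step in plain language."""
    trail = [f"Inputs received: income={income}, employment_years={employment_years}"]

    if income >= 30000:
        trail.append("Rule applied: income >= 30000 -> eligible on income")
        income_ok = True
    else:
        trail.append("Rule applied: income < 30000 -> not eligible on income")
        income_ok = False

    if employment_years >= 2:
        trail.append("Rule applied: employment_years >= 2 -> stable employment history")
        employment_ok = True
    else:
        trail.append("Rule applied: employment_years < 2 -> insufficient employment history")
        employment_ok = False

    outcome = "approved" if (income_ok and employment_ok) else "declined"
    trail.append(f"Final outcome: {outcome}")
    return outcome, trail

outcome, trail = assess_application(income=42000, employment_years=1.5)
print(outcome)
print("\n".join(trail))  # the auditable, human-readable record of the decision
```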
Gender Reporting:
Estimate the overall number of the participants present at the session: 65
Estimate the overall number of women present at the session: 25
To what extent did the session discuss gender equality and/or women’s empowerment? If the session addressed issues related to gender equality and/or women’s empowerment, please provide a brief summary of the discussion.
The workshop spoke to the question of structural exclusion and injustice. Speakers reflected on the propensity of AI to replicate existing socio-structural hierarchies. There was an explicit reference to how gender and racial discrimination may be amplified through AI if social values of equity and justice are not encoded into AI. Further, the discussions reflected on intersecting axes of exclusion and marginalisation and the need to address them head-on, in the quest for leveraging AI for social good.