IGF 2017 WS #129 Making artificial intelligence (AI) work for equity and social justice

Short Title
Making AI work for equity and social justice

Proposer's Name: Mr. Parminder Jeet Singh

Proposer's Organization: Just Net Coalition

Co-Proposer's Name: Mr. Roberto Bissio

Co-Proposer's Organization: Social Watch

Co-Organizers:

Mr. Parminder Jeet, SINGH, civil society, Just Net Coalition,

Mr. Roberto, BISSIO, civil society, Social Watch,

Mr. Hans KLEIN, academic community, Georgia Tech

Valeria BETANCOURT, civil society, Association for Progressive Communications

Mishi CHOUDHARY, Technical community, Software Freedom Law Centre of India

Additional Speakers
  1. Kate Logan, private sector: Kate is a Lead Product Strategist at ThoughtWorks and the Global Programme Manager for Intelligent Empowerment, a programme that examines how artificial intelligence, machine learning and deep learning are driving a new industrial revolution.
  2. Luca Cirigliano, trade union: Luca is the Central Secretary of the Swiss Federation of Trade Unions, the largest national trade union centre in Switzerland.
  3. Alexis Dufresne, private sector: Alexis is the CEO of Faveeo.com, a company that uses artificial intelligence to help brands accelerate their consumer outreach by automating the discovery and publishing of impactful content at scale without compromising quality.
  4. Preetam Maloor, intergovernmental organisation: Preetam is a Strategy and Policy Advisor in the Corporate Strategy Division of the International Telecommunication Union General Secretariat and an expert on international Internet-related public policy matters. (personally confirmed, awaiting organisational confirmation)
  5. Malavika Jayaram, civil society: Malavika is the inaugural Executive Director of the Digital Asia Hub, and Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. She is on the Executive Committee of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. 

Format

Session Format: Round Table - 90 Min

Agenda
The onsite moderator will open the session with a statement on the intention of the workshop organisers. He will also provide a state of play on what is already being done to ensure that artificial intelligence works for, and not against, equity and social justice, which is a frequent fear in many accounts of AI's impact.

Time assigned for the introduction is 5 minutes.

Following the moderator, the 5 subject-matter experts will be given 5 minutes each to give their views and perspectives on the issue described above.

Time assigned for the expert statements is 25 minutes.

Following the introduction of the subject and the expert statements, interested attendees of the workshop will be given the opportunity to make short and focused interventions on the precise statement of the problem. They can also make suggestions or prescribe directions to be taken to ensure that AI actually works for equity and social justice. These proceedings will follow a round-table format, and the moderator shall make sure that interventions do not stray from the two avenues detailed above.

Time assigned for interventions is 50 minutes.

The session will conclude with each panelist detailing key take-aways from the session and indicating how it will influence their work going forward. Each speaker will be given 2 minutes.

Time assigned for the conclusion is 10 minutes.

Content of the Session:
The world had not yet gotten over its shock and awe at the power of open information flows, networking and then big data when, within just the last year or two, it became clear that artificial intelligence (AI) could be the real game changer. AI strikes at the very model of knowledge making that human civilisation has developed over millennia, which is at the base of all its evolution and development. Knowledge building was based on studying empirical facts, developing hypotheses, testing them, and building theories and models for general action and prediction. Continuous micro digital mapping of human and allied activities puts so much data, or 'facts', in the hands of machines that they can correlate them and find patterns that defy the most sophisticated existing model-building practices. Correlation trumps causation and provides much greater predictive value, which can thus be used to control social and natural phenomena. Whole systems of society can work largely autonomously based on AI, which represents a wholesale disembodiment of intelligence for the first time, just as the industrial revolution disembodied mechanical power.

Those who control AI would be able to exert control across whole sectors, and across the whole of society, in ways that are unprecedented. Almost all of it is currently owned by corporations, and thus, as the trends stand, an AI-powered society may represent a new level of corporatist re-organisation of society. A society requires economic efficiency to maximise production as much as it requires political processes enforcing equity and social justice, for a just distribution of its productive outputs. AI may well solve the issue of production forever, which makes it all the more necessary to focus on the processes for equity and justice. However, with near complete control over AI by a few corporations, and little political and regulatory advance in this area, it is not clear how AI will help us move to a more equal rather than a more unequal society. With AI, where even the machine cannot spell out the basis of its actions other than justifying them by efficient results, the issues of ethics, equity and justice need to be addressed anew, starting from the conceptual level and building political processes and regulatory practices upon it.

This workshop will address these fundamental issues. How can human beings keep track of what AI systems are up to, and of the basis of their actions, which is necessary to anticipate and “control” them? Can some kind of ethical and regulatory super-instructions be built into all AI systems, as a politically enforced requirement, that override all AI actions, however efficient they may otherwise be, and even “control” their learning? How can these social and political imperatives override straightforward efficiency-driven (and corporate-interest-driven) AI systems?

These are of course complex issues and questions that stand at the intersection of the socio-political realm and technology development, which, in our view, must nevertheless begin to be addressed right away, as we stand at the cusp of a new technology wave that could redefine social organisation.

Relevance of the Session:
Almost all big digital corporations have declared that AI will be core to their strategies. We are seeing corporations begin to dominate different social sectors, such as transportation and e-commerce, increasingly employing AI. Governments such as those of the US and the UK, and the EU, have developed policy documents that begin to outline the significant issues regarding regulating AI, but these mostly only acknowledge that there are important social and political issues at hand, and do no more than nibble at the margins of the problem. There are alarming instances of AI making racially and gender-prejudiced decisions on issues as diverse as whether a prisoner gets parole, eligibility for social benefits, credit, employment and more. And of course AI is responsible for increasing displacement of labour, even at the white-collar level. Most of these issues have surfaced in the last two years or so, but the trend is such that massive changes are predicted in the next few years. The issues and questions that the proposal seeks to address are therefore both extremely important and urgent. We need a sustained process of dialogue among civil society groups, governments, businesses and the technical community in this regard.

Tag 1: Artificial Intelligence
Tag 2: Social Justice
Tag 3: Regulation

Interventions:
The listed speakers will make some opening remarks and the discussion will then be taken to the round table where everyone will be able to give their views, in two rounds, responding to two sets of questions posed by the moderator. Remote participants will be given an equal chance.

Diversity:
The list of initial speakers has gender and geopolitical diversity. Since a round-table format is being employed, we expect to hear a great diversity of views and perspectives.

Onsite Moderator: Parminder Jeet Singh
Online Moderator: Norbert Bollow
Rapporteur: Nandini Chami

Online Participation:
Online participation will be provided and facilitated, and remote participants will be given an equal chance to intervene as those physically present.

Discussion facilitation:
As mentioned, the subject will first be introduced very briefly by three speakers, and then the moderator will list two sets of questions for two rounds of open participation by round-table participants.

Conducted a Workshop in IGF before?: Yes
Link to Report: http://www.intgovforum.org/cms/wks2014/index.php/proposal/view_public/198

Session Report

Session Title: Making artificial intelligence (AI) work for equity and social justice

Date: 20 December 2017

Time: 3:00 PM to 4:30 PM

Session Organizer: Parminder Jeet Singh - Just Net Coalition, Roberto Bissio - Social Watch , Valeria Betancourt - Association for Progressive Communications, Mishi Choudhary - Software Freedom Law Centre India

Chair/Moderator: Parminder Jeet Singh - Just Net Coalition

Rapporteur/ Notetaker: Anita Gurumurthy and Nandini Chami - IT for Change

List of Speakers and their institutional affiliations: Juan Carlos Lara - Derechos Digitales, Norbert Bollow - Just Net Coalition, Mishi Choudhary - Software Freedom Law Centre India, Preetam Maloor - International Telecommunication Union, Malavika Jayaram - Digital Asia Hub

Key Issues raised (1 sentence per issue):

1. When we discuss the implications of artificial intelligence (AI) for equity and social justice, it is important to move beyond viewing AI as a set of discrete technologies to acknowledging it as a social force that is reorganising society and economy.

2. For AI to be an effective and efficient tool in addressing the root of social problems, we must avoid the trap of tech-solutionism that results in us finding AI solutions to problems that were not important for society, to begin with.

3. AI must be accountable. Transparency is a necessary first step towards this, and the encoding of inclusivity and diversity in these technologies is an equally important next step.

4. The UN System has acknowledged the new inequality that the use of AI can result in as a frontier issue requiring urgent attention, and is determined to promote global cooperation on ensuring that AI is used for human dignity and global good.

5. Developing countries are looking to AI to fast-track development without paying heed to traditional values of equity and social justice.

6. The centralisation of power in AI systems is a challenge for equity and social justice.

If there were presentations during the session, please provide a 1-paragraph summary for each presentation:

Parminder Jeet Singh kick-started the workshop by stating that the interest in AI stems from its reorganization of social relationships, governance functions, and the economy. To distinguish the subject matter of the current discussion from other discussions on the different technological components of AI, a more accurate nomenclature would be digital/data-based intelligence and its re-constitutive impacts on society. Having clarified the definition and conceptual understanding of AI driving the workshop, he went on to stress that social justice is not a mere add-on. It must be viewed as an integral part of the evolution of AI itself.

Juan Carlos Lara pointed out that in the Latin American context, AI is often a marketing buzzword that is used to push technologies whose design does not have elements of data-based intelligence. Another problem, when we are dealing with those kinds of tools that can actually process data and make decisions on the basis of that data, is that governments and companies trying to adapt these tools for contextual problem-solving have not concerned themselves with the peculiarities of the local. Developing nations are not only using technology that is not adapted to local contexts. They are also using it to provide solutions to social problems that society may not find important. Social issues cannot have technological solutions alone. But though we must move away from tech-solutionism, we need to acknowledge AI as an effective and efficient tool in tracing the root of social problems.

Norbert Bollow’s presentation focused on the technical definition of AI. He started by contrasting AI with the natural intelligence that humans and animals possess, which works on the inputs provided by sense organs. A simple definition of AI acknowledges it as a technology that can identify patterns and generate human-readable output by processing data. It has two main components: the algorithmic component devised by the programmer’s reasoning, and neural networks that recognise patterns, even if these patterns are not expressly articulated by the programmer. In the second component, the output or goal is identified by the programmer and the AI figures out the means to that end. Though such independent pattern recognition is very powerful, its success is predicated upon the availability of large datasets: one needs lots of data to train computers to recognise patterns, and to recognise which patterns matter for generating the desired outputs. A key limitation of AI is that you need a precise definition of what you want to optimize. Since social good cannot be reduced to a number, AI systems cannot generate it as an output.
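As a purely illustrative aside (not presented in the session), the short Python sketch below captures the point made here: a learning system discovers patterns from data on its own, but only with respect to an objective the programmer has defined as a single number. All names and data in the sketch are hypothetical; anything not reducible to that number, such as fairness or social good, is simply invisible to the training loop.

# Minimal, purely illustrative sketch (not from the session): a tiny logistic
# regression trained by gradient descent. It "learns" patterns from synthetic
# data, but only because the programmer has defined one number to optimise
# (the loss). Anything not expressible as that number is invisible to it.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training data": 1000 examples with 3 features and a binary label.
X = rng.normal(size=(1000, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(float)

w = np.zeros(3)   # model parameters, learned from the data
lr = 0.1          # learning rate

def predict(X, w):
    # Probability that the label is 1 (a sigmoid over a linear score).
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def loss(X, y, w):
    # Cross-entropy: the precise, numeric objective the system optimises.
    p = predict(X, w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

for step in range(200):
    p = predict(X, w)
    grad = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. the weights
    w -= lr * grad                  # the only thing the loop "cares about"

print("learned weights:", w)
print("final loss:", loss(X, y, w))

The sketch only optimises the cross-entropy it was given; whether the resulting decisions are fair or socially desirable is never part of the computation.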

Mishi Choudhary’s presentation reflected on the implications of applying AI in social settings. She pointed out that we already have examples of how big data and machine learning are impacting us socially. Human society is going to permanently shift because of the power we are investing in robots and in automated decision-making. AI has become the buzzword not just for digital corporations but also for companies functioning in traditional sectors such as agriculture and food. Researchers have understood that it is not the algorithm but the data sets that produce optimal results; the data sets we create every day are fuelling the functioning of these systems. While AI is helpful, it must meet some demands for accountability, the first of which is transparency. Closed data sets are fed into closed proprietary software, leading to an absence of trust. Currently, we see companies self-regulating AI; this may be a precursor to other developments that result in codified law. Also, developing countries place primacy on innovation, hoping that it can fast-track progress. But without emphasising traditional values of social justice, equity and democracy, which have taken us generations to build, we may not go far. “If skin color, if culture, if gender, if various other things which we think is what makes the human race are not even being reflected in the data sets, you are really far away from what the machine will actually learn and then spit it out.”

Preetam Maloor explained that the UN system’s approach to AI is that it has the potential to improve life around the world in fundamental ways and that it could have a major role in achieving the Sustainable Development Goals. Admittedly, there are also many challenges: ethical (bias and discrimination), safety and security, technical (algorithmic transparency), data-related (privacy) and socio-economic (the need to ensure that developing countries and their vulnerable, unconnected populations do not become marginalized in the move towards data-based decision-making). AI is leading to the emergence of a new, sophisticated digital divide, and the UN has taken heed of the matter. Its approach to AI is threefold. First, it recognizes that the expertise needed to understand the technology may lie outside the UN itself, and is thus seeking to establish a platform for multi-stakeholder engagement and to facilitate inter-agency coordination. Second, a research and review system is needed to study the impact of AI on current UN frameworks and to develop evidence-based research on social impacts. Third, capacity building is required to ensure equitable distribution of the benefits of AI.

Malavika Jayaram pointed to the fact that despite the various kinds of good AI can do, the popular discourse has remained overwhelmingly critical of it. Such critical perspectives are very useful, as they force us to pay attention to questions of what can go wrong. For example, the robot Sophia being granted citizenship triggered a conversation about demographics and populations who are denied their human rights. Another conversation that technological diffusion is triggering is about who should bear the burden of adapting to an AI-driven world; oftentimes, the emotional and physical labour of adapting to these changes is visited on the people least able to bear it. Another pertinent issue is determining what is algorithmically solvable and how software code can account for multiple definitions of the same idea. For example, students in a Princeton class asked to define fairness came up with 21 different definitions. How can this be translated effectively into code? All these issues have implications for equity. Malavika concluded with an appeal to recognise AI as our present, part of the current ecosystem that we inhabit, and not as a futuristic development.

Please describe the discussions that took place during the workshop session (3 paragraphs):

Comments from the floor:

- We need to understand AI systems better. Even if the data is unbiased, we cannot be certain that a system to which no rules are specified will produce correct outputs.

- While AI may bring efficiency gains in production, with respect to distribution equity, not efficiency, should be the guiding factor. In some cases we may decide that AI should not be used at all.

- Just as there are technical solutions to social problems, there are social solutions to technical problems. Social problems and technical solutions are not to be seen as separate from one another. As the panel suggested, if AI is a social construct, then issues of agency and inclusion are integral parts of the system.

Observations from the panel:

- Parminder Jeet Singh underlined that just because a social good indicator is not measurable, it should not be seen as less important than one which can be measured. This is important to acknowledge in the push towards data-based decision-making.

- Article 25(a) of the ICCPR states that every citizen shall have the right and opportunity to take part in the conduct of public affairs. Defining what an AI algorithm should do, Norbert Bollow pointed out, is becoming a matter of public affairs that citizens must demand to be part of.

- Mishi Choudhary elucidated that citizens place their trust in technology because it is seen as predictable and hence capable of being trusted, whereas human decision-making is seen as corruptible. Citizens are now starting to become aware of, and concerned about, the biases in AI and the immense power that digital corporations wield in shaping social interaction. Parminder Jeet Singh added that apart from biases, corporate interests are written into the code, which is problematic.

- Malavika Jayaram questioned whether we want technology to reflect a reality that is riddled with problems such as bias and discrimination, or whether we want it to better that reality in some way. It is important to realise that in that endeavour, just as in any endeavour to bring about social justice, there is bound to be some backlash.

Please describe any participant suggestions regarding the way forward/ potential next steps /key takeaways (3 paragraphs):

- Programmers should build AI that records the sequence of steps towards an outcome in human-readable form, so that AI systems can be audited and socially unjust steps weeded out (see the illustrative sketch after this list).

- Algorithmic transparency is a necessary first step for accountable AI.

- AI must be trained on inclusive datasets to prevent it from reproducing discriminatory outcomes.

- Equity and social justice should not be an afterthought to the new digital systems around artificial intelligence that are being set up. They must be integral to the design of such systems and social values have to be encoded into AI at the outset.
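As a purely illustrative sketch of the first takeaway above (the file name, function and values are hypothetical and were not part of the session), an AI system could append a human-readable record of every automated decision to an audit log, so that reviewers can later trace which inputs, model version and threshold produced a given outcome:

# Illustrative sketch only: one way an AI system could record, in human-readable
# form, the basis of each automated decision so that auditors can later review
# decisions for socially unjust patterns. All names and values are hypothetical.
import json
import datetime

AUDIT_LOG = "decision_audit.log"   # hypothetical log file, one JSON record per line

def log_decision(applicant_id, features, score, threshold, outcome, model_version):
    # Append a human-readable record of a single automated decision.
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "applicant_id": applicant_id,
        "inputs_used": features,          # which attributes the model actually saw
        "model_version": model_version,   # ties each outcome to a specific model
        "score": score,
        "threshold": threshold,
        "outcome": outcome,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with made-up values for a hypothetical credit decision:
log_decision(
    applicant_id="A-1042",
    features={"income": 32000, "employment_years": 4},
    score=0.41,
    threshold=0.5,
    outcome="declined",
    model_version="credit-model-2017-12",
)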

Gender Reporting:

Estimate the overall number of the participants present at the session: 65

Estimate the overall number of women present at the session: 25

To what extent did the session discuss gender equality and/or women’s empowerment? If the session addressed issues related to gender equality and/or women’s empowerment, please provide a brief summary of the discussion.

The workshop spoke to the question of structural exclusion and injustice. Speakers reflected on the propensity of AI to replicate existing socio-structural hierarchies. There was an explicit reference to how gender and racial discrimination may be amplified through AI if social values of equity and justice are not encoded into AI. Further, the discussions reflected on intersecting axes of exclusion and marginalisation and the need to address them head-on, in the quest for leveraging AI for social good.