IGF 2021 WS #59
Colors of AI: The Blind Spots It Doesn't See or Hear

Organizer 1: Gabriel Karsan, INTERNET SOCIETY [email protected]
Organizer 2: Innocent Adriko, Internet Society - Youth IGF Ambassadors
Organizer 3: Lily Edinam Botsyoe, Ghyrate Ghana
Organizer 4: Nancy Njoki Wachira, INTERNET SOCIETY [email protected]
Organizer 5: Wathagi Ndungu, EV Digital

Speaker 1: Héwing Gérald Dorvelus, Technical Community, Latin American and Caribbean Group (GRULAC)
Speaker 2: Pablo Nunes, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 3: Wathagi Ndungu, Private Sector, Western European and Others Group (WEOG)
Speaker 4: Nancy Njoki Wachira, Technical Community, African Group
Speaker 5: Ihita Gangavarapu, Civil Society, Asia-Pacific Group


Innocent Adriko, Civil Society, African Group

Online Moderator

Gabriel Karsan, Civil Society, African Group


Lily Edinam Botsyoe, Technical Community, African Group


Round Table - U-shape - 60 Min

Policy Question(s)

Economic and social inclusion and sustainable development: What is the relationship between digital policy and development and the established international frameworks for social and economic inclusion set out in the Sustainable Development Goals and the Universal Declaration of Human Rights, and in treaties such as the International Covenant on Economic, Social and Cultural Rights, the Conventions on the Elimination of Discrimination against Women, on the Rights of the Child, and on the Rights of Persons with Disabilities? How do policy makers and other stakeholders effectively connect these global instruments and interpretations to national contexts?
Promoting equitable development and preventing harm: How can we make use of digital technologies to promote more equitable and peaceful societies that are inclusive, resilient and sustainable? How can we make sure that digital technologies are not developed and used for harmful purposes? What values and norms should guide the development and use of technologies to enable this?

Issues to be addressed

- How do we ensure that gender, social, and cultural diversity are incorporated into AI design and development?
- Encourage governments to invest in the development and adoption of AI to secure its many benefits for the economy and society.
- The need to consider social inclusion and diversity
- The need to enable beneficial AI research and development


- Exclusion of marginalized groups
- Support for innovation
- Digital inclusion
- Unequal AI development skills



Targets: The session links directly to SDG 10 (reducing inequalities within and among countries), SDG 9 (building resilient infrastructure, promoting inclusive and sustainable industrialization, and fostering innovation), and SDG 4 (quality education as a means of breaking down inequality and growing inclusion through equal AI development skills).



Our session taps the experiences and aptitudes of emerging youth entrenched in the AI sector. Despite the many maxims, theories, and practices already available for tackling AI and its blind spots, cracks remain: the technology has not fully accommodated all of its beneficiaries, contributing to critical junctures that sustain its vicious cycle. The coalition leading this session upholds inclusion and diversity and includes people who have experienced the other side of the AI coin. These schools of thought will be solution-oriented, contributing to better practices in the policy, governance, and functionality of AI so that its benefits can be distributed equally to all, without leaving the marginalized behind.

Description of the content

Color is an invention of the mind: there is only light, yet we perceive it through the lens of our perspective and evolution. The same applies to the conscious and cultural prejudices and biases that shape how we develop and interact with AI. Algorithmic biases are critically shaped by one's environment, culture, and scope of thought. People tend to see the world through a lens local to them, and unless localized viewpoints and developments of AI are aggregated and permitted, division is enforced, because consciousness and the reasoning behind AI's making are unequal.

Implicitly biased development of AI marginalizes Africans and polarizes the representation of a diverse and equal world. From a normative focal point, under-representation leads to misrepresentation and misinterpretation of a particular race or group of people, and this misrepresentation can lead to rights violations, discrimination, and violence. The hierarchy of AI production is not inclusive in a holistic sense: intelligence itself is unbiased and has no color, but AI, owing to the polarity of its workforce and institutions, tends to leave the marginalized in an abyss, failing its purpose as an inclusive technology based on togetherness. There is a systematic flaw in its creation and its delivery to developing nations.
As AI grows exponentially, so does its impact on the communities it marginalizes, both as a resource for the forces of production to harness in realizing humanity's full potential and in terms of its literacy and accessibility.

AI marginalizes people from developing countries, specifically people of color. Fundamentally, it reopens the conversation about the colonial mindset's architecture of oppression and a "backward Africa", and ultimately threatens sovereignty through the presence of supreme digital powers. Colonialism created a single, linear, objective history of humanity as a way to justify the conquest of the non-Western world, and some uses of AI follow this colonialist mindset.

Only a third of the people in a field are needed to entrench segregation, and AI, with its biases, is close to tipping the scale toward reinforcing digital resource segregation. This is difficult to prove: limited inclusivity in how AI operates, and creators who lack full diversity and equity, make its workings subtle and its boundaries permeable.

AI based on norms and practices that produce exclusion and division needs to change, and our study explores how to achieve that through the local lenses of developing countries: reiterative emulation in learning AI and accessing it easily so it can be localized by local communities, relearning of better practices by industry practitioners to counteract biases, and unlearning of the elements that cause division.

Curiosity-driven models of research and digital literacy programs should be spread widely by the creators of AI so that an adept, diverse labor force can participate in its development and counter bias through equitable algorithms. Creators should set ethical tests and principles of inclusion that promote voices from marginalized groups in problem-solving and decision-making wherever AI is involved. Our study will showcase the core deficiencies in marginalized communities, with the aim of bridging the gap, balancing the field's development, and raising interest among all in its equitable progress.

Expected Outcomes

Our workshop will collect inputs from speakers and attendees and develop them into evidence-based reports and prototypes through a design-thinking approach.

A policy lab will be set up for all who would like to take part, where our prototypes and recommendations will be discussed further and translated into policy papers and best-practice guidelines to navigate the loopholes and cracks in the equal distribution of AI.

The session will be in a round-table format to foster participant engagement through questions and comments. Through the collaboration of the online moderator, online participants will also be engaged actively to foster equal participation. Inputs will be gathered from participants after the first phase of speaker input, followed by an open discussion.

Online Participation

Usage of IGF Official Tool.