IGF 2022 Day 2 WS #439 Afro-feminist AI Governance: Challenges and Lessons

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> IRENE MWENDWA:  Thank you for joining us on time.  We're giving everyone just a few minutes to settle and then we'll begin.

Thank you to everyone online for also keeping time.  We look forward to having a full discussion with you all on Afro‑feminist AI Governance: Challenges and Lessons.  Thank you once again for joining us, in person and virtually.

We had given ourselves two minutes before starting, and they are over.  We would like everyone to settle so that we can begin our session for today.  Our session, as those joining us in person can see on the screen, is Afro‑feminist AI Governance: Challenges and Lessons, at the IGF 2022.  We have a Zoom link which, if you want, you can share with your colleagues back in your countries; it is available on the website as well.  You can check out who is on the panel today on the official IGF website.  The panel session is WS #439, where you can get additional information about everyone on the panel today.

We want to request that you bear with us.  We're seated on this very big panel.  It is not our usual style; it is so that we can offer translation, and also because the microphones down here are not working very well.  We are together in spirit, although quite far from you.  Feel our warmth.

For those online, we hope you have your cups of tea ready, we hope you have your coffee, and let's have fun.  If you need to share anything ‑‑ yes, you see Yolanda Lannquist ‑‑ please share in the chat if you feel something is not clear enough.  Do let us know in the chat and someone will be following up and coordinating.  Bridget Boakye and Yolanda Lannquist, feel free to share with us if you feel we need to repeat a question or repeat something, so that we can work together.

To officially start us off, I would like to ask everyone in this room and everyone on the Zoom link: how many sessions on AI have you attended at the IGF this year where you have seen a woman, an African woman, on the panel?  By a show of hands.

One ‑‑ and on the Zoom link as well.  I'm so happy to see that a few of you have attended different panel discussions where a woman or an African woman was represented.

To set the ball rolling, we'll start with the speakers on the Zoom link, Yolanda Lannquist and ‑‑ yes, we'll start with the online team, Bridget Boakye and Yolanda Lannquist.  Then we'll proceed to the in‑person team who are with me here today.

Bridget Boakye, please take a few minutes to introduce yourself and your organization and then followed with Yolanda Lannquist.

>> BRIDGET BOAKYE: Sounds good.  Hello, everyone.

Once again, I see people nodding, I'm assuming everybody can hear me in the room as well.

It is a pleasure for me to be here.  My name is Bridget Boakye once again.  I am the artificial intelligence lead at the Tony Blair Institute for Global Change.  I have a data science background and I'm passionate about using technology to deliver for citizens.  It is my pleasure to have this conversation.

I look forward to sharing a bit more about what we do at the Tony Blair Institute as well as specifically our initiative on African Women in AI as well as AI adoption on the continent in a bit.

Thank you once again.

Over to you.

>> YOLANDA LANNQUIST: Thank you, Bridget.

Thank you for inviting me.  Hello to everyone.

It is great to be here virtually.  I'm in Brussels right now because we have a conference at the European Parliament tomorrow on AI policy.

I'm Yolanda Lannquist, Director of AI Governance at The Future Society.  The Future Society is a non‑profit based in the U.S., incubated at the Harvard Kennedy School of Government, focused on AI policy with a mission to align AI through better governance.

Relevant to this panel, we led the development of three national AI strategies in Africa, won by competitive tender with GIZ's Fair Forward project ‑‑ GIZ is Germany's development agency ‑‑ working with the government ministries of Rwanda, Ghana, and Tunisia, and with Smart Africa.  I'll try to share the recommendations towards inclusive AI policy that we developed in those strategies.

Looking forward, and thank you so much.

>> IRENE MWENDWA:  Now to our physical team here present at the IGF in Addis Ababa.  We begin with Amber.

>> AMBER SINHA: Thank you so much. 

My name is Amber Sinha.  I'm here from India.  I work at the intersection of law, technology and society, and a lot of my research focuses on digital rights and regulatory systems around them.  I'm currently the Director of Research at Policy Data Institute, and I also serve as senior fellow working on trustworthy AI with a foundation.

Prior to this, I worked at the Centre for Internet and Society in India where, until earlier this year, I was Executive Director.

I'm really looking forward to this session.

>> KRISTOPHINA SHILONGO: Hello everyone. 

I'm based in Cape Town.  I'm from Namibia, and I work on several projects at Research ICT Africa focused on AI, specifically AI and data justice.  I'm very interested in adapting feminist frameworks into how we conceptualize AI policies and frameworks.

I'm excited to have this conversation.  I'm very excited to learn from the audience.

Thank you.

>> IRENE MWENDWA: I'm the moderator today.  I'm Irene Mwendwa, a Kenyan working for Pollicy, a Uganda‑based feminist think tank, and I lead initiatives there.  Looking forward to learning from this panel and from everyone.

Over to you.

>> BOBINA ZULFA:  Good morning. 

I'm a data and digital rights researcher with Pollicy, a civic tech organization working at the intersection of data, technology, and society.  Myself, I have been working on this Afro‑feminist framework for AI governance, and I'm looking forward to the session because it is part of the consultation to get more viewpoints to add to the work we're doing.  Yeah, excited for this discussion with everyone.

>> IRENE MWENDWA: Time is moving, so we would like to give Bobina a few minutes to talk about the Afro‑feminist framework we have been working on, which encompasses everybody's contributions, before diving straight into the questions.

>> BOBINA ZULFA:  Just very quickly.  So, this Afro‑feminist AI governance framework ‑‑ one would ask, what do you mean by Afro‑feminist?  We know there are a number of AI frameworks and principles being developed globally.  However, with the work we have been doing at Pollicy, looking at gender and AI on the continent, there are a number of gaps in terms of structural inequalities pertaining to AI and African women, and we felt the need to address them through a framework that speaks to those gaps and issues directly.  It was conceptualized on that basis, and we're doing it together with a number of partners.

Basically, the framework proposes outputs across a number of problems that we think directly affect African women in AI development and deployment across the continent, and proposes a number of solutions.  It is exciting that we get to come out to spaces like this and bring more viewpoints into the production of the framework as a whole.

Yeah.  I look forward to having more discussions with everyone here, even just in this conversation itself.  It is really a contribution ‑‑ yesterday, one of the stakeholders during the Opening Ceremony talked about a UN family, and we believe that to have what we call a global framework, it should be representative of different perspectives from different people and different demographics.  We believe the African women's voice in particular has not been represented in the discourse, and we think this is something that would help directly address that gap.  Yeah.

That's pretty much about this.  Yeah.

>> IRENE MWENDWA: Thank you very much, everyone, for the introductions.

Participants, feel free to follow Pollicy to get the contacts of the team members you see on stage and on the screen.

To get into today's panel, we're going to be discussing issues of marginality; power, not necessarily bias; feminist AI governance and African women being engaged in AI governance; and also the vision of AI for all.

To get right into it, I'm going to address a few questions.  Kristophina, I'll start with you: specifically, in light of the number of principles and frameworks for ethical, responsible AI, why is there a need for multiple perspectives in addressing the challenges of AI systems from across the globe?

And for you, Bobina, on marginality: what do you think is the role of African women in all of their diversities?  Thank you, welcome.

>> KRISTOPHINA SHILONGO: Thank you, Irene, for that question.

It is a very important question, and I want to start this conversation by saying that I think it is sometimes very unfortunate that, as African women and women of color, people who are marginalized and vulnerable in our societies, we so often have to come to these stages and platforms and talk about being included.  Yeah.

Having said that, I think multiple perspectives are important because the principles and frameworks we are drafting and developing determine how the various policy actions and policy areas are operationalized, and which policy issues are prioritized.

First and foremost, if you look at the OECD principles, for instance, and many other frameworks, even the African Union Data Policy Framework, a principle that is present in all of them is creating or developing human‑centred technologies that are based on human‑centred values and fairness.

From an African perspective, we are a large continent, first of all, and we have very diverse values.  From a feminist perspective, the social values that people have, for instance, in Europe, where families and communities are smaller and close‑knit, are different from ours, where groups are larger.  So when we develop AI, we have to ask ourselves: what is the greater interest, and who are the people?  We bring different people to the stage, hear their perspectives and values, hear what risks they face from these technologies, and in developing principles we are then able to come up with appropriate safeguards against the risks the technologies pose to communities.

Take the protection of women, for instance, and of children, and our concepts of what the harms are.  What I'm trying to say is that designing technology with appropriate safeguards requires different perspectives, and these perspectives come about when we talk to various people.  In Africa, for instance, what do we do when we come into communities or countries where our rights are not recognized ‑‑ as women, as queer people, as LGBTQ people ‑‑ or where there are different institutions safeguarding the rights of people living with disabilities?  How do we incorporate those frameworks and principles into the design of AI technologies?

If we conceptualize the kinds of safeguards we need, then we're able to build in the capacity for human intervention and oversight.  So if we say we need to protect women against gender‑based violence ‑‑ we have seen AI‑for‑good technologies in Southern Africa looking at gender‑based violence ‑‑ the way we think of privacy and the way we think of families is very different in Southern Africa.

A lot of women who are victims of gender‑based violence live with their perpetrator and share phones ‑‑ women often do not have phones of their own.  They share phones with the people who abuse them.  If you're using an AI chatbot to respond or provide them with assistance, how does that affect that social issue?  In other countries, where more people have their own phones and don't share them, those systems might work.

Those kinds of technologies can put women at further risk.  So when we say, yes, it's good that we can have technology, good that we can have a chatbot for women to talk to, it is very different on the ground.
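
Kristophina's shared‑phone concern translates directly into design requirements.  Below is a minimal, hypothetical sketch (not a system the panel describes) of a helpline chat session built for shared devices: messages live only in memory, nothing is written to disk, and a one‑keystroke quick exit wipes the screen and leaves an innocuous cover page.

```python
"""Hypothetical sketch: a helpline chatbot session designed for shared
phones. Nothing is persisted; a quick-exit command clears all traces."""
import os

QUICK_EXIT = "x"                                    # fast, easy-to-type escape
COVER_TEXT = "Weather update: sunny, light winds."  # innocuous cover screen

def clear_screen() -> None:
    # Clear the terminal so no conversation remains visible on the device.
    os.system("cls" if os.name == "nt" else "clear")

def respond(message: str) -> str:
    # Placeholder reply; a real service would route to trained counselors.
    return "You are not alone. Type 'help' for local support contacts."

def run_session() -> None:
    history = []  # held in memory only: never written to the shared device
    print(f"Type a message, or '{QUICK_EXIT}' to exit instantly.")
    while True:
        message = input("> ")
        if message.strip().lower() == QUICK_EXIT:
            clear_screen()
            print(COVER_TEXT)  # leave a harmless screen behind
            history.clear()    # drop the conversation from memory too
            return
        history.append(message)
        print(respond(message))

if __name__ == "__main__":
    run_session()
```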

Another thing is the principle of freedoms and how we perceive equality and social justice.  A very important part of AI principles is how we look at the good that comes from the technologies we're developing.  A lot of the time we have to go back and think about the social justice aspects and the structural, historical inequalities that we face as Africans, as women, and as young people as well, and incorporate that in how we design our policies and our AI strategies.

Secondly, the risks posed by AI technologies disproportionately affect certain groups ‑‑ women, people living with disabilities ‑‑ so how do we conceptualize and define harm?  There is no one way: what is harmful to someone in the east of Africa is different from what is harmful to someone in the west or the south of the continent.  So we need multiple perspectives that really encompass and bring together how we define harm, and not posing harm, in how we think about do‑no‑harm technologies.

Relatedly, in research we have done, we have seen that organizations in Southern Africa specifically have had projects looking at conservation of the environment.  Overall, those projects have good objectives.  But when you introduce technology, when you introduce AI ‑‑ if you are looking at conservation of the environment, the data seems non‑personal, not related to any person ‑‑ when you look at those technologies more deeply, they pose a lot of risks to the people who live in these communities.  We have seen, for instance, technologies tracking wildlife in Southern Africa, just looking at the footprints of wildlife.  But in looking at how the wildlife moves, you are also able, in a very insidious way, to surveil nomadic communities like the San communities, who move around with the animals.  You wouldn't think about that ‑‑ what kind of risk would a technology tracking wildlife pose to the people around it?  It also shapes how we think about how technology can impact conservation and environmental sustainability.

Finally, I want to talk about the issue of sustainability.  In bringing all of our perspectives into these principles and how we view them, I think we contribute not only to the economic sustainability of AI, but also to the cultural and the social.

We always talk about Ubuntu, and when we do, we forget about the fact that there are women; we forget the people principles that should be included.  Yes, we want a philosophy, we want policies and frameworks that are based on Ubuntu, on relational values ‑‑ and that is not separate from feminist values ‑‑ yet somehow women are always lost in those conversations where we talk about Ubuntu principles, for instance.

I want to give an example from Namibia, on perspectives and participation.  In the conservation sector, when they came up with a participation framework, they tied it to Ubuntu: the community in a constituency would govern the way shared resources are used within that constituency, using Ubuntu.  Later on, they realized the challenge they had: only men were benefiting from the use of the resources.

We have to look at the value and the harms that come from that.  I'm sorry, Irene, I'm very happy to discuss further.

>> BOBINA ZULFA:  You have gone over a number of things that I wanted to speak on as well.

Really building from the things you have been going over in terms of intersectionality, I think it is important to highlight that the need for this particular Afro‑feminist AI governance framework comes from recognizing that groups like African women are structurally oppressed across a whole number of areas.  As an African woman, in your day‑to‑day interactions with platforms where an AI system is involved, as a Black woman, there are a number of structural oppressions you are going to meet ‑‑ being a woman, and in terms of socioeconomic conditions there are limitations in access and usage, and even issues like language.  Language is a big issue for a lot of AI systems, because barely any African languages are part of the models being developed, and of course that means marginalization in terms of usage and access.  So I think it is very important to point out the need to look at the intersectional needs of people, including African women, in developing and deploying AI systems across the continent, beyond just building systems that target a universal user ‑‑ because there is no such thing as a universal user.  There are people with different needs, and there is a need to pay attention to those needs.

This speaks to the rate at which a lot of AI systems are developed, which is obviously exponential, and there is barely any "let's take care, let's pay attention to the needs of different people" ‑‑ it is just "let's develop, go quicker", et cetera.

I think there is a need to pause and factor in the needs of all the different people, particularly African women ‑‑ that is what we're proposing.

Yeah.  I think that's my contribution to that.
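
Bobina's language point can be made concrete.  Here is an illustrative sketch, assuming the Hugging Face transformers package and the public GPT‑2 tokenizer (neither named by the panel; the sample sentences are my own): a tokenizer trained mostly on Western text splits African‑language sentences into many more pieces, a rough proxy for how underrepresented those languages are in the underlying training data.

```python
"""Illustrative sketch: compare how many tokens a Western-trained
tokenizer spends per word in English versus African languages."""
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

samples = {
    "English": "Good morning, how are you today?",
    "Swahili": "Habari ya asubuhi, hali yako ikoje leo?",
    "Luganda": "Wasuze otya nno?",
}

for language, sentence in samples.items():
    tokens = tokenizer.tokenize(sentence)
    # Higher tokens-per-word suggests the language was rare in training data.
    fertility = len(tokens) / len(sentence.split())
    print(f"{language}: {len(tokens)} tokens, {fertility:.1f} tokens per word")
```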

>> IRENE MWENDWA: Thank you, Bobina and Kristophina.

Those just joining us, thank you very much.  This is the panel on Afro‑feminist AI Governance: Challenges and Lessons. 

I'll go straight to our online team and those joining us virtually ‑‑ Yolanda Lannquist.  We'll be speaking about power, not necessarily bias, to show how the person who has power really is in charge of putting out the best system, the best policy, and so forth.

Over to you.  I hope you can hear me.

Should AI, or artificial intelligence, challenges be approached as power issues and not just bias?  Also, for example, issues around the practicality of artificial intelligence principles and frameworks, algorithmic misinformation, or hate speech.

Over to you.

You have 7 minutes.

>> YOLANDA LANNQUIST: Thank you so much, everyone for the previous remarks.

There could be other perspectives from the panel on this.

In terms of power dynamics, where I bring expertise from hearing from others: a crucial, crucial point I want to make, one that is core to the three national strategies we worked on ‑‑ again, those are Rwanda, Ghana, and Tunisia ‑‑ and that I know Smart Africa is taking seriously as well, is digital inclusion.

This means digital infrastructure: Internet rollout, penetration, reliability, affordability, and access to smartphones.  Once there are more people online, including in rural and lower‑income areas, you get more diversity and inclusion, not just in who has access to AI but, crucially, in whose data is being contributed by digital systems to AI.  That can limit bias.

It is very important.  There are data collection challenges and data shortages in a lot of African countries.  Data collection is a challenge even in the U.S., but particularly in regions where there is less digitized data, where there are more paper records, as in public sectors, and where fewer people are online, including along demographic lines.  We have smart young people, graduates from computer science programmes, using pretrained models from the west ‑‑ models trained on Western data.  As Bobina mentioned with languages, there is not enough local language data, not enough local image data, and that's why digital infrastructure and digital inclusion are so important.
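
To illustrate the representativeness point: a minimal sketch, assuming pandas and with a hypothetical column name, groups, and population shares, of a simple audit that compares each group's share of a training dataset against its share of the population and flags large gaps before a model is trained or reused.

```python
"""Illustrative sketch of a simple dataset representation audit."""
import pandas as pd

# Hypothetical training data with a demographic 'group' column.
train = pd.DataFrame({"group": ["urban"] * 80 + ["rural"] * 20})

# Hypothetical population shares the dataset ought to reflect.
population_share = {"urban": 0.45, "rural": 0.55}

dataset_share = train["group"].value_counts(normalize=True)

for group, pop in population_share.items():
    data = dataset_share.get(group, 0.0)
    # Flag groups whose data share falls well below their population share.
    flag = "UNDERREPRESENTED" if data / pop < 0.8 else "ok"
    print(f"{group}: dataset {data:.0%} vs population {pop:.0%} [{flag}]")
```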

We can add to that AI literacy and digital literacy campaigns.  A couple of the countries have targeted programmes specifically for women in AI and coding for AI, and this extends to education and curriculum reform to make sure that women and other demographics, including rural areas, are really included.  Also, we recommend AI hubs and AI meet‑ups not just in the capital city but in other parts of the country, again with targets for women.

Along the lines of data as well, the national AI strategies have a core pillar on data sharing and data governance guidelines.  Besides governance for data security and privacy, the data protection commission, or a key actor in the AI community, can provide guidance on how to collect and share data responsibly.  This can facilitate data sharing so that we have more data and more representative data.

I will go ahead and hand this over to others that want to speak more specifically on power dynamics if there are any other views or questions on this.

>> IRENE MWENDWA: Thank you very much, Yolanda Lannquist.

Because of time, I'll move to the next set of questions and then I'll be able to ask for ‑‑ checking online for any comments and questions and then we'll take comments and questions in the room.

Allow us to proceed.

Amber, this is for you.  Are data justice ideals, such as data ownership and meaningful consent, within the current capitalistic data exchange regimes and global structures, just that ‑‑ ideals?  Or is there a way to realize their practicability?

>> AMBER SINHA: To begin with, we need to recognize the inequities that exist in the data economy.  When we talk about data in governance debates in the recent past, perhaps there is no better indicator than the very metaphors used to describe it.  In the last few years, besides data being likened to oil, it has been compared to mineral deposits, to dividends, even to a permanent fund.  On the other end of the spectrum, people have looked at the harmful impacts of data processing; people like Martin, for instance, have compared it to carbon dioxide, and others have compared its potential for harm to uranium or other pollutants.

When we have to think about confronting the structural inequity, we must try to figure out what the appropriate metaphor for data would be.

Data ownership, for instance, is something that you mentioned, and it has a lot of intuitive power as a metaphor.  When we start likening data to property ‑‑ or, as a lot of policy documents across countries have done, fashioning data as an asset to be leveraged ‑‑ the thing we have to be careful about is how it positions different sets of actors within the existing data economy.  In the way that data and its flows exist right now, ownership rights to data are largely controlled, through non‑negotiable one‑sided contracts, by data processors and data controllers.

Even if you were to reverse that trend, we have to be careful to ensure that data ownership, even when vested in data subjects, doesn't lead to exploitative outcomes, because in the current market economy what that might end up meaning is that data positions itself more and more as a tradeable asset, and people who are more disadvantaged will actually get a worse deal for their data.

The other way to look at data is data as labor.  Data is a very clear manifestation of the labor put in by those who are involved in its generation ‑‑ people who are participants in the processes which lead to the generation of the data.  There too, when we start fashioning data as labor, we have to be careful about what kind of labor value attaches to different demographics.  Somebody's data can be deemed a luxury product if they belong to a better‑off demographic; there is more advertising interest in their data.  How we assign that labor and value is important.

To my mind, the best way to fashion data is to think of a metaphor around autonomy.  Going back to what we talked about earlier ‑‑ the rights of collectives ‑‑ autonomy can exist at various levels, at individual and collective levels.  Looking at data as something that arises from people's bodies, and at how it compromises individual choice, and centring that part of the conversation, is important.  That is not happening in a lot of debates around data.  You spoke about practicality; there are various pieces to that conversation.

At the first level, we have to agree that autonomy is an appropriate metaphor for data, and then we need to look at ways in which it can be realized ‑‑ collectives which represent identity‑based interests and can confront the power dynamics at play.  Individuals are often not in a position to exercise that sort of power.

Then, looking at how collective choice versus individual choice needs to be balanced: perhaps in cases of conflict we have to be very clear that we privilege individual choice and individual autonomy over the collective interest.

Those are pieces of the conversation that need to happen.  What we need is more workable models ‑‑ collective experiments that try to realize that vision of data.

>> IRENE MWENDWA: Wow.  Thank you very much, Yolanda Lannquist and Amber Sinha.

Just before I proceed to Bridget, I will pick out something strong from the last two speakers.

One is the metaphors we use.  When we read and write articles and academic papers, we say data is the new mineral, data is the new ore; we have to be quite careful.  If we position it this way, then we continue to disadvantage those who were never able to afford those minerals in the first place.

I'll go straight to Bridget because we have other questions.  Bridget, can you hear me?

>> BRIDGET BOAKYE: I can.  Yes.

>> IRENE MWENDWA: Thank you.

So I'll just jump straight to you and ask how can governments be empowered in terms of artificial intelligence oversight responsibility?

>> BRIDGET BOAKYE: Thank you for that question, Irene.

It is one that is really at our core at the Tony Blair Institute for Global Change, where we do two main things: public policy and government advisory.  Literally every day our question is how we help governments deliver for their citizens.  There is a reason why I really like this question: we can humanize governments a bit in thinking about how to support them in their work.

In technology especially, I think we tend to ascribe this overarching power to governments, when in fact we can do a lot to support them in thinking about how they deliver the AI future that we all want.

In terms of this question ‑‑ how we empower governments to deliver on this and to use AI responsibly, including oversight ‑‑ we think of it in two ways.  We recently published a report, which I would invite everyone to read if you're interested in learning more about this.

I think the first way is to support governments to really understand the opportunity available to them through AI, and then to help them test and adopt the technology responsibly.  We say this for two primary reasons.  One, it is really difficult to regulate or have oversight over something that you don't understand.  We think there is a tremendous opportunity to educate policymakers and governments on AI.  An example I like to give from this year: we have seen some of the oversight hearings surrounding technology harms, et cetera, and it is really clear that some of our policymakers, in Africa and around the world, don't really understand what AI is.  It would be easy to ridicule this fact.  Instead, I think that for such a broad and fast‑moving technology as AI, we have an opportunity to bring civil society and industry together to provide trainings and workshops that really help policymakers understand what the technology is, the harms, and the benefits as well.

I think the other reason why it is important to educate policymakers in this regard, as a step towards empowering them, is that there is true value in having policymakers understand how they can use the technology responsibly to deliver effective digital applications and systems for their citizens.  Yolanda, Kristophina, and Amber talked about the concerns we have here.  Through the concerns, there is a clear opportunity that presents itself, which policymakers can take advantage of if they know what the technology can do in the current context and constraints, rather than in the hype of what AI is.

The second way we really believe in empowering policymakers, and taking this facade away from who they are and what they do, is the opportunity to provide frameworks ‑‑ which is what we appreciate about what Pollicy does: not just educating policymakers, but providing tools for them to do their work.

We cite that responsible AI is key for policymakers to enable transformation in the public sector, key for foreign direct investment, and key for talent attraction and retention.  This idea that responsible AI has clear dividends needs to continue to be shared and developed, and the tools we provide ‑‑ from Afro‑feminist frameworks to some of the tools in our toolkit ‑‑ continue to empower governments to think about AI more holistically: to think about the issues of digital inclusion, ethics, procurement, et cetera, that we have discussed on the panel.

Again, thank you for the question, Irene.

To sum it up, we need to take away the mysticism around who governments and politicians are.  We can understand that they can be supported and empowered through the work all of us do in our respective spaces, and that education is really key to advancing the issue of oversight and responsible governance.

Thank you.

>> IRENE MWENDWA: Thank you very much, Bridget.  We are literally almost the same person ‑‑ quite passionate about political and government engagement.  I would like to request all of you to find ways to continue to engage your local governments and elected officials, beyond your national‑level elected officials, in AI and technology discourse.

I have to interchange questions; I thought the next question would need more time.  It is based on the first question we asked in this room: how many of you have attended sessions this year on AI, on artificial intelligence, where you had a woman speaking on the panel, or an African woman speaking as an expert on AI?  We asked that right at the beginning.  For those that joined us later, I hope you're able to interact with that question and think about it, for 2022 and for your future work: continue to invite African women to speak on your AI panels and to engage in your artificial intelligence work.

Techno‑chauvinism is ever more prevalent in the development of AI programmes, and we know this happens in the Global South under the pretext of progress.  How can we enforce safer artificial intelligence, or AI, development and deployment that does not violate fundamental human rights and dignity?

>> BOBINA ZULFA:  First of all, I want to start off by pointing out that even in this conversation about techno‑chauvinism and its effects, we are cognizant of the potential that AI and data hold for the continent and for African women as a demographic.  We would like this potential to be realized in ways that benefit this group.

However, techno‑chauvinism is really this outlook on tech, this approach where we look at tech as a fix to all of our problems.  It is a phenomenon that's obviously global, not just on the African continent, but on the African continent it is more prevalent, especially because a lot of technological progress is easily looked at as advancement, as development.  We just go along with anything that's brought to the continent, and particularly here, in terms of AI systems being developed and deployed across the continent.

What we're saying is that there is a need to look at the harms these technologies are posing, because research and a number of studies have shown that harms come with many of the technologies being deployed, particularly in AI‑for‑development programmes.  We have seen a number of poverty programmes on the continent which have ended up enabling surveillance, because in watching the progress of villages and communities, what we see is active surveillance of citizens.  That is a violation of privacy and of dignity as human beings; people's humanity is basically attacked at a fundamental level.  This crosses over to many areas where AI is being deployed, and the same with biometric systems of identification, where a number of people cannot access services like IDs and passports.  Of course, this goes back to our initial discussion about intersectionality: looking at how a number of these issues particularly affect African women who are living at the margins of society.

We would just say there is a need to move away from looking at any tech that's coming onto the continent as progress, as advancement, as development, and rather to question: how does this work for the continent?  How does this work for the population, for African women, and for the continent as a whole?  Yeah.

>> KRISTOPHINA SHILONGO: I agree.

I would add further that, in framing tech as always progressive, it becomes a question of how we understand ethical action and also the social constraints that impact that ethical action.

From a big tech or start‑up perspective, I think one way we can come up with AI technologies that don't violate fundamental human rights is in diversifying how we think about ethical action.

In terms of tech start‑ups, often the go‑to actions are diversity hiring, for instance, or talk about race, or more data ‑‑ more data mining and extraction ‑‑ without always bringing up those constraints.  Yes, we want ethical AI and technology based on ethical principles, but it comes down to our understanding of that.  How do people coming into communities on the continent, within the global majority, understand the social dynamics that impact how we conceptualize ethics?  Another one is how we think about consent: is consent taken in the "I accept these cookies, I give you my consent to use this technology, my data, for this technology" sense?  We need to diversify how we think about consent as well.

Another one, in my opinion, is very important, and Bobina alluded to this: the balancing of how we prioritize policy interests.  We have seen many people talk about AI for energy.  We don't have energy.  I have lived in South Africa; we have gone eight hours without power.  How will we power a cloud system?  Those are real things.  Are we going to divert the policy conversation away from building energy infrastructure to AI?  We need to think very critically about what investments we're making, what policy interests we are foregoing, and at what expense, for the sake of developing AI technologies.

I think as well, when we talk about AI for good, tech for good, the "for good" is always at the end of the technology.

We don't think about what happens in the process, across the life cycle of the development of that technology.

We should think at every stage: when data is mined, what are the harms there?

At design, what are the potential harms and risks?  And at deployment, we should think about all the risks at that stage, as opposed to what we normally do, which is to say, okay, this is good.  I have seen one on tracking which kinds of clients ‑‑ HIV patients ‑‑ are most likely not to come back for their treatment.  Okay, we want to see the behavior, but what does that do in terms of how those people come back?  How do we stereotype who doesn't come back for treatment, and how do we see this pandemic?  It may look good at that final stage ‑‑ actually it is not very good ‑‑ and it is causing more harm at the stages before that final stage.

I will stop there.

>> IRENE MWENDWA: Thank you very much, Bobina and Kristophina.  You have touched on challenging issues that continue to ail the development discourse, and it is for everyone in the room to find ways to better address these challenges, so that tomorrow things can be better and the environments we work in can be better.

I would like to go to the next question, and then, if anyone online or in the room has questions or comments, prepare yourselves and we'll be able to take them.

Yolanda, over to you, and then Amber and Bridget maybe with your contributions: how do you believe that data protection and governance could be made more effective on the continent, and in the Global South generally, considering what's happening currently?  You've alluded to the fact that you have supported the governments of Rwanda, Ghana, and Tunisia; maybe you can share some learnings from those countries that other countries can emulate or consider while they prepare their AI strategies.

Thank you.

>> YOLANDA LANNQUIST: Thank you so much.

This point is about capacity building for government, for data governance as well, and it doesn't need to be direct, doesn't need to be regulation.  It could be just providing guidance, for example, for AI developers and the technical community: on appropriate local ethical guidelines, on sharing data in an ethical way, on preserving privacy, on cybersecurity, on making data collection more inclusive and representative, et cetera.

When we talk about AI policy, we don't just mean regulation; we could be equipping the community.  I mentioned three pillars earlier that are key to enabling the AI ecosystem in a way that supports inclusion: inclusive education policy; digital infrastructure and digital inclusion; and data sharing and data governance.  These three pillars are common across the national AI strategies we supported in Africa.

There are three or four more: targeting AI adoption in key sectors of the private sector; targeting AI adoption in the public sector, including capacity building for policymakers; and then other pillars usually include ethical guidelines, supporting the AI community, and scientific research.

The Tony Blair Institute, as Bridget mentioned, is showing great leadership on capacity building for government policymakers.  I shared in the chat a report on AI for policymakers by GIZ, and with Smart Africa there are trainings going on for policymakers.

I think also, as Bridget mentioned, we need to equip and support the public sector to be able to enforce data privacy, because often countries have data privacy guidelines but may not have the capacity to enforce them ‑‑ and cybersecurity as well, which is extremely important as we have more digital devices that are not secured.

The private sector, across countries, may not be aware of or incentivized enough to uphold data privacy, cybersecurity, ethical guidelines, representation, accountability, transparency ‑‑ so many issues.  That's where capacity building and training for the public sector is so important, in providing, as mentioned, guidelines on data sharing or tools for trustworthy AI.  The OECD AI Policy Observatory, where I am an expert as well, has a toolkit and framework for trustworthy AI.  They have a database, at OECD.AI, where developers all over can find assessment tools and guidelines that they can use directly, as well as ethical principles which can be adapted for local use.

Of course we want these to be adapted locally because, as mentioned in the first question, so many ethical guidelines as well as AI policies are developed in the north but used and adopted in Africa ‑‑ sometimes directly, for example governments we work with that would like to use the EU AI ethical guidelines or the OECD ones as a starting point, or de facto, for example when people use AI models pretrained on Western data.

Another core recommendation for the public sector is a responsible AI office, but often governments don't have the resources for that.  Having an in‑house coordinating body that's mandated to coordinate across different ministries is really important as well.  We see that in the U.K., Singapore, Egypt as well.

>> BRIDGET BOAKYE: Thank you.  I don't know if I can come in here on one point, Irene.

Yolanda, I couldn't agree more on the regulatory and non‑regulatory examples that you set out for how we improve and continue to support data governance and policies on the continent.

I think the other thing we have seen in our work at the institute that's worth mentioning is what diversity among policymakers does for more effective governance.

When I started this role in early 2020, about 28 of 54 African countries had data protection laws, and about 15 of those countries had data protection authorities to enforce the laws.  As of the start of this year, from some of our research, the number has increased to about 33 countries on the continent with data protection laws, and there are 18 data protection authorities.  We have seen governments appoint a record number of women to these positions: women make up about 45% of these appointments as data protection commissioners.  I think this is a really powerful place where we see the intersection of gender, feminism, and governance: when we have African women at the helm leading and supporting the work of effective technology governance, effective AI governance, we see more activity in the space ‑‑ more data protection laws, despite so many constraints.  I want to take a minute to applaud the work that a lot of the Ministers of ICT and data protection commissioners are doing in terms of providing those regulatory guidelines and the non‑regulatory supports, whether self‑assessments, industry guidelines, things like that.

A lot of the women Ministers that we speak to are also doing a great job of bringing more exposure and public education to young girls and women as it relates to the field.

So again, I wholeheartedly support Yolanda's points around enforcement.  We can't stress that enough.  There are laws, and we know we need more funding across the continent to support governments in enforcing data protection policies.  I think it is a tremendous feat that we have more women pushing for non‑regulatory tools to do work that is hopefully more inclusive and responsible than we would have otherwise.

>> AMBER SINHA: I wanted to pick up on what Bridget talked about: data protection laws and data protection authorities on the continent.  That's a key piece.

You have emerging regulators entering a space that before existed very much in a state of regulatory vacuum.  The challenge for the regulators is to move quickly from fairly primitive data practices to a robust data governance ecosystem.

We're talking about jurisdictions where there is often a capacity challenge, and a resource and funding challenge as well.  So one thing that perhaps needs more attention is how regulation, and particularly the enforcement of regulation, can be smart.  For countries who are, for lack of a better word, arriving late to the party, there is the advantage of being able to draw on about 20 to 30 years of enforcement practices in other parts of the world.

There are strategies which have worked, or not worked, in other parts of the world: for instance, the use of the trust model in a country like Japan; the use of very high monetary penalties by the U.K. to convince, or try to convince, companies to enter into enforcement contracts; the gradual move from hard power to soft power in a place like Spain; or the U.S., where the FTC has largely regulated by making an example out of large players.

The key thing, when you are operating in a state of limited capacity ‑‑ small government in that sense ‑‑ is to draw from these learnings and see what can be applied in the local context.

I think that conversation perhaps needs to happen much more.

>> IRENE MWENDWA: Thank you very much.

Everyone has been speaking on the use and importance of data protection acts, policies, and authorities in countries.  Of course, since we are at the IGF, we know some countries are yet to begin these processes or are in the process of putting them in place.  Maybe it is also a call for every actor in this room to find ways to engage in the discourse back in your countries, to push our governments to develop these quite rapidly, so that they can be used by both public and private sectors to seek the justice they need.

Then, going to our final questions ‑‑ and we have one for the audience as well, so you can start preparing your responses ‑‑ let me see, who haven't I heard from the most?  I think Bridget; I'll start with you.  Sorry to put you on the spot.

Considering the opacity of the majority of AI systems, how can a climate of trust around artificial intelligence development and deployment in the Global South be realized?  I know you have already alluded to some of that.  You can speak to this with a vision of women, of African women and girls, in mind.

Thank you.

Then Yolanda Lannquist, then my team here on the stage.

>> BRIDGET BOAKYE: Thank you, Irene.

Again, it is wonderful to be on such an esteemed panel.  I know I can't mention or won't be able to mention all of the tremendous initiatives that are ongoing to do this.

I will just speak to it from our perspective, in terms of what we see governments in particular can do in building trust in AI.

At the beginning of the year, we published a report on trust in technology, specifically AI, where we looked at a number of countries, including countries in Africa: Nigeria, South Africa, Egypt, and Kenya were the four we looked at on the continent.

We asked questions around how people saw various AI use cases.  What we saw, generally, was that in Global South emerging markets, as well as the African countries I mentioned, there was a higher level of acceptance for various AI use cases than in the more developed markets of, let's say, the Global North.

Specifically people saw AI as an opportunity to improve welfare distribution, to improve health outcomes and to improve agriculture, et cetera.

I agree with the point on being keen not to oversell the hype of what technology can do.  What we came away with from the research is that in the Global South, and in Africa, there is generally a perception that, because of some of the constraints we have around resources, technology can help us ‑‑ as Bobina also mentioned ‑‑ to address many of the challenges we have around delivering more equitably for everyone.

In terms of building trust, I provide that context to say that in the Global South we do see some level of trust.  It is not tremendously high ‑‑ not 90% as we would like ‑‑ but it is much higher than in other places.  We can build on top of the trust that's there by, one, not overselling what the technology can do.  I can't say that enough from where I sit.

Two, we need to promote public knowledge about the technology and specifically celebrate and build on positive use cases.  We have seen a number of positive examples coming out of places like Ghana, Kenya, et cetera, around smart healthcare and agriculture.  We can do more to disseminate the benefits to a larger population, as opposed to, as was mentioned earlier, just the urban centres and a small tech crowd in many African countries.  In Ghana, where I sit, it is a specific group of people that consistently gets this information.  How can we make sure there is more public knowledge ‑‑ in diverse languages, for women, relevant in the local context ‑‑ that we can share?

The other thing, in terms of building trust: we have all mentioned to some extent the fact that a lot of the AI technologies in Africa, especially those currently being deployed by governments, are actually imported into the continent.  Earlier this year there was an MIT Technology Review report on surveillance technologies in Southern Africa.  It was really quite scary in terms of what's going on when we have technologies being brought in from outside.

The last thing I'll say here ‑‑ I know there are many more that our panelists will add ‑‑ is that it is important to work on international cooperation on trust and AI.  The technologies in Africa are not only from Africa; they are not primarily being developed by African developers.  So if we can build a global conversation where the Global South and Africans have a seat at the table around what responsible, ethical AI looks like, we can avoid importing technologies that are not effective and that don't promote the trust we have to see in the technology.

I'll stop there.  Overall, we think there is a lot more governments can do in terms of facilitating trust, and there is a tremendous opportunity given the climate: what people want to see from their governments, what they want to see delivered on the continent, and what they think technology can do more generally.

Thank you.  Back over to you, Irene.

>> IRENE MWENDWA: Just confirming the time?  Yes.  11:00.

Yolanda, I'll bounce the question to you to speak for a few minutes, and then to my panelists here with me, and I have one general question for the audience.  Let's spend the next 15 minutes reflecting on that.

>> YOLANDA LANNQUIST: Thank you.

To amplify Bridget's points that were brought in ‑‑ all very important and on point.

The Future Society's mission is to align AI through better governance ‑‑ not just to promote AI adoption, but responsible, inclusive, sustainable AI adoption.  As Bridget mentioned, many countries in the Global South have a lot more enthusiasm for innovation, and less mistrust or precaution than the national AI strategies in Europe, for example.  There tends to be more of a culture of "this will help us in education, this will help us in agriculture, this will help us in transport" without having done enough testing or preparation and precaution in terms of ethics, safety, and security.

For example, if we're using AI in the classroom, we're using AI on children ‑‑ and AI systems fail, and there are many other failures, as mentioned.

Ethics by design, safety, privacy, security, testing, human‑centred design ‑‑ there is a centre at Stanford University on human‑centred AI, which is all about making sure that it works for the person ‑‑ these are critical.

Another critical point, as Bridget mentioned, is increasing public knowledge.  When we have more public knowledge ‑‑ AI literacy, digital literacy ‑‑ in the community, then the community can hold AI companies, developers, and regulators to account and support governance by providing feedback, through social media, through their use, and in feedback to government: hey, this isn't okay, this is a failure.  They know, for example, that this could be fixed through more testing and controls and representative datasets.
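
As one concrete example of the kind of testing such a community could demand: a minimal sketch of a demographic parity check, the gap between the rates at which a model grants a positive outcome to two groups.  The decisions below are made up, and the 0.2 threshold is an illustrative policy choice, not a standard.

```python
"""Illustrative sketch of one common fairness test: demographic parity."""

def positive_rate(outcomes: list[int]) -> float:
    # Fraction of decisions that were positive (1 = approved).
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions, grouped by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2 of 8 approved
}

rate_a = positive_rate(decisions["group_a"])
rate_b = positive_rate(decisions["group_b"])
gap = abs(rate_a - rate_b)

print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}, parity gap: {gap:.0%}")
if gap > 0.2:
    print("Large gap: check training data representation and model controls.")
```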

We can also build trust ‑‑ I'll put in a last plug ‑‑ by referring to examples such as the risk management frameworks by the OECD; I pasted that in the chat.  There is a collection of tools there, and of course these should be developed locally as well.

I'll stop there and see if others have key points.

>> IRENE MWENDWA: Thank you.

We'll start with you.

>> BOBINA ZULFA:  In the interest of time, I'll point out very quickly that there are a number of things that could be considered in terms of creating a climate of trust in AI development and deployment on the continent.

One of them is accountability, and I think that has to be buttressed across the entire ecosystem, from the developers to the governments that are buying and deploying, or even just developing, these systems on the continent.  Where harm arises, we should be able to address it directly, because a lot of AI models really just function as black boxes.  I think accountability is one of the things we could definitely look to for a climate of trust.
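
Bobina's accountability point can be made concrete with one common mechanism.  Below is a minimal, hypothetical sketch ‑‑ not a system the panel describes, and the path and field names are illustrative ‑‑ of an append‑only decision log: every automated decision is recorded with its inputs and model version, so that harms can later be traced, audited, and contested.

```python
"""Hypothetical sketch of a decision audit log for black-box systems."""
import json
import time

AUDIT_LOG = "decisions.log"  # in practice: append-only and access-controlled

def log_decision(model_version: str, inputs: dict, decision: str) -> None:
    record = {
        "timestamp": time.time(),
        "model_version": model_version,  # which model made the call
        "inputs": inputs,                # what it saw (minimized, anonymized)
        "decision": decision,            # what it decided
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a decision so it can be audited or appealed later.
log_decision("credit-model-v3", {"income_band": "low", "region": "rural"}, "denied")
```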

>> KRISTOPHINA SHILONGO: In a sense, I see opacity as an opportunity.  Oftentimes we look at the harms and say, these are the risks.  Maybe we could instead ask questions about what we don't understand about a certain AI system ‑‑ questions like legitimate aim, for instance, or proportionality ‑‑ and if we don't have an answer, then we should build in policy frameworks and safeguards to answer those questions.  What we don't understand about AI, we have to develop policies or governance frameworks for.

I also think we are at an advantage in terms of what was said about learning from other countries, and also learning across sectors.  Earlier this year at Research ICT Africa we did a study looking at AI use in Southern Africa, and one of the things we did was try to come up with a taxonomy of the kind of AI that's present in the region ‑‑ identifying whether it's functional, text‑based, analytical.  In doing that, we came to the realization that the different sectors are siloed: within different sectors in Southern Africa, people have been thinking about technology and how it is applied in those sectors.

We have seen that we can take from the conservation sector, we can take from health ‑‑ there are learnings that each of those sectors can bring to the table in developing national AI strategies.

Another one is simply understanding that we don't understand how AI systems work.

We should also keep in mind that a lack of evidence of harm is not the same as evidence of a lack of harm.

So just because we don't see the harm doesn't mean it is not there, and it also doesn't mean that we should not put in place safeguards to protect human dignity and human rights.

I'll stop there.

>> AMBER SINHA: I'll just sort of add to what's been said so far.

In terms of the opacity problem that we face, in the last few years there has been almost an explosion of literature in the field.  What's happened as a result is that there is not a lot of consensus on what the concepts even mean.  What we risk creating is a lot of models without very clear ideas of how they may be implemented.

I think what we need to do is centre the need for meaningful transparency, which can actually foster accountability.

We need to reframe the idea of transparency in some ways, because in most situations it doesn't lead to the kind of accountability that it seeks to achieve.

Particularly in the context of AI, the fact that we deal with opaque algorithmic systems complicates the landscape further. Perhaps what we need is not complete transparency about a system, but enough transparency that the users who deal with it can form a conceptual model of what they are working with. That may be enough to start the process of accountability.

With that, I'll pause.

>> IRENE MWENDWA: We have a question for you all. The questions are not in the order we hoped for; we had 90 minutes to cover how African women engage with AI and how governments can do better, and time is not on our side. Even as we head out, we would like you to reflect on your roles. I know there are journalists in the room, government officials, students, and mainstream Civil Society and Human Rights actors from different areas.

Who are the key stakeholders that should be involved in the artificial intelligence and AI ethics discourse, and why? How can they all be engaged in the conversation meaningfully? By show of hands, if you want to come in and share, there is a mic in front of you; you can just switch it on.

Anyone?

I'll start off by saying this is something I'm quite passionate about: how journalists, in both legacy and alternative media, can bring this whole conversation closer to home. You all watch your news now on your phones and digital devices, on your television sets, or on the radio. Across all the news you hear, I can guarantee you that AI and artificial intelligence are rarely talked about in African media. The power that journalists and community reporters have is to promote this topic and this debate, to start talking about the issue so that our communities, down to the local government level, can start picking up on it. And for the Human Rights defenders here in the room who don't ordinarily speak on or engage with technology work or artificial intelligence, it is for you to keenly conduct Human Rights based assessments of the policies coming up around different technology models, data protection, and the AI policy frameworks being developed in conjunction with some of these communities. When you conduct a Human Rights assessment, you will be able to see where you need to ask governments and other stakeholders to put emphasis on inclusion and on promoting good technology as opposed to tech for good, as Kristophina Shilongo put it, and to respect the hierarchy of needs. At the end of the day, we know there is a hierarchy of needs, as Kristophina Shilongo and other panelists shared before.

You cannot prioritize the development of AI models when you do not have reliable power in your countries. For the different actors in this room who feel that AI is far removed from your main day-to-day work and strategies, it is important to start considering it, because it is here to stay, and we need models that work for us and are built by us as Africans.

On that note, because of time, I would like to thank everyone who has contributed today, Yolanda Lannquist, Bridget Boakye, everybody who joined us online.  Thank you very much. 

I would like to request that you check our social media handles; we will be sharing some resources from The Future Society and the Tony Blair Institute, along with other policy and research resources we already have.

We would also ask that you find us outside and engage with us so that we can share additional information based on this panel. We thank you all for coming here today, and thank you for joining us online.

Have a wonderful rest of the week. We'll talk soon! Bye!

I forgot to ask if there were any questions.  I see one already.  Let me switch off.

>> AUDIENCE: Can you hear me? Okay. I'm based in Johannesburg. I have two questions, if we can stay a bit longer.

First of all, thank you for putting this perspective together, it is refreshing, empowering, different, so it is great.

My first question: much of the visible work on race and gender is focused on the U.S., although it also has roots in Africa. How do you engage with that kind of literature? The relationship between race and gender in the U.S. is different from South Africa, India, Namibia, and so forth, and sometimes the U.S. doesn't get Africa. How do you critically make use of this work while also making sure it is rooted where we live?

The second question may be a bit strange; I'll try to make it more precise. From the Afro-future perspective, is there anything on Internet fragmentation? There is a session on that here at the IGF, and the other room is really packed; you discussed AI here, but the two things are related.

An example: two or three years ago there was a lot of fuss about the project that a facial recognition company called CloudWalk signed in Zimbabwe. The problem was, is this going to be massive surveillance of everyone, or is CloudWalk training its facial recognition software to recognize African faces, and whose data is it? Does it belong to the Government of Zimbabwe or to CloudWalk? The point is, maybe the Afro-future framework doesn't have a perspective on Internet fragmentation, but it would be great if it did, because it would bring something new, as we saw today, beyond China's position, Singapore's, or the U.S.'s.

The second point: what you focused on is incredibly important, but I also see some risk of self-marginalizing. If we focus only on the margins, we don't focus on the big questions; yet maybe there is something new that we will all benefit from if we use that specific framework.

>> IRENE MWENDWA: Any other questions? I don't want to miss anyone out. Yes, one, two, okay.

>> AUDIENCE: I'm from India. Thank you, Amber, for being here; you brought a valuable perspective to this.

The idea of Africa, in terms of even simple data inclusion, is totally missing. To begin with, I am a little bit of a pessimist, in the sense that even these big AI models, whatever you call them, have to take a step back. We can come to these conversations only with data inclusion; if data inclusion is not happening, this conversation is basically baseless.

The second point, which a number of panelists brought in: I think the idea of explainability will only make sense when we have a credible number of ML and AI engineers in Africa and in the Global South, whatever you call it. Right now, AI explainability is still for engineers; it does not really explain anything to people like public policymakers or bureaucrats.

I think these two fundamental components are missing. Without them, you cannot set a direction for how inclusion of the Global South in general, in Africa, South Asia, or wherever, can come about.

Thank you.

Just a comment, I don't require a response.  Thank you.

>> AUDIENCE: Thank you.  I think she ‑‑

>> AUDIENCE: Thank you so much for this panel. It really gives me so much empowerment, so much energy, to continue advocating for Afro-feminism in a technological context.

It was really inspiring.  I would love to see more of those panels for sure.

I was asking myself, because you talked specifically about the structures of oppression of African women and tech chauvinism: how do you think colonial continuities play a role in AI policy, and what can be done to decolonize those processes, especially in a development context?

>> AUDIENCE:  Well, thank you.  Thank you so much for the panel.  It was amazing.

I have two main questions.

The first is more practical, at the advocacy level.

Bridget brought very interesting points regarding how to communicate with governments about these issues, how to bring policymakers over to our side so they think about responsible, ethical approaches to technology. I would like to know if you have specific experiences to share about this. Just to give an example, I'm from Brazil, and we have conducted research in South America about the deployment of facial recognition systems; tech chauvinism and tech solutionism surround the dialogues we have had, for instance in the field of public security. How do you deal with that, especially with professionals who most of the time are not used to a Human Rights discourse? It would be lovely to hear your experiences.

The other question relates to a point you brought up about how to bring information and build capacity so that people understand how these systems work; Bridget also talked about good examples. I was interested in your thoughts on bringing meaningful transparency, to use Amber's words, so as to empower individuals to question these technologies, especially when the technologies produce negative outcomes for them.

Finally, something very practical: could Bridget also share the study she mentioned about Egypt and Kenya? I can't remember all of the countries. Thank you very much.

>> IRENE MWENDWA: There's another one. Okay. Our time is up, so we'll keep it short.

>> AUDIENCE: It's such a great panel discussion.  I really enjoyed it.  I know some of you personally and work with you so, yeah, it is great to be here.

I guess my question is this: there is a literature in Africa that has outlined various theories of African feminism. For example, Sylvia Tamale, the amazing Ugandan scholar, and others have all written about Afro-feminism, about African feminism from the realities and perspectives of African women. As we talk about Afro-feminism and governance, how can we incorporate their works into the governance and policy space? I'm sure you all agree their work is critical.

Thank you. 

>> KRISTOPHINA SHILONGO: Thank you very much for all the insightful questions. I think we're going to think a lot about them; they will inform our work.

I'm just going to answer one question, tied to what the gentleman who spoke first asked about how critical feminist theories from the West and North America intersect with those from the continent. I say this very politely: that question is, in a way, one that is weaponized against us as Africans, as was the second question about how we view our frameworks and policies as part of the global community. Feminist discourse, Afro-feminism included, doesn't exist outside of how race and feminism are constructed in America or elsewhere. Our different perspectives all come together; they are intersectional, they work together, and they inform, and should inform, the way we develop and design AI policies and frameworks.

As for how we can put Afro-feminist theories into action, a lot of the time most of the work in terms of feminism happens offline, in the particular frames of thinking that we hold and then inject into the policies and governance frameworks that we design. The African Union data policy framework, for instance, has a section on data justice. When we think about justice, we also think about social data justice, about incorporating different perspectives, and about breaking down how we conceptualize data governance: how do we think about participation, and about the different communities that we want to be impacted by data governance frameworks?

It is about participation and about understandings of the issues. I really strongly think that most of the work is not related to technology or AI per se; it is in how we think about these issues and, when we're in a room and people speak, who we are listening to. Who is allowed to be creative? Who is not allowed to be creative? Who is harmed, and who is not? All of those issues are not part of the technology per se.

That's all.  Thank you.

>> BOBINA ZULFA: I will start with that last question and be very quick, since we're out of time.

A lot of the scholars of Afro-feminism, Sylvia Tamale herself among them, are people who guide the work we do at Pollicy, and I know a lot of our partners do similar work. So I would say the theory is very much incorporated into almost everything we do. It is a blueprint; there isn't that much work around Afro-feminism, so what is there guides a lot of what we do in our academic writing, et cetera.

I will cross over to the question here around coloniality and these technologies. In this particular framework we point out how we see neocolonialism in, say, the data extractive practices ongoing on the continent, and where we address these we call them out as such. As for what can be done, it comes from discussions like these and what we build from them, but there are a number of things we have suggested. Creativity, as was mentioned, is very important: coming up with creative, new ways of thinking around data collection, data analysis, and the practices around AI development across the continent overall, rather than fitting in with supposedly objective, neutral ways of thinking. So it is about bringing in new ways of thinking, with Afro-feminism guiding our thinking around our work right now.

Yeah.  I'm happy to have, you know, this conversation with you outside of the room.

>> AMBER SINHA: I think most of what was asked has perhaps been covered by the responses already. I just wanted to add a couple of quick things.

There was a question on dealing with professionals who are not steeped in the Human Rights discourse, and there was a good point earlier in the session on policy objectives and the hierarchy of policy objectives. A lot of the time, the language of efficiency speaks to stakeholders who may not be as sympathetic to Human Rights discourse. I also feel that the language of efficiency is not used enough, because these technological models are often created in a very Western, Caucasian context and then, largely without much thought, extrapolated to or adopted in another context where they don't really work. Demonstrating how they don't work could be quite useful in those conversations.

On Afro-feminism and race in general, I think we have covered that in the framework in detail. Bringing in my own context from India a little, I think centring anti-colonialism in that conversation becomes very important. A lot of the technological adoption we see, from digital identity to facial recognition software, is very much a repurposing of colonial technologies that we have seen being imposed for some centuries now.

I completely agree that it is very important to ensure that historical context is always in the background of these conversations. We need to work harder to bring it in.

Thank you.

>> IRENE MWENDWA: Bridget, I would like to give you a minute, we have not deserted you.

Do you have something you would like to share?

>> BRIDGET BOAKYE: Yes. Briefly, to add on to Amber's comments around communicating with policymakers.

I certainly agree. Speaking about Human Rights-centred AI, which is how we frame responsible AI, at the heart of it is centring people and their needs first. One of the things we have seen is that we need to communicate the social and economic benefits to policymakers; we cannot stress that enough.

In the private sector, there is a lot of work being done around how responsible companies perform better, even on stock indexes and the like, but that is not being translated into the public sector. How do we quantify and help policymakers really understand the benefits, and the tremendous opportunities, that human-centred, responsible AI affords everyone in having more effective technologies generally?

The second point, briefly, on decolonial AI: there is tremendous work being done by scholars Amber also alluded to, such as Abeba Birhane, and by Sabelo Mhlambi, whose name I may be pronouncing incorrectly, at Harvard, who talk about this. A lot of what we discussed actually plays a role here. When we talk about decolonial AI, it is about investigating power, especially the power of the many foreign companies that own the data and thereby decide much of how we interact with AI.

An investigation of power and of the distribution of resources, as well as the historical context, is key to advancing the conversation further.

I have learned so much from the questions as well.

Thank you, everyone, for adding in.

>> IRENE MWENDWA: Thank you so much, everyone.

This is an official good-bye; we have run 15 minutes over our time slot.

As you can tell, this is a much needed discourse. We would like to invite all of you to follow us on Twitter and to visit our website to see some of the work we have published on this topic. We are Pollicy, with a double L; you should be able to find us.

Thank you for being a wonderful audience. We can network outside.