IGF 2022 Day 0 Event #23 Global Tour of Feminist AI: One Year On

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Well, hello, everybody. Sorry for the delays. We are the A+ Alliance for Inclusive Algorithms and the Feminist AI Research Network. We are very happy to participate again in IGF, now in Addis Ababa. In this session, we're going to share our last year's work in this journey of reflecting on and building feminist AI. We're also going to give a preview of what is coming next year. Join us, share your comments and questions in the chat, and we will be happy to read them and reply.

Let me introduce my colleagues. From Mexico, Paola Ricaurte, from Tecnológico de Monterrey. She's the leader of the Latin America and Caribbean FAIR regional hub. Thanks for being here, Paola, even though it's very late in Mexico.

In Thailand, we have Soraj Hongladarom, from Chulalongkorn University. He's co‑leader of the Southeast Asia FAIR regional hub.

We have in Egypt Nagla Rizk, from the Access to Knowledge for Development Center at the American University in Cairo. Nagla is the leader of the Middle East and North Africa (MENA) FAIR regional hub.

I don't see her here ‑‑ I'm not sure if Caitlin is in the room? Could someone confirm? I think she is not, but maybe she will join us soon. Caitlin should be in Dakar now; she's usually based in Geneva, Switzerland. She's the founder of Women at the Table and the A+ Alliance, and co‑leader of the FAIR Network.

And we have, from our sister network in Africa, Laure Tall and Ernest Mwebaze, who will join the call soon.

We can start by talking a little bit about the Alliance: what the A+ Alliance is and why we are working on feminist AI. To give a brief introduction, we are working on Feminist AI because we all know that AI has the potential to improve people's quality of life. But this technological tool also carries serious risks of reproducing existing biases and colonial structures in the digital world: concentration of resources, opportunities, and wealth; exclusion of racialized and sexualized groups; class stratification around models of production; silencing; and extractivism of many kinds. At this moment, it is crucial to reflect on who is developing AI, who benefits and who is harmed by AI, and from which perspective the technology is built.

Thinking about AI also implies considering the process of producing technology in a broader sense. It's not only the software. We have to think about where the software runs, how the infrastructure was built, who maintains it, who participates in the software design process, how an app or a tool is governed, and who is going to pay for it, among other questions.

There is a limited number of places where technology is created, mostly in California and China, and this means that those cultures are the only ones being codified. Many other visions are not taken into account. The majority of the world is being left out of this conversation, so we urgently need to shift how technology is built.

With Feminist AI, we are thinking about what AI should look like, and we want to implement technology with the vision of feminist movements, marginalized communities, and, in general, other perspectives that dispute the hegemonic point of view on how AI and decision‑making software are built and used.

Now I would like to turn to Paola to go deeper into decoloniality, indigenous thinking, and the Latin American feminist movement, and how these relate to what we want to achieve.

>> PAOLA RICAURTE QUIJANO: Hi, good morning. My name is Paola Ricaurte. I am the leader of the Latin American and Caribbean hub of the Feminist AI Research Network.

I usually argue that technology reinforces hegemonic world views, especially when we think about technology as a way to build the world, as a way to impose a specific model of it. In our case, current technological development reinforces not only gender inequality and racism, but also epistemic dominance.

And in our region, we have the experience of many communities for whom technology is also a way of relating to the world, to the environment. Usually, these world views and epistemic perspectives are not considered in the hegemonic developments that, as you mentioned before, are mostly built by people in the Global North or in China and do not reflect the diversity of cultures, languages, and sensibilities of the people inhabiting other territories.

So I would just like to stress that when we speak about technology, we should not focus only on specific individual harms, but also consider collective harms: harms done to cultures and to peoples in different parts of the world.

>> MODERATOR: Thank you, Paola. We will continue with Nagla. Let's talk about Feminist AI from a MENA perspective.

>> NAGLA RIZK: Thank you very much, Jaime, and hi, everyone. Let me focus on what is unique about the perspective of Feminist AI for the MENA region, and that uniqueness emanates from the region itself.

While there are similarities in language, for example, there are many dialects. There are also wide variations between countries in the level of technological advancement, in wealth, and in the distribution of human resources; some countries are more populated than others, and there are different levels of development and technological achievement. So when we look at technology in that sense, we want to look at the promise, but also the perils, of AI.

This is a region with unique demographics: a young region, with a large percentage of youth, but also high rates of unemployment among young and educated people. Globally, it has the highest rates of female unemployment and the lowest rates of female labor participation.

There are, as I said, unique cultural characteristics: a shared language, but different dialects. It is a region with its own political context, tumultuous in many ways, and with multifaceted inequalities between and within countries: digital inequalities and otherwise, what I call analogue divides.

These are huge challenges, but we can also turn things around and look at the wealth of educated human resources; that wealth can be turned into a richness of human capital, in addition to the other resources available in the region.

So the challenge then becomes: how can we capitalize on this? And this is where AI, where Feminist AI, comes in as transformational. We want to capitalize on the promise of technology. We want to mitigate the challenges of data invisibility of certain groups, for example, and of biases in the language and in the technology. We want to make sure that technology, that AI, is constructed in a way that reflects the realities of the region, reflects its priorities, and mitigates its challenges. We want to capitalize on these technologies and advance the use of Feminist AI as transformational in the region, addressing its unique challenges but also building on its unique resources, as I mentioned, the wealth of human resources.

And to do away with any biases or invisibilities within our making of code, within our making of technology for inclusion. This is how we see it as the MENA hub. As I mentioned, we are recent members of the network, very proud members, and we look forward to working with colleagues and with communities in our region in MENA. Thank you.

>> MODERATOR: Thank you, Nagla. Now let's move to Asia, with Soraj. Soraj, you have been working with a Buddhist perspective on technology. Would you like to tell us about how the Buddhist perspective combines with AI, please?

>> SORAJ HONGLADAROM: Yes, thank you, Jaime, and greetings from Bangkok, Thailand. I'm here at my office, so it's a rare chance to join you during the daytime.

My work has been focused, as Jaime said, on the Buddhist perspective on AI ethics and on technology in general. And this is part of the overall idea of the project on Feminist AI, in that we would like not only to include feminist perspectives and ways to bring women and other groups into the fold, but also to look at non‑Western perspectives and intellectual traditions when it comes to technology.

And my work has been focused on ethics. I'm from the Department of Philosophy, and I have been trying to model a way of thinking about ethics of AI and technology in general on the system of Buddhist ethics, which is a system of ethics that is based on how to realize the good life.

It is quite similar to virtue ethics, where you have a model of perfection, so to speak, of what constitutes the good life, the overall end that we would like to achieve.

And a right action is one that contributes to realizing that supreme end, that conception of the good life. This is the very basic idea, and of course there are differences between Buddhist ethics and virtue ethics, but the overall structure appears to be the same.

And my work has looked more at the theoretical side of things, where the focus is on how to think about ethics and how to come up with guidelines, ideas, and values that should be useful and beneficial when it comes to translating those values into workable guidelines in the field, so to speak.

So I am responsible for the Southeast Asian hub of the Feminist AI project, and I'm proud to be part of this really great group of scholars.

I have a colleague on the project. Unfortunately, she is not available to join us this time, because she has just arrived back from the U.S. She is from the Department of Electrical Engineering, so there is a good mix of disciplines, which will enrich the work of the project as a whole.

I'm from the more humanistic values side, as a teacher of philosophy, and she is from the technical side. But we share a common goal: we would like to see how AI could translate into tools or technologies that not only empower women and other traditionally excluded groups, but also create a way in which we can have more social justice. And, as Paola said, in a way, decolonize our region.

Thailand, technically speaking, was never colonized, but we have been colonized in a way by subtler factors. So we can take these factors into consideration when looking for ways in which AI could contribute to becoming decolonized. Thank you.

>> MODERATOR: Thank you. Thank you, Soraj. Now let's talk a little bit about what happened in Year 1 of the project. The FAIR Network, the Feminist AI Research Network, works in three basic phases. We start with a call for proposals for articles; then the articles present their results, and we select the ones that go on to the prototype phase and then to a pilot phase.

So we go from papers to prototypes to pilots. In the last year, we had two cohorts of papers, and now we have one cohort in the prototype phase.

So let's talk a little bit about how the phases go. Let's start with Paola. What were the projects in the LAC region for the first cohort?

>> PAOLA RICAURTE QUIJANO: Thank you, Jaime. With the first call for proposals, we saw that there is an opportunity to reimagine technologies with different communities, to respond to the problems of those specific communities.

So, during the first year, we worked with three multidisciplinary teams from different countries in Latin America, and these three projects were exploring ways to address gender justice, gender equality, and social justice in general.

For example, one of the projects is about access to justice for women. The Mexican criminal justice system operates in Spanish, and many communities in Mexico are excluded because they speak different languages; 68 languages are spoken in Mexico.

So, in this case, the language, Spanish, reinforces structural bias from the state. One of the projects is addressing this issue: they are using a natural language processing tool so that communities, and specifically women, can participate and understand what they are being accused of. The question this project wants to answer is how communities, and especially women, can have better conditions to defend themselves against the state.

Another project is developing a natural language processing tool for people to explore intersectional biases in Spanish. We all know that many studies have focused on race and gender bias, but there are other intersections of discrimination. This team has developed the tool for Spanish, and they have found that social roles, professions, and physical appearance are also related to gender, class, and race discrimination. The tool tries to make it possible for people without technical knowledge or experience to understand, and to unveil, these intersecting biases.

And the third team was working on a framework to understand the basic conditions we need to meet if we want to develop a feminist AI project or technology. They reviewed many tools, frameworks, and projects, mainly from Latin America, and they are going to use this framework to see whether the new projects coming in the next cohort can apply these guidelines. So this will be a kind of teamwork, where some teams develop tools and other teams help them achieve their goals and meet the feminist principles in their projects.

>> JAIME GUTIERREZ ALFARO: Thank you, Paola. Those were the projects of the first cohort. They started working in the first part of this year, and they're now working on their prototypes. We also have a second cohort of articles for the Latin American region. Right now, they are working on four projects. One is a conversational agent to support interpretation of indigenous languages in Mexico.

Another project is looking at AI crowd work from a gender perspective. They are trying to build tools to support workers and improve the way they work, in order to give them a better quality of life.

We also have a project that is a redesign of the model for formulating data science projects using feminist criteria; they are facilitating workshops in order to create and redesign this formulation model.

And the fourth project is a chatbot. Actually, it's a tool based on AI applied to the monitoring, response to, and systematization of situations of digital gender violence. This project is being developed by a Chilean team, and they are working on this chatbot to identify situations of gender violence and then to send messages to alert authorities or communities.

These four teams are currently working in the articles phase.

We also have another team that is right now in the prototype phase. This one is Derechos Digitales, the one Paola just mentioned. They are now facilitating workshops to try to implement the feminist perspective they worked on in the previous phase, the article phase.

We also have two more teams working in this cohort. They are from Asia; Soraj can tell us more about these projects.

>> SORAJ HONGLADAROM: Oh, yes, yes. We have had two projects funded through the first year of the project. The first one is by a team of philosophers, educational researchers, and computer scientists from Mahidol University in Bangkok, Thailand, and the title is "Virtual Reality as Learning Assistance System for Impaired Women."

So the idea is to develop a kind of virtual reality system that could assist impaired women, women with disabilities, for their learning activities. So it's a learning assistance system.

They propose a kind of VR simulation, a learning system for impaired persons built around games, which can be used as therapeutic tools to help women with disabilities develop their occupational routines.

So it's a fascinating project, and they have already started the real work, which consists of interviewing groups of women with disabilities to find out their preferences, their needs, and so on, to be used when the VR system is developed. They have been making a lot of progress.

And the other project is from the Philippines, from a team of philosophers and computer scientists from De La Salle University in Manila, Philippines, and the title of their project is "AI Empowered Mobility of Women: Sociocultural, Psychological, Personal, and Spatial Factors to Urban Transit Certainty Informing AI-driven Philippine Women Safety Apps."

So the idea is to develop a mobile phone app that helps women navigate the labyrinth of big cities such as Manila safely, because women have experienced and reported a lot of problems in big cities, not only in the Philippines but also elsewhere in Southeast Asia, such as Jakarta in Indonesia or Bangkok in Thailand.

Women traveling alone in these big cities on public transportation face similar problems, in that they have concerns for their own safety when they travel alone, in cultures that are male‑dominated, as we are perhaps familiar with.

So they would like to find out how AI itself could help women when they use it in apps on their mobile phones. Before doing that, they reviewed a number of existing apps available on the market, and they found those apps rather wanting, in that they don't address the real problem of how to empower women so that they feel safer when they navigate the complex road and transit systems of their cities through public transportation.

So their work starts with interviewing a number of women commuters in the Metro Manila area. They have also reviewed the existing apps, as I said, and they are looking at the theoretical features of the kind of app they would like to see developed through their research, and at how it addresses the concerns that emerged from their interviews and from their investigation of how women navigate this complex urban transport network.

Like the Mahidol group, they have made a lot of progress over the past weeks, and we look forward to more projects during the second year of our work on the larger project. Thank you.

>> JAIME GUTIERREZ ALFARO: Thank you, Soraj. These are the six projects we have right now in the articles phase. (Off mic.) Now we are opening ‑‑

(No audio)

>> JAIME GUTIERREZ ALFARO: Hello?

>> SORAJ HONGLADAROM: Yes, I can hear you better now.

(No audio)

>> Hello? Hello?

>> SORAJ HONGLADAROM: Now quiet again.

>> NAGLA RIZK: Yes, thank you, Jaime. Yes, I would love to speak about the plans for the coming months before the call for proposals. As I mentioned ‑‑ can you hear me? Yes?

I hope so.

>> Yes.

>> NAGLA RIZK: Yes, OK. So, as I mentioned, we have recently joined the network as the MENA hub. Our plan capitalizes on our work as the hub for responsible data and AI for MENA, on gig work and new forms of work in the region, and as the North African hub for the Open African Innovation Research Network.

Having also done a number of activities, research publications, and workshops in the area of AI, inclusion, and gender equality, we are really excited to be part of the network. Our plan at the moment is, first, to strengthen, deepen, and expand the regional network for Feminist AI research: multidisciplinary from the start, including social scientists, Civil Society, economists, ethnographers, data scientists, and machine learning experts, and also expanding our reach of partners to social enterprises, incubators, developers, techies, domain experts, et cetera.

We would like to build on our current activities, but also become actively engaged in building the network. We are hosting a series of webinars, the first of which is in a couple of weeks, in the second week of December. In that webinar, we're bringing in speakers from the network as well as potential partners, including people in AI and technology, women we have worked with previously, in areas related to digital inclusion, gender inclusion, AI and inclusion, and the future of the digital economy.

So this webinar starts in two weeks, in preparation for the research activities we will solicit through the call for papers. There are a number of areas I would love to see. As a development economist, I always think of Feminist AI from a developmental angle, from the angle of inclusion and development in its wider sense: development as wellbeing, as quality of life, a better quality of life for everyone, inclusive development. It is in that sense that we think of Feminist AI, as I mentioned earlier, as transformational. And we continue to participate in the weekly leadership meetings within the network.

Our very first step is the workshop we are preparing for in two weeks, and then there will be the call for papers. Of course, we're learning from our partners, and there are incredible areas of participation for the region. My hope and expectation is that the proposals will actually respond to the challenges I mentioned earlier: the developmental challenges, the need we have for a positive and impactful role for Feminist AI. I'm hoping these developmental areas will come up.

But, for the moment, we are starting, as I mentioned, a series of workshops in December; actively engaging with members; expanding the network; and creating, in MENA, a network of researchers, activists, technologists, and interested parties working on Feminist AI.

That's what we are planning to do immediately. And then, for the next phase, maybe later in this meeting I will talk more about what we are expecting as topics for papers, prototypes, and pilots. Thank you very much.

>> JAIME GUTIERREZ ALFARO: Thank you.

(No audio)

>> PAOLA RICAURTE QUIJANO: Hi, yes. I think one difference we have noticed across the cohorts is that projects that include communities from the beginning are the projects that really capture the needs of those communities. That's why we want teams made up of diverse people, but also teams that include the communities these technologies are intended to serve.

So, in our experience, when technologies are developed without taking the communities into account from the very beginning, from the design stage, those technologies usually tend to reinforce the structural differences and discriminations that we are trying to avoid.

So in our calls, we are trying to promote diversity in all these senses. Diversity in terms of disciplines, diversity in terms of trajectories, diversity in terms of the communities that are involved in the project itself.

So I think this is basically the main difference between this call and other calls that are trying to develop technologies: we are trying to embed diversity and inclusion from the beginning, from the very conception and design of the technology.

>> NAGLA RIZK: OK. I believe Jaime would like me to speak about the project expectations for the next call, so allow me to take the next few minutes to do that.

As I mentioned, we are having a workshop in December to announce the network, to solicit interest, and to expand knowledge about Feminist AI and our regional network. Then the call for papers will come out in January, after we introduce the network in December.

We are learning from our partners, so from Paola, Jaime, and Soraj, your experiences are very, very useful for us. We are learning that we would like to have the community involved from the beginning, the Civil Society involved from the beginning.

So, basically, what we have in mind is that the call for papers will go out to different communities, to the larger networks. We hope to have a sort of partnership that brings the community and Civil Society together. I would hope and expect that the topics dealt with will really address the priorities and needs of the region.

So I would expect, and hope, that language biases will come up; we hope to find proposals that include tools addressing, for example, language biases, which are very important in this region. The same goes, as I mentioned in my introduction, for what relates to work in the region.

As I mentioned, we have high rates of unemployment among youth and educated women, and low rates of female labor participation. We also have large informal economies, a huge informal sector in our region. So I hope we find ways whereby the proposals, the papers, and eventually the prototypes and pilots address these issues.

FinTech, for example. Entrepreneurship. Platform‑mediated cloud work and platform‑mediated ground work. The gig economy. Mobility of women, which Soraj mentioned earlier; safe mobility for women is an important issue in this part of the world. Intersectional biases as well, as mentioned by Paola. Family law is also something to be tackled in this part of the world.

We hope, in our workshop, to discuss the different issues and solicit interest, but the priorities, the research questions, and the research issues have to come organically, from the ground up, from those who will propose the papers, which will eventually develop into the prototypes and pilots.

My hope is that we trigger enough interest and enough buzz about the network that we receive interesting proposals addressing developmental and inclusion priorities, and engaging different stakeholders, including the community and Civil Society as well.

So we're very much looking forward to that. And please stay tuned for announcements from this end. Thank you so much.

>> SORAJ HONGLADAROM: Yes. So it's my turn to speak, right? What have we been looking for in proposals during the second year of the project in this region? It reflects what Paola and Nagla have said about the other regions.

Firstly, we would like to see more diversity of proposals. We only had two successful proposals in the first year, and we would very much like to see more in the second year, including proposals from other countries in Southeast Asia, for example Malaysia, Vietnam, or Cambodia, countries which were not represented much during the first cohort, the first year of the project.

As for the content: of course, we are looking for proposals on how AI can be developed in such a way as to contribute to social justice, to equality, to women's empowerment, and to diminishing inequality, including economic and social inequality, and so on.

And projects can look at this broad outline and find their own take on the issue.

For example, the teams from Thailand and the Philippines that I told you about came up on their own with mobility for women and with helping women with disabilities. So they have their own agendas, and it's really fortunate that these agendas mesh with our overall concern.

So yes, we are really looking forward to receiving good proposals from all countries within the Southeast Asian region for the next cohort of the project. Thank you.

(No audio.)

>> SORAJ HONGLADAROM: OK. I just got the message from Jaime. His audio is not coming through to me, so there is a time lag with his messages. An example of decolonial AI: very interesting, very good question. There can be many ways in which AI can be part of decolonization.

One way is to look at colonization as purely political, of one country over another, the colonizer and the colonized. Another way is to look at colonizing in a more subtle sense: for example, something that creates patterns of oppression, or inequality, or the mindset that I am always on the disadvantaged side and there is no way to escape this situation.

I think the more interesting and more important part of the discourse on decolonial AI is finding ways to get rid of this mindset, and AI could help make this possible. That's the challenge for us in the project.

For example, in the mobility for women project, instead of AI accentuating the existing inequalities, AI is being worked on in such a way that it will create ways to empower women so that they can feel safer, and actually become safer.

So that could be regarded as a way of decolonizing AI also.