IGF 2021 – Day 0 – Event #48: A Global Tour of Feminist AI – Who is coding it and deploying it?

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> Good morning, good afternoon. My name is Paola Ricaurte. I'm based in Mexico City.

     For me, at least coming from Latin America, the first starting point is to think about the region itself. Our region is a deeply unequal region. Our governments are not interested in gender equality, and considering all these obstacles, we are trying to imagine how we can build technologies that respond to the needs of the people, and how we can make sure that technology is not only the realm or field of engineers or of educated people in the global north.

     So when we speak about reversing power asymmetries, we are trying to imagine, and trying to materialize, technologies that are built with different values, values that fit the context in which these technologies are going to be implemented, because usually technologies are designed somewhere and implemented somewhere else, and those worlds usually do not connect. They do not talk to each other.

     So considering, as I said, the context we live in, one of our purposes is to generate conditions so that historically excluded people can participate not only as data providers but, as part of the governance model, be involved in the design and production of technologies that respond to their needs.

     So diversity and equity are, for us, fundamental dimensions for achieving autonomy, social justice, systemic justice and dignified lives.

>> I think that's beautifully put.

     I'd also like to add the idea of coming back to the beginning, to the conception of this work, and actually conceiving AI for a positive purpose: not only inclusion, but really using its power to shape and help the lives of the people, with the people who are going to be using it. I know that a little bit later you're also going to be talking about indigenous AI and decolonial AI. We're going to keep moving around the world so all of us get a chance to talk, and I'm so happy all of our colleagues managed to get into the session.

     And now I'm going to head over to Raya Sharbain.  I'm sorry, Raya.

>> So my name is Raya. I'm from the Jordan Open Source Association, and we're one of the partners in this important alliance. We're a civil society organization rather than an academic one, so I think we have an interesting project.

     I think, as Paola said, AI and algorithmic decision-making are these quiet monsters embedded in our everyday lives, whether we notice them or not, even for us in Jordan and in the region: from the applications we use on a daily basis to the systems that governments deploy, such as identification programs and so on. These are things we see on a daily basis.

     And as everyone has said already, there's a great power asymmetry between who is developing the technology and who is using it, and this is something we're trying to bridge.

     There are also asymmetries in the sense that the people who are developing the technology are in one part of the world, and those who are using it are in another, the part of the world we like to call the global south.

     If we talk specifically about the MENA region, we experience a lot of inequality, especially gender inequality. Women have the lowest participation rates in economic and political life, let alone in any form of decision-making around technology. So a question we're trying to answer through this project is: how can we build a more inclusive AI in environments like that? How can we bridge academic silos and surpass, let's say, bias in AI systems? There are some existing initiatives that we hope to be able to support through the A+ Alliance, and perhaps also sow the seeds for a better future in the region.

>> Very well said. 

     We have Soraj Hongladarom.

>> Thanks a lot, Caitlin. I'm so glad I was able to get into the session; I was actually just admitted as a participant after something happened. I agree with Paola and Raya, and my region is not too different from theirs.

     Women are disadvantaged in many ways, and I am aware that I'm talking about this as a man, but my research has been on the ethical aspects of AI, and I direct the Center for Science, Technology and Society at a university in Bangkok, so I come from a different area from Raya: she's from an NGO and I am in academia, so we complement each other. I try to contribute the philosophical and ethical aspects to the project, which is a wonderful project. I am so glad and honored to be a part of it.

     As for the situation in Southeast Asia, many countries in this region are in turmoil. We look at what is happening in Myanmar, and we look at what is happening in my own country, Thailand: these countries are experiencing deep transformations, which are not only political but also economic and cultural, and so on in many other respects.

     So we believe that AI can become a force for good, contributing to more equal and more inclusive societies. I think there is a way for us to harness the power of AI so that, instead of becoming a tool for the rich, so to speak, or a tool for widening social, gender and economic gaps, it can become a force to do the opposite: to reduce those gaps and to create a more inclusive society, not only for women but for everyone in need of better inclusion.

     So I'm very glad to be a part of this and, you know, I'm also glad to have been able to join this session finally.

>> Yes, thank you, Soraj.

     You will notice, of course, that among our team members there are men and there are women. We are not only geographically diverse; we're diverse in many ways, and that's actually the notion of feminism. Feminism, at its core, is about equality.

     It's equality for women; it's equality for all of us, in all of our intersectional glory. That's the first thing I want to say: it's inclusive. Everybody is welcome, and we consider this a joint project for all of us, to deliver equality outcomes and to correct for historic inequities.

     We also think there's a tremendous amount of work here. AI is moving at velocity and at scale; you're all technologists, I would assume, at the Internet Governance Forum, and it's moving in ways none of us had imagined.

     There's been a tremendous amount of research about mitigation, and that's very, very important. But as activists we also looked around and thought: do we really want to fight 24 hours a day, 7 days a week, just to mitigate the bias, so that the bias remains at the level it has been historically for hundreds, if not thousands, of years, in terms of power inequities and imbalances in economic and social dynamics?

     So we thought that as part of our project we would not merely mitigate for bias; we would try to take this moment, which could be a fearful one, and turn it into a positive one, to say: let's take a step back and use this time to think about how to actually create technology that is going to serve the people it's meant to serve.

     How can we use this moment of digitization, when every nation on earth is digitizing, no matter where they are on the AI sophistication scale? Everybody is moving to services that rely more and more on machine learning. How do we ask everybody to stop for just one moment and, instead of taking their land tenure laws or their pension subsidies or their university allocations or their conditional cash transfers and trying to make them bigger and better at scale, with more data, to really think about what is at the core of why those services are being provided, and whether they are being provided in the most effective way for the most people, instead of merely efficiently?

     In other words, instead of just thinking about traffic lights, smart cities, bigger traffic lights, we're trying to think about how we really serve people. I just wanted to share that, using my prerogative as a moderator.

     I would like to turn to Paola for a second and talk about how you see this project, when we talk about intersections, relating to your work on indigenous AI and decolonizing AI. We see them as very interrelated, and I'd love your take on that, Paola.

>> Thank you very much, Caitlin. 

     Yeah, we think that different ways of seeing the world are not reflected in the development of technology. Currently, technologies reflect a hegemonic worldview that reinforces racial and gender differences, so we're trying to emphasize that technologies are biased, but not only in the sense that, of course, they predict better when the population they refer to is white. We are also trying to think beyond the actual bias once the model is put in place, considering that technology, especially AI, is a way of building the world. The models that are used reinforce racism and gender differences, not because of the model itself, but because technologies embed those values, those racist and exclusionary values.

     When we speak about the decolonial framework, we speak about the sediments, let's say, that remain from the colonial experience to this day. The category of coloniality of power, a term coined by the sociologist Aníbal Quijano, names the logic of exclusion that was used to sustain the supremacy of the west through the colony, through racial and gender difference, and this process, this logic, is still taking place today.

     So this idea of coloniality explains how colonial power relations are still in force even after the end of historical colonialism, or in places where colonialism is still a current process, and it's important for reimagining differently the way we can create and build technologies.

     So I'm trying to put my ideas together. 

(Laugh.)

>> We're trying to make sure that when we think about technologies, it's not only about data or models but about the technology itself, the actual technology as a tool that reinforces racism or gender differences, or other intersectional differences, of course.

     So, thinking in a macroscopic way and in a micro, personal way, we're trying to develop or support projects that consider these structural problems, this structural violence that we are living across former colonies, and we're trying to make sure that people who have been experiencing colonialism for centuries have the opportunity to use resources to develop their own technologies.

     In that sense we can, for example, support technologies that are developed by indigenous communities in their own languages, or by people who are not considered by the state: migrants or refugees. So we're trying to focus our efforts on supporting those technologies that are not usually considered because they are not, for example, a product for the market.

     They are not, as Caitlin said, considered part of the logic of efficiency, so I would say that our purpose is to imagine and create technologies that are meant for these historically excluded populations.

>> Yeah, I totally agree with you, Paola. 

     In addition to that, I think we're also excited about exploring the wisdom of some excluded populations in the ways they allocate resources, and seeing how the math might be applied to some of that.

     So take any sort of collectivist sharing of resources, whether it's indigenous, whether it's traditional, or whether it's the Paris Commune from 1789, wherever it comes from: something that might have worked for 30 people really brilliantly, really kindly and really effectively maybe was not able to scale, and we've been in a period, historically, where scale is good, big is good. But we still might be able to take one of those models and apply the math to it, in a way that could now take a county or a region or a country and allocate resources in a way that's perhaps more just and more effective for the people. So we're looking forward to new models, but also looking to the past to see how some of those models could be applied together with the power of computers.

     I would now like to turn to Jaime to continue with the ideas that we're trying to share.

>> Yes.

>> Yes, we're open to new uses of algorithms for decision-making, of AI, with different perspectives and, you know, in a broad sense: from indigenous perspectives, from decolonizing perspectives.

     Also from marginalized groups' perspectives. We are looking for proposals created by multidisciplinary groups, where people from technology, people from the humanities and people from communities work together. The process should be a dialog among these three parts to create new approaches to technology.

     It's not redoing the same technology for a different question, but building a completely new tool. This is what we are trying to approach, and it includes lots of possibilities, as Paola mentioned; there are many different people who could be interested.

     One of the possible areas we could work on is gathering data, because some of these tools need data to make decisions. AI needs data, an algorithm that makes decisions needs data, and there are lots of people who are invisible in the world.

     For example, people from informal settlements are not visible because they do not exist in the data held by governments. There are technologies, like open source and open data, that could enable communities to manage their own data, so that the data can be reused in algorithms that work the way the communities would like decisions to be taken.

     We're looking for these kinds of projects, projects that bring together technologists, people from the human sciences and communities, and that aim to create tools that improve people's quality of life.

>> Thanks, Jaime. I'm going to bounce over to Soraj for a second.

     Soraj, you're a philosophy professor. We didn't mention that in the opening, but you opened up this idea of Buddhist AI alongside indigenous AI. What do you think are the complements between Buddhism and AI? Where do you see them?

>> Yeah, that is something I have been working on for quite some time. Coming from Thailand, where 95% of the people are Buddhists, we Thais naturally think of Buddhism when we come up against intellectual problems, normative questions, difficult philosophical problems and that kind of thing.

     So when AI has become the buzzword, something that has become very powerful and has so many potentials, it is natural for me, at least as an ethicist and a philosopher, to ask whether Buddhist philosophy and Buddhist ethics have anything to contribute to the global discussion on the ethical aspects of AI.

     And I think what my colleagues in the same area and I have been doing is part of the contribution of the non-western intellectual traditions, so to speak, to this global dialog.

     Now, you see, philosophy has traditionally been a rather exclusive discipline. I mean, it tends to harbor the traditional white and male privilege, and it tends to focus, I think more than other disciplines, on the west, and it stands in need of being expanded. That is good for everybody; it's good for philosophy itself.

     So when there are these ethical problems, it is natural, and I think it is proper, for philosophy and ethics to be expanded, to become more inclusive, as you said, in the sense of taking in intellectual resources from other areas of the world. Buddhism has been the dominant intellectual resource in Asia, so it is natural for Buddhist ethics and Buddhist philosophy to become a part of this global dialog, and I believe they have a lot of substantial answers to contribute to it.

     One of those is the emphasis on interdependence. That's a keyword in Buddhist thought: interdependence, which means nobody is left behind; everybody is dependent on everything else, so we are, all of us, in a web, so to speak. Any action taken in one corner of the world has repercussions in all other areas.

     With that key idea, we can draw some ethical conclusions, including about the role that AI needs to play, and should play, in creating a more equal and more inclusive society where everybody is taken care of, which concretely expresses the idea of interdependence. That is one of the contributions that Buddhist ethics concerning AI can offer to the world, to the global dialog.

     Another idea Buddhism is founded upon is not harming others: the idea of ahimsa, which is an Indian word for nonviolence. Not harming others and providing benefits to others come together, and this can be an antidote to an exclusive emphasis on the kind of technical development of AI that I think we don't want to see happen: the kind that, instead of creating a more equal society, tends to do the opposite, creating and widening gaps, emphasizing economic maximization and benefiting only certain groups. So Buddhism can contribute quite significantly in these respects, I believe.

>> We're very much looking forward to that.

     I'm going to move over now to Raya, whom, perhaps unfairly, we've made our youth spokesperson and activist spokesperson in an extremely youthful region. I'd love to hear your ideas about how young people need to be included, and how they're going to be able to change the trajectory of the ADM, the algorithmic decision-making, that we now have.

>> Thank you, Caitlin.

     Well, you're right. We're a very young population. In Jordan, for example, more than 60% of the population is under 30 years old, which is quite a young population.

     So I will echo Paola's words on the need to put an end to colonial effects, because they are one of the things that have had an impact on young people here in the region, and for that I will draw on some issues that we as civil society in MENA have been working on.

     One of the most pervasive forms of algorithmic decision-making in AI technologies is actually content moderation. It's present in most social media networks, and a lot of young people in the MENA region use social media on a daily basis, so they are affected by this algorithmic decision-making and by how their content is moderated.

     Facebook is one of the most used websites; for many people, Facebook represents the entire experience online, and for many young people, Facebook is the internet.

     To give an example of content algorithms: a Palestinian man wrote "good morning" in Arabic, and the AI tool that translated his post rendered it into English as "attack them", which categorized him as a dangerous person; Facebook therefore removed his account. This created a whole uproar, because we as people from these regions, especially young people, are quite tired of being called dangerous or categorized in these stereotypical ways.

     Another thing that happened: a page belonging to a group of young refugees marked a historical event, the one that had made them refugees in the first place, and the page was taken down by Facebook's content algorithms. We had to go to Facebook and explain that this was not creating anything dangerous; these are young voices trying to promote a message, trying to use this online platform to express themselves, and they deserve to have it available without restrictions. So entering into conversations with these social media companies is necessary.

     What's even more necessary is drawing up AI policies that take us into account: us as young people, us as people living in this region, domiciled in this region. These are just a few examples of the many I could give, and what I personally hope to see come out of our alliance is our voices being heard in the first place.

     Hopefully, after the three years of this project, and hopefully it will last longer, we will no longer be ignored, and we will have a more sustained platform. Thank you.

>> Thank you, Raya.  Thank you.

     I love the specificity you brought there, and you also reacted to what Paola had said. Paola, I'd like you to talk about some initiatives in your region that are either working positively, or that you would highlight as potentially dangerous, whichever way you want to go. Actually, I'm going to start with one and then let you talk about the ones you want, because I think one of the things we also need to be aware of, as researchers and as makers, is unintended consequences.

     For example, in your region we know there was some work that tried to do good by identifying young women and girls at risk of unwanted pregnancies, and that was great; the intention was to help them.

     But what happened was that the data was weaponized: this collection of data, which obviously was not gathered in a very thoughtful manner, has been used by anti-abortion groups to target those very same women who were said to be at risk, and to harass them into thinking that they have no choices. I think that's also something we need to be thinking about. It isn't only the historical data that comes to us through hundreds of years of assumptions; it's also the way we are making our own algorithms, with only the best intentions, that are still perpetuating harm or being weaponized by other groups to do exactly the opposite of what our good intentions were. So I'm going to turn it over to Paola, who's the expert on all of this.

>> Thank you, Caitlin.

     Well, you are speaking about the complexities of developing and thinking about technologies so that they cannot be used to harm the very populations they are intended to help, and one of the things we have to evaluate is exactly what you say.

     These populations are vulnerable. Is the data we're collecting going to put them at a different risk? Because, as I said at the beginning, in our countries, in our region, we face many obstacles: we don't have proper regulation, and we don't have much technological infrastructure to secure the data, for example, to keep people safe. Considering all these obstacles, is the technology we want to build a technology that we can guarantee, if used, is not going to be harmful for those communities?

     And, for example, I don't know if it was Soraj or Jaime who mentioned that there is a lack of data. Of course, there is a lot of data collected by social media companies and corporations, but data for crucial problems is lacking. Yet when we want to collect data, we also have to think about the problems associated with that collection, because if you have a very small indigenous community or, as you said, a group of women who can be put in danger by collecting their data, then that technology can be harmful for them, and probably they wouldn't want that technology at all. Those are the issues we have to think about when we say that we want to develop feminist and decolonial AI.

     The technology, the data we are using or collecting, the models we are deploying, the actors using that technology, for what purposes and for what goals: all these dimensions across the whole AI lifecycle should be considered, not only deployment but also data collection, if we really want to guarantee that this technology is not going to harm the people who are usually harmed. It's a very complex problem. As Raya said, the effects of content moderation are one of many examples, but there is another: the worry that these technologies use the labor of people from marginalized populations, so labor rights are another issue. There are many, many issues to consider if we truly want to develop AI that is feminist and decolonial.

>> Hear, hear. It goes beyond a better app, to what all this tech should be used for.

     Jaime, do you want to add to this?

>> Yes, I think in the process of creating technology there has been an approach of doing things and seeing what happens. But "see what happens" means other people see what happens, and when we see what happens, someone is already at risk, and we have to run and look at how we could mitigate what we created. Given that technologists are very creative, I think the key element here is to consider all the side effects of our creations, and not only to see what could happen if we do this. We have to change that approach and make more responsible technology, to think more about what could be done with a technology before giving that technology to people.

     I think it's something we have to include in the universities, in the places where people learn how to create technology: to think more and talk more about the collateral effects, and not only about the project we are trying to create.

>> Yeah, I couldn't agree more. I mean, the research shows, as we've discussed before, that people who are drawn to technology often come up through the hard sciences, and I don't even like to call them the "hard" sciences; right away we're saying they're tougher, stronger, more durable.

     People drawn to technology are drawn to things that appear to be neutral, that appear to have an answer, and in those disciplines they are often not trained in the kind of nuance, the kind of critical analysis and the kind of 360-degree approach to a problem that philosophy students or anthropology students might be exposed to and trained to think about. Those types of problem-solving need to come closer to the center: the social sciences thinking about the logic, because technology does need some sort of structured form, though maybe we'll change the forms; and the technologists taking up the critical analysis that the social sciences naturally explore.

     But now I'm going to go back to Paola for a second, because we had this call for proposals and received a lot of very interesting responses, and I'd love for you to talk a little bit about what you've seen and heard and what you're excited about from the call for proposals.

>> Well, I'm really excited that, first, we're going to have the opportunity to imagine these new technologies for specific communities. We received many proposals from around the world, especially from Latin America, proposals that are, as I said, related to actual problems of specific communities.

     For example, the Mexican criminal justice system uses Spanish as the official language, and as you know, Spanish is a colonial language, and many communities in Mexico do not speak Spanish; well, it's a long story, but they don't speak Spanish.

     And when indigenous communities go through a judicial process, they don't understand what is being said, and as we know, marginalized communities are usually convicted because of this violence of the state. One project is going to address this problem of language: they're working with a natural language processing tool to make sure that communities can participate, can understand what the process is about and what they are being accused of.

     So I think this is one of the examples where we can put these values in place. We're trying to help a specific community that has not been paid attention to by the state, or by companies, by AI companies in particular and by technology in general, because communities that speak a different language, one that is not a colonial language, are completely excluded from everything. So in this way, using a specific technology with a specific language, we are going to try to see if this community can have better conditions to defend themselves against the state, for example.

     So I think this is one of the projects that I'm most excited about, because it's working with a language that's not usually taken into account by the Mexican state, and this multidisciplinary group is working with the community to address its specific needs.

     For me this is one of the examples where we can actually see that it's possible to join forces to address one problem of one specific community.

>> And I think we're also, well, all of us are really excited about this one, because the methodology and the way it approaches the problem is something we think can be replicated. It isn't going to be a solution for all the other languages of the world, but it might be one little crack in thinking about how these other languages, these indigenous and minority languages, could be interfaced with the language of the state.

     Ingrid, you turned your camera off; maybe it's a little unfair to say hi and ask you to jump on. If you wanted to talk a little bit as a digital anthropologist ‑‑

(Laugh.)

>> And talk about the call for proposals. There's been a request from colleagues for you to explain that.

>> Hello, everyone, greetings from Cape Town. Yes, happy to jump in and explain a little bit about the call for proposals, which was carried out over the last two months; the selection has just happened and is underway.

     What was really, really exciting about it is that it is one of the first calls for proposals and research collaborations across so many regions of the world in an interdisciplinary capacity. It brings together so many different fields that are otherwise completely siloed, fields that even in the context of a particular university often don't have the opportunity to come into dialog. So it really creates a new space for imagination, for dreaming, for thinking, for coming into new logics and new languages, really new languages, because what we often see is that we talk about language translation, but even within one language there are many different ways of explaining concepts, especially relating to technology, whether one is talking about the social side or the technological side, or even from an engineering perspective.

     So we had many, many wonderful submissions to the call for proposals from across the regions, from the colleagues who have been engaging there, everyone you've just heard from and the wonderful networks they're convening. We will be announcing the awardees very soon, and there will also be the opportunity to apply to this call for proposals on an annual basis for the next two years.

     There will be a total of nine papers selected, on average, per annum, and from the papers there will be a process of developing a prototype, which will allow these designs, concepts and experiments to come into a real-life, actionable, tangible model. That will then go to a pilot stage, so from the papers, prototypes and models that are created, there will also be the opportunity to pilot those models in communities.

     What's also very exciting, looking from the feminist decolonial perspective, is doing this in collaboration and in cooperation with communities. I'll stop there, but I just wanted to emphasize how incredibly exciting and imaginative this process will be. I think everyone is really excited to see what will come, because it's never been done before, and it's incredibly necessary at this time in history.

      So thanks, Caitlin.

>> Well, thank you, Ingrid.

     And we're also going to put out a targeted call in the new year to colleagues in MENA and Asia, to look for more of these applied models.

     So the idea, as Ingrid so eloquently said, is that these are not projects that describe the harms of AI. This is applied research that is meant to go from an idea to a prototype to a pilot, if possible, and it's for dreaming. I love what you said about dreaming of machine learning that may not have the data ready at the moment, in which case we would be ready to help support collection of the right kinds of data, in the right kinds of ways, so that machine learning models could utilize it.

     We're really trying to change the trajectory of the system, we hope a little tiny bit, and we're very excited to have you around the world join us in this, because it's going to take all of us changing the way we think to bring it on home.

     In the last moments we have, and I know it's been hard for a lot of people to sign on, there have been a lot of technical difficulties, I think we'll do a sort of tour de table of colleagues, and everybody can give their final words: what they're hoping for, or what they wish they had brought up in the first rounds of talking.

      I'm going to start with Soraj.

>> Yes, thank you. 

     I think there are some challenges for us in the region, because to start with, not many people submitted proposals from Asia, and those that were submitted did not make it, for various reasons. We do hope that come next round, people will be better informed about what we expect good proposals to be, and we'll do some more outreach programs so that more people become interested.

     There is a lot of interest, no doubt about it, in AI in Thailand and in other countries in Southeast Asia.

     A lot of groups are talking about AI, but they tend to have a rather, what should I say, narrow view of it, because most of them are technical people and business people, and they want AI to do business, to make them richer. There's nothing wrong with that, but I think we need a more expansive perspective on what AI can do, and we need the kind of AI that, as Paola very aptly said, is both feminist and decolonial. I like this phrase because it fits with what I talked about earlier.

     It also speaks to the need to include non-western perspectives in the global dialog, so I'm optimistic.

     And since there are so many groups in the country and in neighboring countries talking about AI, it would not be too difficult to link these people together; after that, more interdisciplinary teams and proposals can be submitted. So, yeah, we are hopeful about what will be coming next year.

>> Thanks, Soraj, thank you.

     I'm going to go to Raya. It doesn't have to be about the call for proposals; it can be. It can be about your dreams, your wishes, whatever we've left out of the conversation so far.

>> Yes, I will tell you my wish list, what I hope will come out of this alliance and materialize through it.

     I hope we see technologies and prototypes that understand us better: that understand Arabic better, its many dialects better, and the minority languages, like Paola said; we have minority languages here too. I hope to see technologies where, when I write something as simple as "I am a doctor", it isn't automatically translated as "he is a doctor" but takes "she is a doctor" into consideration, because currently that is how Google translates it. So this is my wish list. I also hope to see more people involved in policy-making. And whatever ID systems are being deployed across the region, because they are being deployed, and they are being deployed on very vulnerable populations and refugees, I hope they will be more just systems that take into account the various parts of our populations. Really, this is what I hope to see come out of this alliance. Thank you.

>> Thanks, Raya.

     Okay.  Who's going to follow that, Paola?

>> Well, of course, I totally agree with the wish list of Raya. 

     I would like to see a world where technology is not used as an oppressive tool, and of course I'm excited about this project because, at least for us, this is the first time we're going to have the opportunity to challenge ourselves and see how our dreams and our imaginaries of technology can be materialized in pilots, so I'm super excited about that.

     And as Soraj said, it's not only about the technology; it's not only the material that is important. It's about the epistemologies and the worldviews that are embedded in technologies, so we want to be part of that process. We don't want to be, as Raya said, users who are not considered for anything except giving away our data.

(Laugh.)

>> So, yeah, that's what I'm excited about.

>> Yeah, I'm going to bring up Marshall McLuhan here, who said we shape our tools and thereafter our tools shape us. And here we are making the most powerful set of tools yet, probably as transformative as the plow, which, as we now know from a lot of feminist research, actually only worked for men, or the telegraph and the telephone and all the things that killed the tyranny of distance.

     We're making tools now, and we have to be very thoughtful about who can use those tools, for good and for bad; we haven't even talked about the political dimensions of this. Who can use those tools, in terms of the marginalized and the powerful? And who can use those tools in ways that are actually transformative, really helping us till the soil, plant the crops and grow things that are nourishing and feed all of us? That sounds like a closing statement, but I haven't talked to Jaime yet, so I'll turn it to Jaime and then wrap up for us.

>> I'm also very happy at this point, because we are just beginning and we are already having really great conversations. I think that at the end of this journey, and it is full of challenges, we are going to have a more precise knowledge of how we think technology should be approached in order to generate a better quality of life for people.

     I think one of the important elements at this point, when we are in this kind of dispute over how technology should work, is that it's really important to have the opportunity to include in the discussion a lot of perspectives from the global south. And maybe Caitlin could add a little bit, because we also have a sister network in Africa, yet another perspective that will be included. I think the network as a whole, in every region and also at the global level, is going to be a very nice opportunity, not only to create theory about how we should shape our tools, but also to create the tools in action research projects. So I am very excited to see the final results that we are going to ship at the end of the next three years. Thank you.

>> You're welcome. I'm going to ask Ingrid one more thing, and then I will talk about our sister network in Africa as we wrap up.

     Ingrid, since you haven't had as much of a chance to contribute as a digital anthropologist, do you have some words to add?

>> Yeah, sure. I think this is a really important moment for anthropology, and for the field of digital anthropology coming into existence, particularly in the context of anthropology's own reckoning with its history: its role in the context of colonization, and the learnings and lessons from the histories of various disciplines that have been used in contexts of structural violence, of extraction and of suppression. I think it's a really important time for this cross-disciplinary learning and global learning about worldviews: how people are represented, and how people are able to self-represent, in systems that are so ingrained and embedded while we are constantly mobile, changing and evolving people. And I think that's really what's exciting: looking at this time of change and cooperation for us.

>> Yes, I echo that excitement, and I echo this moment. I do think it's a historical moment, where we're creating this technology at great scale while our notions of democracy and inequality are being rocked and re-investigated.

     Our notions of race, of caste, our notions of feminism; I think of colonialism as being patriarchy, state-sanctioned patriarchy, that's my point of view. I think that we can rethink almost every system we're involved in, and this project is one small way, one small corner, many small corners of a very large globe, of rethinking how some of these things are going to be.

     We do have a sister network that's part of AI4D in Africa. We're really super excited about their joining us; they just could not be with us here today. They're based in Senegal, Nigeria and Uganda: technologists from IPAR, Sunbird AI and CSEA, which is an economic think tank. We're going to be working with them; they have a gender and inclusion network that's part of this larger IDRC project that we're also involved in.

     We owe a great many thanks to our visionary funders, who have brought us together and allowed us to really imagine, to dream and to create together, so we're very grateful to them. And I'm going to give a shout-out to our program officer; this is like the end of an academy award speech, but yes, it's very well deserved.

     We thank you for being with us today, those of you who were able to get on and join us. For those of you who will see this on YouTube and want to come join us: we're at A+alliance.org. You'll be seeing more and more of this applied research, this action research we're embarking on starting next week, in a very large and very interesting way, and we want you to join us, to hear your ideas, and to help make new systems and make technology better and work for all.

     And with that I'm going to sign off. Thank you, it's such a pleasure to be amongst my colleagues; I've had the best time, and the best job in the whole world at the moment. Thanks, everybody. I'll speak to you later. Ciao.

>> Thank you very much.