IGF 2021 – Day 2 – Lightning Talk #32 AI vs. Our Bodies

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

     >> JUAN PAJARO VELASQUEZ:  Hi, everyone.  My name is Juan Pajaro Velasquez.  The UMUT organization is based in the United States and studies topics related to artificial intelligence.  I'm from Colombia.  My background is in communication studies and cultural studies, and soon I'm going to start a master's in AI ethics and society.  That's the main topic I'm working on right now. 

     So you know who I am: I'm a nonbinary person and I use they/them pronouns.  At every Queer in AI event, we have a code of conduct we like to follow.  We want everyone to feel safe in these spaces. 

     So, please follow these rules, and I think you will feel safe here.  If anything happens, please speak to us about it. 

     So, something about Queer in AI: it is a volunteer-run advocacy organization for the inclusion of queer scientists and queer people in STEM careers in the United States.

     Queer in AI started in 2017, at a conference that takes place every year on neural networks and artificial intelligence. 

     We realized at that moment that we needed more representation for queer people in this field.  Our mission is to build awareness of queer issues in artificial intelligence and machine learning, to foster a community of queer researchers, and to celebrate the work of queer and trans scientists.

     What we do right now is mostly workshops and socials at AI, machine learning, and natural language processing conferences.

     This is the first non-AI conference we are at; it's an organizational goal to participate more in non-AI conferences.

     We've seen that this way we can reach more people and let them know what we are doing.  Right now, we are going to present research that we are developing at this moment. 

     One of these research projects is the results of this ‑‑ oh, it isn't showing there.  The demographic survey is meant to identify and shape the future programs we're going to develop at Queer in AI. 

     Certain groups are not well represented.  20.5% of respondents are Black or people of color, 10% mention that they have a disability, 5% say they are graduate students, and 20% raise concerns related to their names. 

     After that, we realized that about 80% of the people who responded to the demographic survey say they don't have any representation in their fields, whether in scientific fields or in academic affairs. 

     Now, I want to explain what we are going to share today.  It is one of our research projects, which we call AI vs. Our Bodies.  It came out of a question from the pandemic, when we realized that some queer posts on social media were being blocked. 

     We started with a question about these moderation tools: how they are a threat to queer people, and why they affect queer people and women. 

     During our research, we saw that these platforms don't have many such tools, and we started to ask: who are they regulating for?  That's when we realized that the social media platforms are not actually regulating for women and gender-diverse people.

     They're mainly regulating for white, cis, male persons.  They're only regulating for them. 

     So, from that moment, we started to think about what we could do about it, and what we would have to do to change that. 

     So, after about two months of conversation and trying to figure out what to do, we came up with the idea that a new epistemology, one that combines social science with technical backgrounds, could be something that addresses the barriers gender-diverse people and women face in the development of artificial intelligence. 

     So, we asked: how do we achieve this epistemology?  We said the most important thing is to gain trust in this artificial intelligence, because we need to trust that its bias will be as small as possible.

     Better moderation, monitoring, and advocacy are the three main points for gaining trust in these kinds of technologies, especially when we talk about gender diversity and women. 

     So, we came up with participatory design: the design part should include not only technical people but also social scientists, and especially people who identify as gender diverse, trans, and women.  It is totally necessary that, from the data and design stage, we start to think about how these artificial intelligence systems are going to look.

     The third point we thought about is a gender approach to this kind of solution, including the AI.  We tried to think about it and realized that pretty much the way we know the world is based on science that is produced and developed by men. 

     So, everything that is "hard" is understood as science, and everything that is not is treated as non-standard science and, as such, gets compared to the feminine. 

     So, we are basing our development on that difference: the side seen as "soft" should also be included in these kinds of technologies. 

     That is when we, let's say, in a really risky way, devised a queer epistemology for artificial intelligence.  We envision this queer epistemology for artificial intelligence as a trans analysis where we combine decolonial theories and natural language processing.

     Doing those four things, we decided, could be the way to resolve some of the bias towards gender-diverse people and women. 

     So, this means that not only technical people should be the ones doing these things; social scientists must be part of this process and should communicate with the technical side about how this technology should be developed, because it has an impact in real life.  People are affected by it.

     We see examples every day, especially right now with facial recognition and trans people, pretty much everywhere in the world. 

     In India, for example, during the pandemic, some trans people could not access health care because of facial recognition software.  That's a problem we need to face.

     We think that, besides building trust with better monitoring and advocacy, we need to think about this from the design stage, the development stage, and the implementation stage.

     We need to be doing this all the time.

     Biases are always going to appear.  If we want to achieve better technology, we need to be constantly monitoring and constantly addressing those barriers. 

     Finally, this is more related to artificial intelligence and governance: it could be a solution to address the barriers facing gender-diverse people and women. 

     And that was pretty much it.  If you have any questions about it or want to make some comments, I'm here to hear all of them. 

[applause]

    

     >> Hi, my name is Veronica.  I'm part of the initiatives here in Katowice, and first I want to thank Juan for this initiative and for this work.  My comment involves a question: since, in the European Union as well, we use these new kinds of approaches, like privacy by design and human-centric AI, could we also talk about inclusiveness by design in this case?

     >> JUAN PAJARO VELASQUEZ:  Yes, this is like the first resource we're developing, and we included that part because it's important.  When we mention the design part and the development part, the major objective is actually that: inclusiveness by design.

     >> VERONICA:  Okay, thank you. 

     >> My name is Catarina.  My question is: as I understand, you're a nonprofit organization.  Are there any data scientists or developers in your organization, so that you could also develop a tool that you would be satisfied with?  In case you find some system that totally disappoints your values, you could compare it with something you developed and show, for example where there is disagreement, that there is some solution you could accept that is better, or at least "better" in the sense of some other way of resolving such conflicts when they arise.

     And anyway, they will arise at some point. 

     >> JUAN PAJARO VELASQUEZ:  We're developing tools for name changes in citation systems, because when you are trans and you change your name, it's really hard, in scholarly publishing, to change your name quickly.

     So, we're conducting, right now, an experiment to find out how fast these scholarly systems let you change your name when you want to.  We're developing that tool right now.  It's one of the solutions we're making. 

     >> MICHAEL:  I'm from the Science and Technology Agency.  A question on what you mentioned: from the policymakers' point of view, would it be beneficial to introduce, for pretty much all AI-related products, a system that could continuously update the machine learning behind facial recognition as a whole?  Would that solve the problem? 

     Hopefully, for everybody. 

     >> JUAN PAJARO VELASQUEZ:  That's a harder question.  If you ask me, I don't see facial recognition as a technology that we should use, because it's always going to create barriers for gender-diverse people.

     Even with data, there'll always be barriers.

     >> MICHAEL:  At the moment, in the current state of the pandemic, we're trying to get away from having to approach and touch things, you know.  We're trying to use facial recognition as a very efficient method, in general.

     But from your point of view as an NGO, what could you bring to policymakers?  What would you expect them to implement as the AI strategy for the country?  To actually include ‑‑ is it continuous learning, where you use, I don't know, your phone every now and then to take a snapshot and it's uploaded? 

     You know, I'm talking very broadly here, but where we are at the moment in the AI universe, from my understanding, is that we're designing policies for countries.  So this is the time to speak up about trying to avoid transgender inequality, by trying to make policymakers actually implement a system that lets you update your legal documents or legal records using technology that, at the same time, supports reducing the spread of the pandemic.

     That's basically what I'm trying to say.  So maybe the outcome from this should be a document that helps them understand what the issue is, and how to make sure that, even just because people get older and older, you need to update your documents anyway. 

     My grandma looks different than she did 40 years ago, when she last had her picture taken.  I'm just coming from that point of view, from that standpoint.

     >> JUAN PAJARO VELASQUEZ:  Yeah.  On the policymaking point, I think it's a good idea to start thinking about it right now.  I'll take your suggestion and will surely share it with my colleagues.  It's really important to think along the lines you're describing. 

     Right now, we are not just looking at what is going on; we are also pointing to the policies.  So, thank you for that.

     >> MICHAEL:  You're very welcome.  I'm engaged in designing the AI policy for Poland now.  It's a crucial thing.  I think it was overlooked in the U.K., so we here in Poland are at the stage where we can actually put that in place ahead of time.  China's got its policies, Russia's got its policies, but obviously, you can make that happen ahead of time.

     >> JUAN PAJARO VELASQUEZ:  Thank you.

     >> MICHAEL:  You're welcome. 

     >> Hello, my name is Amelia.  I'm here as part of the Projective Summit.  Thank you very much for this presentation.  I think there were a lot of very interesting and sometimes eye-opening points made. 

     So, I would like to ask you something, because recently one person told me about a situation.  When you are born here in Poland, you get a personal number assigned to you. 

     This number indicates the gender that was assigned to you when you were born.  So this person came to a situation where they had to enter their name and this personal number, and the form said that their name doesn't match the personal number, because based on it this person should be female and their name is not female.

     So, I just wanted to ask you: what would be your advice on how to react in such situations?  Because, you know, this person couldn't fill in this form, because it just said that their name doesn't fit their gender.  What's the problem?

     >> JUAN PAJARO VELASQUEZ:  That's personal for me; I've dealt with that a couple of times.  It's mainly a regulatory problem of not taking into account what people's names are. 

     I think they actually have to take into account that people change their names, and probably should assume that names don't indicate gender and not use gender as a solution.  I don't know how willing the government is to do that in policy.  It's something that is hard to deal with, but the government has to find a solution.

     For the person, it's hard.  It shows that sometimes you can't even fill out a form; it has actually happened to me a couple of times.  I'm nonbinary, and a form only has two options, male and female, and my name doesn't fit either.  I was like, what should I do with this form? 

     So, I think people in the private sector, not only the public one, should be aware of these realities and stop making people feel that way.  They have to get with the times.  We're in 2021.  It's like that.

     >> AMELIA:  They don't have to follow such rules, you know; they don't have to divide names into female or male.  It's just something ‑‑ I was just wondering how we could advocate for this, how we could persuade companies that they don't have to do that.  Okay... I see there are other questions.

     >> JUAN PAJARO VELASQUEZ:  Do you want to say something related to that, or comment on something else?  You have to have really good examples of how this affects people.  And actually, for me, for us, it works very well to show real-life examples of how this issue is affecting people, how this is important, and how this is something that gives value to the company. 

     >> Thank you very much. 

     >> ALKA:  Thank you very much.  I was actually also prompted by the previous question, about how we can, for example, maybe remove gender.  In the Netherlands, Dutch ID cards will, within five years' time, no longer have gender on them, because in some cases it is just irrelevant. 

     And I think more countries should follow that example and ask: what information should we collect, what information should we process and put into those AI models, and do we need gender in our training data?  That should be a question. 

     Because for some purposes, quite a lot of purposes actually, you don't need that.  So, I hope other countries and companies follow this example and try to... yeah, thank you.

     >> JUAN PAJARO VELASQUEZ:  We have a question online.  Eileen?

     >> EILEEN CEJAS:  I think your research about AI is interesting, and I agree that so far, during the pandemic, AI related to health has been affecting human rights, especially for gender-diverse people.  In regard to Veronica's comments, it's important that we draft and think about these different policies from the beginning so that they are human-centric.

     In this way, we consider gender-diverse people from the very beginning, and not as something to fix once policymakers discover they are affecting these communities. 

     And finally, I'd also like to thank you for bringing this important conversation to Poland IGF 2021.  Thank you very much.

     >> JUAN PAJARO VELASQUEZ:  Go really fast because we only have one minute.

     >> Hello, thank you for your presentation.  My name is Ron.  I'm a cybersecurity analyst, so I understand that AI is a tool for extracting patterns from data, and you said that with facial recognition, the bias makes the readings for trans people very inaccurate.

     I think this is inherent to the technology, because of the borderline cases: the AI is there to find patterns, and in cases where the patterns are not so clear to the technology, it will be wrong. 

     I think this actually happens in a lot of other applications of AI.  So, how can you choose what weight to give AI in these kinds of situations?  How do you distinguish between things that can use AI and things that cannot?

     >> JUAN PAJARO VELASQUEZ:  You have to ask the people affected by that.

     >> RON:  That's the main idea; I'm asking you.

     >> JUAN PAJARO VELASQUEZ:  Most of the people affected by AI are trans people, so I think we have to include them more in all the stages of developing these technologies.  If we have more trans scientists behind developing AIs, we'll probably have less bias.  That would be one of the many outcomes you could have with AI. 

     Yeah, they will probably also address better how to embed this kind of data so it is not so biased towards them, and bring in some trans theories.

     >> RON:  Thank you.

     >> JUAN PAJARO VELASQUEZ:  I think we're finished here.  Thank you very much for being online and on‑site in this lightning talk.

[applause]

    

     [Presentation concluded at 5:32 a.m. CT/12:32 p.m. UTC]