IGF 2022 Day 1 Lightning Talk #12: Talking about facts: how to regulate AI from a queer perspective

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> MODERATOR: Welcome, everyone, we are starting the session. Hello, everyone, I am from the Internet Governance (?). We are going to talk about AI regulation, specifically about bias and gender. We are going to address some of the current policies on artificial intelligence around the world, why they are not consistent with one another, and some of the gender bias issues that are arising. So Umut, the floor is yours.

>> UMUT PAJARO VELASQUEZ: Thank you. Could you share the slide, please? The next one. This lightning talk comes from a question I was asked last year in Poland. That lightning talk was about queering artificial intelligence.

Someone during that talk asked me how we can apply a queer perspective to the different regulations that are being made right now around the world. At the moment I didn't have an answer. I probably still don't have a full answer, but I have a better idea of how to do this.

So that's why I am presenting this lightning talk, trying to explain, in a really quick way, a process that I found quite interesting. But before we go into how to queer the different regulations around AI, I would like to cover some concepts related to this topic. Next slide, please.

We are starting with the basics. Artificial intelligence is a field that combines different technologies trying to imitate the intelligence of a person and facilitate processes of decision making in general. Right now it is mostly used to make classifications based on input data. Here the word data is important when it comes to regulating artificial intelligence, because data has a relevant role in all of these conversations.

A perspective is the way we see the world. It's the way we understand the different forms of knowledge, their validity and scope, and the distinction between a justified belief and an opinion. It's a way of seeing things. Next slide, please.

Thank you. Queer or queerness. I'm going to simplify this by just saying that queer or queerness is something that is not considered normal, something that sits outside of what can be regulated. So when I propose to bring it inside regulation, it looks like a conflict. But actually it is more like a solution to many of the cases of discrimination and bias we face right now in the artificial intelligence technologies being developed and in different stages of deployment around the world.

The final ‑‑ it's not the final. Another concept I want to recall is regulation. The definition as a noun is an official rule or the act of controlling something; as an adjective, it is what we understand as according to the rules or the usual way of doing things. So we see that regulation pretty much goes against queerness, but we can put a spin on that concept and start to include ways of giving a touch of queerness to regulation. Next slide, please.

Well, this part has to do with data. And it is one of my favorites, because this is where we see how social constructs can actually be turned into mathematical models and, at the same time, applied to the different regulatory models. One of the examples is the one I put here: the formal definition of fairness. The formal definition of fairness requires that predictions for cases with the same ground truth must be equal across the different protected attributes at all times. Which means that if you put into a specific data set the information female, male, or nonbinary, the outcomes should be almost equal. If they aren't equal, the model is unfair.
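To make that fairness notion concrete, here is a minimal sketch (not from the talk; the group labels and the numbers are hypothetical) of checking whether predictions are equal across gender attributes for cases with the same ground truth, by comparing per-group true positive rates:

```python
from collections import defaultdict

def true_positive_rates(records):
    """Per-group true positive rate: P(prediction=1 | ground truth=1, group)."""
    hits = defaultdict(int)   # correct positive predictions per group
    pos = defaultdict(int)    # actual positives per group
    for group, label, pred in records:
        if label == 1:
            pos[group] += 1
            if pred == 1:
                hits[group] += 1
    return {g: hits[g] / pos[g] for g in pos}

# Hypothetical (group, ground_truth, prediction) triples.
data = [
    ("female", 1, 1), ("female", 1, 1), ("female", 1, 0), ("female", 1, 1),
    ("male", 1, 1), ("male", 1, 1), ("male", 1, 1), ("male", 1, 1),
    ("nonbinary", 1, 1), ("nonbinary", 1, 0), ("nonbinary", 1, 0), ("nonbinary", 1, 0),
]
rates = true_positive_rates(data)
print(rates)  # {'female': 0.75, 'male': 1.0, 'nonbinary': 0.25}

# Under the equal-predictions-given-equal-ground-truth criterion,
# a large gap between groups signals an unfair model.
gap = max(rates.values()) - min(rates.values())
print(gap)    # 0.75
```

In this toy data the model recognizes nonbinary positives far less often than male ones, which is exactly the kind of disparity the definition rules out.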

So that's why I wanted to bring that up here, because it is one of the things we have to take into account when we talk about the regulation of AI: the fairness part. It is probably the attribute we are going to use to evaluate data sets. Because we should ask, every time, whether we are going to use gender as one of the variables of a data set: is gender actually necessary in this data set?

And in the cases where it is, what regulations apply to those situations? Because right now, we don't have regulatory systems that actually address that kind of thing. Next slide, please.

This is the question: can we make a queer AI? I think we can. But we have to take into account different methods and possibilities. In order to make it happen we have to address the different biases we find in the different models. The models tend to replicate human knowledge in general, whatever is most common in the data we use.

So in order to make these kinds of models, these kinds of artificial intelligence, more queer, we need to recognize the systematic errors that create unfair results: for example, giving more privileges to a certain group or user over others without a logical reason. That is bias, and we need to acknowledge it: privilege being granted based on variables like gender, race, and others. We need to be aware of this in general.

We need to test models across demographic groups to ensure there are no biases in this regard. If we actually want a queer artificial intelligence, the models we are producing have to be developed with people who are queer. They have to be there not only as part of the data but at every stage of the process: from the design, through the development and the deployment, to the final process of refining the algorithm, which is supposed to constantly evolve.

Finally, in this part, it is important to report demographic statistics for the data we use to train these models. Because if we don't say where the data came from, we don't know where the bias came from. That's one of the most important things in this case. Next slide, please.
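As an illustration (again not from the talk; the field name and rows are hypothetical), such demographic reporting can be as simple as publishing the count and share of each group represented in the training data:

```python
from collections import Counter

def demographic_report(dataset, field="gender"):
    """Count and share of each value of a demographic field in a data set."""
    counts = Counter(row[field] for row in dataset)
    total = sum(counts.values())
    return {value: (n, round(n / total, 3)) for value, n in counts.items()}

# Hypothetical training rows.
rows = [
    {"gender": "female"}, {"gender": "female"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "male"}, {"gender": "nonbinary"},
]
print(demographic_report(rows))
# {'female': (2, 0.333), 'male': (3, 0.5), 'nonbinary': (1, 0.167)}
```

A report like this makes underrepresentation visible before training, which is where the bias the speaker mentions would otherwise enter unnoticed.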

Well, there are some solutions that probably sound easy to do. Let's put it like that. But when we go into practice, they aren't easy. The most obvious in this case is diversifying the data sets in a way that includes all the different genders and the ways gender is seen in different cultures.

Since we are talking about gender, and gender is a protected variable, as I said before, we need these different models to follow a privacy by design approach. Because we need to guarantee the safety of the people who take part in the creation process or who are included in the data sets for the different artificial intelligence models we are producing.

All of this has to be done following the FATE model: fairness, accountability, transparency and ethics. And it must include the participation of queer people in all four Ds, design, development, deployment and detection of bias, in order to curate the data sets and get results closer to a benefit than a harm. This is why I propose this approach not only for the organizations and people already working on this, or for academia; it also needs to include the private sector. Because, for example, to follow the FATE model the private sector is really important. So is the government. Next slide, please.

Well, how to regulate with a queer perspective. For this I actually came up with more questions than answers. Because we need to look at the context of what we are going to regulate, so we have a compass, before we put any regulation into practice, including for AI. Here, I note that we need to take into account the theoretical aspects, meaning which theories are part of the regulation we are making. For example, in my case, I argue that if we want to participate in the different regulations being made around the world, we need to (?) the majority of the world, and then embed that diversity into the different frameworks we develop.

It should not carry only the vision of the global north; it should carry the vision of the rest of the world. Then the methodological aspects: how to apply these theories to practices and actions. Methods and tools: which ones and how to use them according to the stage. Data, privacy, design, labor, healthcare systems: why and how. We need to ask why we need regulation in those specific areas, and how we are going to make regulation in those specific areas.

Probably we don't need any new regulation if something already protects queer people from any harm from the different artificial intelligence that can be deployed. In case it doesn't, how are we going to do that? Next slide, please.

Next. Well, I propose to follow a model based mostly on participatory design, which starts with interviews with queer people, asking how they want to be included in these technologies. Because, as we know, these technologies don't include them in any way, and they assume that an account made for a specific type of person applies to everyone, which is not the case. As I said before, for the analysis I propose something that combines decolonial, gender, queer, intersectional, and majority world approaches. Because I think, sorry for the people in the room who are not hearing this, the proposals (?) are more interesting if they come from the minorities of the world.

And on policy, frameworks, and the rest: current and possible future models, frameworks, policies, and knowledge. We need to ask how these structures combine with technology and legislation, and how they could affect us. It's not obvious that something written there could actually affect us, but in the long term we see that it does.

There are loopholes that could allow potential inequalities of power or discrimination. There is a subtext or subnarrative in these pieces of legislation, including their assumptions about what it means to be gender diverse or queer, and we should ask how these structures could be altered to better endorse justice and equality. This means asking how we can change them using the input and data we can get from the queer community itself.

Yes. Next slide, please. The challenge here is probably ensuring, in both technical and legal aspects, a fair model. Because it's hard to translate social science concepts into legal concepts and then into (?) concepts. There are examples of that, but it's not easy to make it happen. But as I said before, fairness is one of those examples where you can see how a concept that came from the social sciences can be turned into a mathematical model and put into practice in an algorithm, in a design.

So probably, in the same way, we are going to find other concepts that can follow that path into algorithmic design, and try to get a more just, more fair, more equal algorithm, a more equal artificial intelligence.

On policies, one of the challenges, and actually an opportunity, is that most artificial intelligence frameworks are being developed right now around the world. That is an opportunity because we could address all the issues that queer people face in those different regulations, and have them included by design.

For example, probably the biggest example we see right now around the world is facial recognition technologies. Facial recognition has a lot to do with how trans people are affected, not only in the profiling of people: trans people around the world were affected by it and have been really active around how facial recognition technologies are implemented around the world.

This relates to privacy by design, the last point. Researchers working with queer people had to understand that they had to do something more about how to build the data sets and, at the same time, protect the privacy of the people who take part in the research. Because we know that the reality of queer people around the world is not the same; some countries are more open and welcoming than others. This is an issue happening around the world, and it's going to be part of our lives sooner rather than later, so we need to start addressing these kinds of things, which are not included in the different frameworks, policies, and regulations.

And next slide, please. Some possible conclusions. Well, as I said, the implementation of a queer perspective all over the world could be problematic and considered difficult to achieve because of the cultural aspects related to it. Still, this approach could offer a more holistic way of seeing regulation and gender in relation to artificial intelligence, because we would be thinking beyond a binary way, seeing all people, and putting the people before the algorithms.

Another conclusion is that any AI design, development, deployment, and bias detection framework that aspires to be fair, accountable, transparent, and ethical must incorporate queer, decolonial, trans, and other theories at every stage.

It's important for the fairness of any AI system. Finally, this concerns whether gender, sexuality, and other aspects of queer identity should be used in data sets and AI systems, how risks and harms should be addressed, and which ones should be mitigated, without forgetting the end users. So AI can significantly improve the quality of life of queer persons. That would be all for now.

If you have any questions I'm here to answer.  Thank you.

>> AUDIENCE: Are there any implementation examples, especially in Latin America? Why am I asking this? Today in Brazil, specifically, we have a government that didn't put sexuality diversity on the census. We have a five year, ten year census to determine what types of people there are, what the ethnicities of the people are, from the perspective of the social life of the people. And they refused it.

We are also having problems implementing some nonbinary perspectives in the national registration and the national ID card we are trying to introduce into the system. It has been mobilized by the social groups that defend (?) the LGBTQI community. I feel like this is an amazing thing because it puts into perspective that our society is not embracing queer people, and other methodologies could help people feel like they're part of the system. Are there any other countries that have tried to implement it, especially in the Global South? Or is it still at the stage of thinking and developing the first basis, to be developed further in the near future?

>> UMUT PAJARO VELASQUEZ: Well, I think the question is about how to diversify the data sets in general, right? A kind of example of countries that are actually doing that.

Right now we have the situation, for example, in Argentina and Colombia, where the nonbinary marker and different markers for trans people were introduced into the national documents. It's similar to the case of Germany. We copied your system.

But that's bringing some problems in the implementation, in how things function in practice. What does that bring? For example, here is something that happened to me. My national ID already has the nonbinary marker. When I tried to get into the country, I couldn't, because facial recognition tagged me as male. There was a discrepancy between my document and what the machine said, until a person came to actually verify my identity. That creates problems, you know? It's quite annoying that some countries are trying to be inclusive, but other systems aren't that way. So we need to say to them: you're implementing this here, but implement it in a way that actually matches the reality you want.

There is a problem. Another country I know is making an effort, especially in the healthcare system, to include transgender people (?) is India. They are really making a lot of effort to include the special needs of transgender people in the healthcare system. The healthcare system there is quite complicated, really quite complicated. But they are making so many efforts to allow transgender people to get to the right medical service they need, at the moment they need it.

And without having to go through all these processes of verification of their identities over and over and over. So they're trying to simplify that, so that transgender people have pretty much the same rights as anyone who has an appointment with a medical doctor. Those are probably the cases I know from the Global South. In the global north, I know they have had problems, especially related to facial recognition systems, and some problems also with healthcare. The problem with healthcare in the global north is always there. Sorry.

Yeah, I don't think there are other cases that I know of. It would be great if in Latin America we started to think as a community about this problem. Because our regulations have something in particular: they are pretty much the same across the countries. So that is something good for us. It's a shame that we don't have something like the European Union to make things easier, but we actually have similar laws across the countries.

Any questions? Okay. It seems that the time is up, so thank you.