IGF 2023 – Day 4 – WS #237 Online Linguistic Gender Stereotypes – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> STELLA ANNE TOEH: Good morning everyone.  So to our online participants and speakers online, we'll just have a 5‑minute delay.  We're currently waiting for one of the onsite speakers.

But thank you so much for joining us.  And we're definitely looking forward to having this cross-regional discussion.  I think it will be a very interesting session.  So please do bear with us for 5 minutes.  Sorry for any inconvenience.

Hi Juliana.  Thanks for joining.  Can we do a quick mic test on your end?

>> JULIANA HARSIANTI: Hello.

>> STELLA ANNE TOEH: Yeah, we can hear you fine, but could you just check your camera?

>> JULIANA HARSIANTI: I'm sorry, I cannot open the video and share the presentation, because the connection is not stable or not good, something like that.  If I open the camera and share the presentation, sometimes it will be lost in the middle.

>> STELLA ANNE TOEH: Okay.

>> JULIANA HARSIANTI: So it's okay.

>> STELLA ANNE TOEH: Do you want to just share the screen or no?

>> JULIANA HARSIANTI: No no.  I will explain in the doc show.

>> STELLA ANNE TOEH: Okay.

>> STELLA ANNE TOEH: Okay.  So while we're waiting the last minute for our final on-site speaker, just to let everyone know, we'll be going one round for each of our speakers, where they will have a chance to introduce their work and themselves.

And then we'll move on to the second round with an open roundtable discussion.  And please feel free to drop questions in the chat box.  Our online moderator will be collecting these and we'll address them in the question-and-answer session at the end.

All right.  I see it is 8:50.  So good morning, good evening, good afternoon to everyone who's joined.  Thank you so much for taking the time to join us.  My name is Stella, and I'm currently with NetMission.Asia, Malaysia Youth IGF, ISOC Malaysia and Kyushu University.

I'll be moderating today's session.  First, I'd like to welcome our first speaker for the session, on my right: Luke Teoh.  Please take it away.

>> LUKE TEOH: Thank you.  So I'll just share my screen.

Okay.  So the topic of today's workshop is mainly online linguistic gender stereotypes.  And you may wonder what that means.  Basically, linguistic gender stereotypes are generalizations or assumptions people make based on someone's gender that are reflected in language.  These include beliefs about the roles, behaviour, characteristics and also abilities of individuals based on their gender.

These linguistic gender stereotypes can be reflected in different aspects of language: gender pronouns, job titles, descriptive language and also conversational roles.  To narrow down the scope and relate it to the internet, my current work focuses on adjectives, one part of language.

So adjectives are one aspect of linguistic gender stereotypes.  And according to Castillo-Mayen, adjectives like emotional, understanding, sweet and submissive are associated with women.  As you may understand, these adjectives reinforce the stereotype that women are more emotional.

On the other hand, adjectives like strong, brave, competent or bold are often associated with men.  And this reinforces the stereotype of men being more dominant or logical.

These adjectives create gender‑based expectations and limitations.  And these influence how individuals are perceived and treated both in the online and in the offline world.  Such language as you may assume or understand has the potential to shape societal attitudes and contributes to gender inequalities by reinforcing traditional gender roles and norms.

You may be wondering, so where is the online or internet part of this workshop?  We're getting to that.  Linguistic gender stereotypes can also be observed in online advertisements.  And this idea actually came from my undergraduate studies at the University of Science Malaysia.  And I think on the panel we also have Dr. Manjet, my supervisor for that course, who really guided me to make this research possible.

In the early 2000s, with the rise of data-driven advertising and targeting capabilities, online advertisements became increasingly personalized and tailored to specific audiences, including gender-based targeting.  Despite the increasing emphasis on gender equality in the Sustainable Development Goals developed by the United Nations, and its recognition as a fundamental human right, gender stereotyping continues to endure.  And according to Bui (2021), linguistic gender stereotypes in advertising are used in targeted marketing and product positioning.

As a result, to focus on a specific group of buyers, the producers persuade by using the right choice of words regarding the product.

So moving to my research, which was conducted with a small respondent group of 43.  They were all aged between 21 and 22, so they can be considered Gen Z youth.  And a total of 183 Instagram captions were selected from companies that I won't name.

And from those captions, 151 adjectives were short-listed.  And these are some of the adjectives we asked the respondents their views on, and which gender or genders they feel these adjectives describe best.

So what did the results show?  The results show that the majority of respondents have similar gender connotations for all of the 15 adjectives.  And most of the adjectives have at least 50% of respondents each answering the same gender association.

So this is a brief picture or view of the results we were able to get.  And participants of the questionnaire hold slightly more gender biases towards these adjectives compared to the qualitative studies in the previous literature that my team and I read through.

However, there are instances where the respondents were very transparent about their gender biases.  For the adjectives sparkling and floral, almost all the respondents think those adjectives conclusively represent only women.  So they thought that by using the adjectives sparkling or floral, you can sell to women, or that you wouldn't even use them to describe men.

However, there are also situations where the participants have ambiguous gender associations with adjectives.  For the adjectives sophisticated and romantic, the respondents' gender biases are about evenly split between men, women and both genders.

What about the way forward?

There are similarly mixed-gender adjectives on both Instagram pages, which might be because those brands are slowly attuning to a more careful approach to gender characteristics, and right now opening up to the spectrum of different ways people would like to identify.

And as for the perceptions of gender-stereotypical adjectives utilised in Instagram captions, the respondents conform to the gender stereotypes for some adjectives.

And they also seem conflicted in opinion for others, and rejected the gender stereotypes associated with the adjectives for the rest.

So seeing how language and culture are inextricably intertwined, it would be great to include the role of language in bridging the gender digital divide.

And the tie of language is perhaps the strongest and most durable that can unite us.  And I think I've taken up my time for now.  Thank you for listening, and I'm looking forward to the rest of the discussion.

>> STELLA ANNE TOEH: Thank you very much, Luke, for your brief overview and your research.  I think it is very interesting to see how we can relate it to our own experiences.  I think most of us would have seen perhaps the different kinds of advertisements that you might get compared to your friends of different genders.

And so that was the perspective from our Asia-Pacific youth.  Now to our next on-site speaker with us, Arnaldo de Santana.  Sorry if the name is incorrect.  But yes, your seven minutes starts now.

>> ARNALDO DE SANTANA: Thank you everybody.  I'm Arnaldo, from Brazil, a researcher and internationalist.  And I am researching the LGBTQIA+ community and some stereotypes we face daily.  At first, I came here to talk about some of the issues that we face there.  Gender can be read as a type of structure that gives power to some groups and puts others in a position to be, I don't know, exploited.

Also, as we are talking about these stereotypes, I'd like to bring here the meaning of the scripts of gender that we face daily.  So if you are a girl, or assigned as a girl when you are born, you have to fulfil some of the expectations that society made for you.  And if you go all the way into another perspective, you are not acceptable to society.

The same goes when you are assigned as a boy.  So a linguistic stereotype is a way that people react to speech varieties associated with lower-prestige groups, attributing negative characteristics to the speakers.  And it all goes through a gender structure perspective.

So who holds the power?  And who can put this power to impose something on society?

As minorities, we face some problems daily.  And especially nowadays, with some of what we face from society, it is really important to talk about this.

I'm here not showing any slides, because I feel like it is better to bring the possibility for all of us to talk about the development of an internet that does not bring these stereotypes into our days, and to try to make a positive way of building what we do.

Also, I will reference some of my friends who are developing research about gender stereotypes and linguistic stereotypes and how they impact people between 8 and 16 years, children and teens, and how marketing influences their perspectives online.

So we have some norms developed by society, and the internet reproduces them.  So if I am a girl on the internet, I have to develop my way to catch attention, especially if I am trying to get into the market as an influencer.  And this talks a little about what we have today, with the development of some media industries that bring children to work as performers.  And we face it daily.

Nowadays in Brazil we have some discussion about how children who work from really early ages can handle having so much money.  And I feel that I'm going a little bit out of the topic.  But talking about this, we can see some of the ways our structures and our society are reflected on the internet.  Because talking about the internet, we also talk about power and the face of violence, and about patterns and standards.

And when you have lower-prestige ways of speaking, and when you put yourself in a way that breaks these rules that are inscribed, you go through a way of trying to go beyond these stereotypes.

I feel like this will be the first statement.  Thank you everybody.

>> STELLA ANNE TOEH: Thank you so much, Arnaldo.  Thank you.  Now we'll be going on to the other end, where we have another onsite speaker: Julia Tereza Rodriguez Koole.  Please go ahead.

>> JULIE RODRIGUEZ KOOLE: Thank you.  My perspective and my narrative will be different.  I will change the scope from the male to the female side, and think about how the gender stereotypes are used beyond simple prejudice.  They are weaponized to mobilize a mostly specific demographic.

>> JULIE RODRIGUEZ KOOLE:  ‑‑ in an attempt to recruit them to radical and terrorist groups.  These memes, jokes and many types of content are used as a way to gamify hate.  You first propose a game.  As a joke, you, the perpetrator, submit that joke in a public venue on the internet.  We have seen a lot of this activity on younger platforms.

And also more related to the gaming community, using those jokes to first spot someone who is prone to prejudice, or prone to violence with that prejudice.

Because we all might face prejudice and deliver our actions based on prejudice.  But using that to harm others is another step.  And it is another step based on much research.  Rakesh, a psychologist from a university in India, specializes in studying cyberterrorism through psychology, and shows that participating in these activities brings a reward, a psychological and chemical reward, to the male audience that is trying to diminish, demobilise and attack mostly the female youth.

And also, there is a study at a university in Germany whose name I fail to recall right now, on trying to make in-service teachers aware in the German universities.  For the majority of the audience, it starts with the dynamics and the educational programmes that exist on these subjects, which try to convey an ethical programme, an ethical guideline, to the teachers.  But there is a minority on whom the activities have no effect.

They don't even get that it is an activity to bring awareness to female questions, female problems, and also an attempt to stop misogynistic behaviour and the diminishing of the power of women.

And then my research studies a test group in the Brazilian human rights ministry that tries to typify what hate speech is.  And it is a really important movement, a really important action taken by a government, to try to categorize what hate speech is.

And we were trying to see if this test group was seeing this gamification behaviour in the youth, and trying to recognize that movement, that trend, that is happening amidst our youth, our male youth.  And we found that, although it didn't specifically target the terrorist groups who sought to radicalize the youth with internet linguistic stereotypes.

They can recognize that linguistic stereotypes and the internet can both participate in an infrastructure and a design of platforms that facilitate hate.

So we have a demographic that enjoys what they are doing, that is aware, and that is being coopted to organise groups, and to try to demobilise protests and the people that stand out: young female youth leaders.  And this is all connected, leading to a rising, reactionary demographic in my country, giving us a difficult and violent and sad environment.

And I would like to encourage anyone who wishes to know better, or is dealing with that situation in their country, to reach out to me and the many others who are trying to strengthen human rights in the world.

>> STELLA ANNE TOEH: Thank you so much, Julia.  Interesting to see how we progressed from the more neutral ways linguistic gender stereotypes can appear in your life to perhaps more extreme cases.

I'd like to open the floor for one short question, if anyone has one, from our online participants who have been with us, or from on-site.  Just a quick question you might have for our youth researchers about what they presented so far on their efforts in researching online linguistic gender stereotypes.  Or if you have any general question that you would like to see brought up in the roundtable immediately after our next speakers.

>> AUDIENCE: Good morning.  I am a non-binary person from Brazil.  And I think I have a comment and a question.

First, the comment is how important it is to encourage linguistic diversity in digital media, especially for queer, LGBTQIA+ people, and to think about how the moderation of digital content needs not to be based on generalizations of discourse.

In Brazil we have two words (?).  When correlating, we think moderation can be mitigated.  But to the same extent, I can foresee violence when it is related to the conformity of concepts.  My question is how (?) narratives from vulnerable communities such as LGBTQ+ people and black people, for example, in the language of Brazil.

Thank you.

>> AUDIENCE: Thank you very much.  I think it is a very interesting question, and thank you to the panel for this discussion.

From my point of view, it is very true that, you know, social media has very much come to shape all of our regular socialization, our social life, and what we are thinking, and is probably just even more, you know, reflective of it.

Let me share this fear.  The fear is that, you know, sometimes we see how social media is being influenced by thinking that may propagate violence against various gender minority groups.

And I don't know, do we have any information, if we can shed some light afterwards, about how artificial intelligence might also be propagating this kind of behaviour and violence against other sexual minority groups?

The second thing is, do you see any possibilities of countering narratives in terms of this kind of hate speech, or this violence, that actually can be codified?  And, I don't know, making some kind of positive narratives.  So what is possible for artificial intelligence, again?

Thank you very much.

>> STELLA ANNE TOEH: All right.  Thank you for comments and questions from the floor.

So I'll let Julia go ahead.

>> JULIE RODRIGUEZ KOOLE: I would like to address Wilson's question.  The same report, which in English would be the report of recommendations for tackling hate speech in Brazil, has a section about hate speech and grammar.

In Brazil, for context, we have many grammars.  We have a formal grammar.  We have a popular grammar.  And we do also have a queer grammar.  And there is targeting by those extremist groups to mimic and ridicule this grammar, which is highly based on the West African influence of the people who were kidnapped to Brazil in our colonial past.

But their heritage lives on in our grammar and our form of speech.  And this is being targeted as also a project to mobilise the youth, with jokes and also protests against any recognition that this grammar might have in any public venue, be it social media, a television channel or a radio programme.

They deny the validity of this way of speech.  Because it is true, it is sincere, and it is the way we found to identify each other and to reorganise ourselves as a community.

And this is being also targeted and weaponized, dismissed roughly as the "gay way" of saying things, or the gay speech.  But it is much more than that, and it is much more complex, because it is not only us that use that kind of speech.  The thing is, it doesn't matter what the content is; they don't care about the content.  They care about the group that uses such content.

They will ridicule it, they will satirize it, to spot people who might have a sensitivity to hate speech, and who might also be more gullible, thinking that they are changing society for the better by persecuting a group because of the way they speak.

So yes, this report also recognizes this strategy of illegal groups to act and enact their wills and projects on the Brazilian community.  But we are not restricted here to my country.  This is a specific example to enlighten the audience about the many aspects of online linguistic gender stereotypes.

>> STELLA ANNE TOEH: Thank you so much, Julia, for your answer.  The second question will be addressed by a later speaker.  But first we'll move to our next speaker, joining us online from Malaysia: Dr. Manjet Kaur.  If we could please have the online speakers on the screen.  And your seven minutes starts now.

>> MANJET SINGH: Good morning everyone.

Am I loud and clear?

>> STELLA ANNE TOEH: Yep.

>> MANJET SINGH: Okay.  Thank you very much for giving me the opportunity to share some views here regarding the theme of today's talk, online linguistic gender stereotypes.  What I would like to focus on is that we cannot deny that these discrimination issues exist online.

But what we need to look into here is maybe what is the way forward.  How can we address these problems?  How can we improve the situation?

Okay?  So therefore we need to focus, for example, on inclusive language.  Okay?  Inclusive language.

So inclusive language has to, how do I say, acknowledge the diversity that the genders present.  Like in the Asian context, in Malaysia, we talk about male and female, and when you talk about the LGBTQ group, there are also issues: the racial composition of the country, Malaysia being a Muslim country.  So when it comes to LGBTQ rights, there are issues in terms of how they are represented in advertisements online, for example.

There are sensitive issues, and also some kind of discrimination that exists.  Okay?

So what is needed is actually inclusive language that is sensitive to all groups of people, no matter which gender category you are in, and that actually fosters, how do I say, equal opportunity for all of them.  That is basically very, very important.

Okay?

So basically, how can we ensure that this can be, how do I say, imposed or implemented in the online setting?

Firstly, it is to choose gender-neutral terms.  It is sometimes very, very difficult for us.  Like, for example, in the presentation done by Mr. Luke just now: when you're promoting products, doing advertisements for perfumes, if you do a survey, you would notice the kind of adjectives that are used.  Are they more inclined to femininity?  Or do they, how do I say, promote masculinity?

Okay.  That is also based, again, on who the product is targeted at.

However, how can we come up with a kind of framework that addresses gender neutrality, even when you're promoting a female-based product or a male-based product?  So these are the issues that need to be considered.  Okay?  These are very, very important issues.

Okay.

Next, what I would like to say, especially also in the workplace context: when you talk about specific language differences between male and female, how can you reinforce diversity?

Earlier I mentioned gender equality.  But to have this phenomenon of gender equality implemented 100% in the workplace, to have a situation of total gender equality, is totally impossible.

Okay?  It is totally impossible.  So we need to work to the best.

So what is the best in terms of promoting, introducing and enforcing diversity?

So if you look at the whole picture of online linguistic stereotypes, how do we define the cluster that is promoting diversity?

Okay?  That's also one side of the coin.

And on one hand, we say it is actually, how do I say, not fair to a particular group.  So online linguistic stereotypes, in terms of classification of gender, can also be considered as something that can be a crime, like harassment.  But at the same time, you can also look at it as promoting diversity, diversity through language.  You know?  The use of adjectives, you know, to describe femininity.  Okay, to describe women, to describe men, to describe the third group or the fourth group.

Okay?

So at times we cannot say that it is being stereotypical; it is also promoting diversity.  You need the existence of it.  But how it is used, how it is addressed, that is very, very important.  We have to start with education, to create awareness among the youngsters: not to misjudge it, but to respect the diversity that is presented through the language used to explain or to label someone as what he or she is.

Okay?  So that's very very important.

So this will actually, kind of, contribute to a sense of belonging for all groups of people.  In terms of, let's say, I see a product which is being advertised, and it is described with a usage of words that I'm happy about.  Okay?

And you have another person also looking at the same product from a different perspective.  Okay?  In terms of how the product is described.

So how do you create here a sense of belonging for both parties?  You know, in terms of I'm going to use the product, but I'm not happy with how it is described.  Okay?  Another person is happy with how it is described.  How do we create a sense of belonging for both parties here?

So it goes back again to early-age education: educating people, the youngsters, to be more sensitive to how you represent the product, to teach the youngsters to use words more responsibly, but at the same time to also respect the diversity that comes with each gender: how each gender is labelled, how each gender is, how do I say, described, and so on.

Okay?

But at the same time, we can make it more unbiased towards one gender.  Unbiased towards one gender, that is very, very important, to avoid gender assumptions.  We also have these gender assumptions: that when you want to sell a woman-based product or a male-based product, you have to use certain words to describe that particular group.  But I mean, diversity can be there, but at the same time you need to ensure there is no bias.  Okay?  That is very, very important.

Okay?  So what I would like to also focus on here is coming up with a guideline, a general language guideline.  Okay?  A general language guideline on how you can ensure that, when you have diversity in terms of the linguistic aspects used online, you are able to ensure there is no discrimination or bias, and at the same time you also promote diversity by using very diverse linguistic elements.

>> STELLA ANNE TOEH: Time.  So sorry to cut you off here, Dr. Manjet, but thank you very much.  In the interest of time we'll move to the next speaker; we'll come back to your points on, I think, the general guideline, the linguistic guideline, which I think is very interesting.  But we'd like to move on, perhaps cross-region, to our next online speaker.  Umut, please go ahead.

>> MANJET SINGH: Okay, sure, thank you.

>> UMUT VELASQUEZ: Hello everyone.  First of all, I would like to say thanks for the invitation to this panel.

Well, my focus during this conversation is going to be more related to the user experience of gender-diverse people on social media, and how gender stereotypes actually affect the way content displays on social media.  Especially, I focused my research on TikTok, because I wanted to understand the intersection of being a youth on social media when you are also a person who identifies as gender diverse, for example, non-binary, queer, or on the other side of the binary of male and female.

One of the things I realised making this research is that most of the platforms that are most used actually weaponize against the people that identify as gender diverse, in a way that (?) their content more than that of people that identify as another gender.

So I tried to ask the people.  I came up with 53 interviews with different people from Latin America, where they were telling me their experience using the platform and what they had to do to, like, align their identities to TikTok in a way that they can actually express themselves, pretty much in a normative way, to follow the expectations of the platform and its algorithm, in order to still be present on the platform without any problem and without being targeted or having their content censored, something like that.

Most of the censoring of the content came from the use of specific hashtags related to these people online.  In most of the cases that I came to study, people said that when they used some specific hashtags, or some specific words, the content was sometimes less visualized, or just taken down from the platform without any reason.

And when they asked why the content was taken down from the platform, there wasn't an explanation of why exactly they did that.  They only said it was against the code of conduct of the platform.

So that is when I came up with the question of how we can actually own a platform for an identity in a system that is actually not promoting our identities on site.  And it is a hard one, I have to say that.  But most of the people actually expressed that they never fully aligned their identities, and never fully felt accepted inside the platform, because of the frustration or the self-censorship they have to practice, like in everyday life.

So probably most of the content that you see about these people on the platform is actually more related to a trend that was imposed by someone already relevant on the platform.  Because when someone who is already relevant on the platform actually creates content related to LGBTQI people or gender diversity, that's when the content becomes somehow acceptable inside the platform.

When you try to converse on other things that they don't consider could be part of a trend, or something like that, that content doesn't show as much as the other.  And the content actually becomes a way to restrict the way they present themselves online.

This came up as one of the many things they say this platform actually needs to improve: be more clear about what the community standards are, in terms of language but also content.  Sometimes the content they actually portray in their profiles is similar to gender-binary content, but somehow that content is not shown in the same way, or is just taken down from the platform without any reason.

So that was one thing they said.  Also, another thing, when I tried to convey a general recommendation after the many talks I had with them: we need to find, as a community, a way to achieve a space with which this (?) can identify, and that represents the communities in a way that actually makes more sense.  Algorithms should ensure data rights and freedom of expression, without censorship, or fear of constantly, like, cleaning their spaces or hiding their spaces, based on the sense of what is seen as normal.

>> STELLA ANNE TOEH: Time.  So sorry to cut you off here, but just in the interest of time.  Right.  Thank you so much; it is right on time for seven minutes.  So we'd like to move on to our next speaker for their next 7 minutes, joining us from online again: Dhanaraj.  Please go ahead, and as well with your presentation.

>> DHANARAJ THAKUR: Okay.  Hello everyone.  Can you see and hear me okay?

>> STELLA ANNE TOEH: Yep.  All okay.

>> DHANARAJ THAKUR: Good.  Thank you so much.  Thank you to the organizers for this session.  And hello everyone.  My name is Dhanaraj Thakur.  I am the research director at the Center for Democracy and Technology.  I am from Jamaica in the Caribbean but based in the United States.  CDT is a tech policy organisation based in the United States that focuses on human rights in digital spaces.

So there are two main points I want to make with regard to the overall theme of this session.  First, with regard to the issues around how language can be used for hate and to promote violence, which our previous speakers already alluded to.

And how gender stereotypes can also be leveraged and used in language to promote false and misleading information.

Both are key aspects of the online information environment and contribute to the digital divide.

The second point I want to make is that we often think of artificial intelligence tools, like natural language processing tools and large language models, in terms of the way they can be used to address these problems, for example cleaning up the hate speech, violence and misinformation that is targeted at women and other gender identities.

And I argue that this actually makes the problem worse.

To talk a bit more about the language of hate and mis- and disinformation: this kind of rhetoric and mis- and disinformation is predicated on gender stereotypes, which we heard previous speakers describe in better detail earlier.  But all of these often have a disproportionate impact on women, and much research shows this to be the case.

There is less research that focuses on non‑binary and trans people, but the research I've done shows the problem could be even worse for those groups of people.

We take an intersectional approach and look not just at gender but at other dimensions of identity.  And when we do that, we find that there are subgroups that are in fact more targeted with this kind of violent speech and more targeted with this kind of mis‑ and disinformation.

And this leads to several different kinds of impacts.  One is a negative impact on the gender digital divide; it actually makes it worse, and again there is research that shows this.  It undermines the political participation of women and other gender identities.  It has serious economic and health impacts, including mental health impacts, and it has significant impacts on freedom of expression, with chilling effects.  In other words, it suppresses the speech of the people who are targeted, very often women in public life.

One example I want to use to help illustrate this is some research we did focused on women of color political candidates in the 2020 U.S. election.  "Women of color" is a term in the U.S. to describe women who are not white.  We looked at data from Twitter in the 2020 elections and a representative sample of all the candidates that ran at the federal level, the national level, in that election.

And we found a couple of things with regard to women of color candidates.  Here I want to emphasise this intersectional approach, to illustrate how these kinds of hate speech and mis‑ and disinformation are targeted at particular groups of women, not women in general.

What we found was that women of color candidates were twice as likely as other candidates to be targeted with mis‑ and disinformation.  Twice as likely as other candidates, including white women, white men, and so on.

They were four times as likely as white candidates to be subjected to violent abuse online, violent speech online.  And they were more likely than others to be targeted with a combination of false information and online abuse.

I use this example to illustrate the problem and the severe kinds of impacts that particular women face online because of the way language is used in this hateful and violent way, as well as to propagate gender stereotypes and promote false information about women.

So the other issue I wanted to talk about was the use of AI, which someone in the audience asked about.  Large language models, like the one behind ChatGPT, are essentially a machine learning technique that looks at large amounts of data, in this case text, and makes predictions about what kind of text the user wants to see.

So if you think of ChatGPT, you might put in a prompt like "what day is it today?"  And based on all the training data it has available, it can make a guess.  And to be clear, that is all large language models do.  They make guesses.  Very good guesses.  But all they are doing is making guesses, or predictions.

They are not thinking.  They are not human.  They are just making guesses.
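The "guessing" described here can be sketched as a toy next-word predictor.  This is purely illustrative (real large language models are neural networks with billions of parameters, not a lookup table), but the statistical core is the same idea:

```python
from collections import Counter, defaultdict

# Toy "language model": for each word, count which words follow it in
# the training text, then "predict" by picking the most frequent
# follower.  This is table lookup, not thinking, and that is the point:
# statistical guessing driven entirely by the training data.
def train(text):
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model, word):
    if word not in model:
        return None  # no training data for this word: no guess possible
    return model[word].most_common(1)[0][0]

corpus = "the doctor said the doctor is busy and the nurse is kind"
model = train(corpus)
print(predict_next(model, "the"))   # "doctor": seen most often after "the"
```

Whatever pattern dominates the training text dominates the guesses, which is why skew in the data surfaces directly in the output.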

The challenge for us is when large language models are applied to non‑English languages.  Most models, like ChatGPT and many others, are based on data that is available online.  They look at the entire web and draw data from that.

As we know, the majority of the web is in English, yet the vast majority of the world does not speak English.  So there is this kind of paradox, this problem.

So what that means is that there are many languages in the world referred to as "low resource" languages, and I use those quotes because I'm not sure that is the appropriate term, but among scientists they are referred to as low-resource languages.  In other words, there is not enough data available in those languages to support the training of these large language models.

Examples include Hindi, Slovak, Cherokee, Zulu, Telugu, and so on.

Because these languages are low-resource, the use of large language models in those cases won't be as effective.  This is critical because it has implications for the use of these models to address some of the problems I mentioned earlier, the violent speech targeted at women, non‑binary, and trans people.  When we're working in non‑English languages, the models as a tool to solve the problem will fall short.
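The effect of scarce data can be made concrete with a crude sketch: measure what fraction of adjacent word pairs in new text a statistical model ever saw during training.  The corpora and sentences below are invented purely for illustration:

```python
# Crude proxy for data coverage: what fraction of adjacent word pairs
# in a test sentence appeared anywhere in the training text?  A purely
# statistical model can only guess well about patterns it has seen.
def seen_pair_rate(train_text, test_text):
    words = train_text.split()
    seen = set(zip(words, words[1:]))
    t = test_text.split()
    test_pairs = list(zip(t, t[1:]))
    return sum(1 for p in test_pairs if p in seen) / len(test_pairs)

high_resource = "the cat sat on the mat and the dog sat on the rug " * 50
low_resource = "the cat sat on a mat"
test = "the dog sat on the mat"

print(seen_pair_rate(high_resource, test))  # 1.0: every pair was seen
print(seen_pair_rate(low_resource, test))   # 0.2: most pairs never seen
```

With a small corpus, most of what users actually type falls outside the data, so the model's guesses, and any moderation tool built on them, degrade accordingly.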

And a final point I want to end with.  Many of the countries where we talk about low-resource languages are countries in the global south where the digital divide exists, that is, where fewer people are online.  There is also a significant gender digital divide there, which means that men are more likely to be online and are producing more of the content online, which is the content that large language models use.

So we have a vicious cycle.  The models are using content in these language contexts that is produced mostly by men, which propagates further stereotypes and creates further problems for addressing issues like violent speech and gendered mis‑ and disinformation.

So I will stop here for now.  And then we can talk further in the subsequent discussions.

Thank you very much.

>> STELLA ANNE TOEH: Right, thank you very much Dhanaraj for your sharing.  It is really interesting to see.  We addressed the question raised earlier from the floor on what it looks like now to use AI to address this issue.  And I really like that you mentioned how gendered linguistic stereotypes are essentially a vicious cycle, tied to the issue of the majority of content being in English, but also related to the global south and global north divide.  That ends up reproducing the traditional gender stereotype of who is more likely to be online.

So we have time for our final speaker of the first round, 7 minutes, also joining us online: Juliana.

>> JULIANA HARSIANTI: Hello.  Thank you for this interesting conversation.  My name is Juliana.  I'm from Indonesia.  Today I'm really wearing my hat as a translator for the Global Voices platform, also working on multilingualism.

Okay.  I will start with the fact that language has such an important role, because language can be seen as a form that shapes the world.  What we say and how we use language affects our thinking, our imagination, and our reality.  My language, Indonesian, doesn't have grammatical gender like Spanish or French, so I grew up without much awareness of how gendered language shapes perception.

But that changed when I started to learn French and then Spanish.  At that time I realised that grammatical gender has some crucial impact on how people who use the language perceive themselves.

In those languages, the gender automatically changes to masculine when a plural subject has mixed genders.

When I talk to French or Spanish speakers, it makes them think that the masculine, or men, has a better position in the community, is more superior than the feminine, or women.  Aside from gender, language also has nuance.  I will give an example in English.

There are some words which have a negative connotation when applied online to women.  One example has a pejorative meaning and is targeted at women who try to lead the community or group, and it makes people think that the women and girls "act like a boss".  It never happens to men who try to lead a community or group.

The pejorative meaning also exists in languages without grammatical gender.  In Indonesian, for example, (?) is quite widely used online, targeted at women and the LGBT group, drawing on this kind of stereotype.

And what is the meaning?  As I mentioned before, it is pejorative and has been used against LGBT people.  Several studies have found that those targeted become less active on the internet.  And this means (?).

So women, girls, and LGBT people will be less active and afraid to express themselves (?) in the digital world.

Another case is the translation function on the internet.  There are tools that translate from other languages into English, and they automatically translate with masculine subjects.  For example, if I want to translate "the subject is a doctor", the result in English is "he is a doctor".

But when it comes to "nurse" or "secretary", the subject becomes feminine.
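This default follows from frequency: if a system picks whichever pronoun co-occurs most often with an occupation in its training text, the text's skew becomes the output.  A minimal sketch with an invented corpus (not any real translator's data or method):

```python
from collections import Counter

# Invented, deliberately skewed corpus of (pronoun, occupation) pairs,
# standing in for statistics a translation system absorbs from the web.
corpus = [
    ("he", "doctor"), ("he", "doctor"), ("she", "doctor"),
    ("she", "nurse"), ("she", "nurse"), ("he", "nurse"),
]

def default_pronoun(occupation):
    # Pick the pronoun most frequently paired with this occupation.
    counts = Counter(p for p, o in corpus if o == occupation)
    return counts.most_common(1)[0][0]

print(default_pronoun("doctor"))  # "he": 2 of 3 mentions in this corpus
print(default_pronoun("nurse"))   # "she": 2 of 3 mentions in this corpus
```

Nothing in the source sentence forces a gender; the stereotype arrives entirely through the frequency statistics of the training text.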

Today we have ChatGPT, and other language models that use AI to scrape and source content from the internet.

Why does this become a problem?  Because we hope the future will not discriminate by race, gender, or language.  We could start by promoting more gender-neutral language.  As a community we can provide input, hold discussions, and make language more inclusive in digital and real life.

I appreciate (?) some words, working on job titles with gender, and the results from community input constantly give feedback to the machine translation companies.

In closing, I believe that language is dynamic and will keep growing over time in the real and digital worlds.  It needs constant work from communities to give input and make language more gender-neutral and more inclusive, so it can be fairer for everybody.

Thank you.  I look forward to the discussion.

>> STELLA ANNE TOEH: Thank you so much Juliana.  Very interesting to hear from your perspective and the industry of translation.

So we've heard from all our speakers for the first round.  And I think I'd like to go to perhaps our second round, begin our second round of roundtable discussion.

So perhaps I could start with a question for you, Dhanaraj, regarding what measures you feel can be taken to improve things.  From your perspective, having researched the impacts and made the case against the potential of using AI here, what do you think needs to be discussed more, particularly?

>> DHANARAJ THAKUR: Yes, thank you for the question.  I think this topic is precisely what needs to be discussed more, particularly within the industry, around what I'd call the gender gap in training data.

I mentioned the problem of the gender digital divide in the global south.  There was a recent study from the University of Pittsburgh that looked at the training data used for ChatGPT.  Let's take that as an example.

It examined the training data for ChatGPT generally and found that only 26.5% of that training data was authored by women.  So the vast majority, almost three quarters of the data, was authored by men.

I often think about the implications of this.  Think about how ChatGPT, and models like it, are now being considered and incorporated, for example, into schools and education systems, with this kind of gender gap in the training data, and what implications that will have for youth going forward.  I think many of the other speakers have already pointed to some of these kinds of problems.

So I think the key question, particularly at the level of education and industry, is about the gender gap in training data.

>> STELLA ANNE TOEH: Thank you very much Dhanaraj.  Since you mentioned schools, I'd like to hop over to Dr. Manjet for your thoughts.  You mentioned your thoughts about general language or linguistic guidelines.  How do you foresee this relating to what Dhanaraj mentioned about the data being used to train these large language models?

>> MANJET SINGH: Just now what I mentioned was on the learning guidelines.  So basically I come from a country in Southeast Asia, Malaysia, which is very, how do I say, governed by religious rules; an Islamic country.

So there are a lot of things when it comes to this kind of bias, language usage, the stereotypical issues, that are swept under the carpet.

So what is actually needed is to come out visibly, being more open and explicit in terms of learning and development.  That is very, very important.  It starts with education from the beginning, because this is an issue that exists but is not being addressed in the context of (?), for example.  So it has to start with education, and it has to start with the educators themselves.

If the educators do not believe in, how do I say, having equality while at the same time promoting diversity, it will be a failure.  So educators themselves must be trained on how they are going to learn, and how they are going to make their students learn and develop on this.

The next point is that there are no clear rules, regulations, or policies set by the government on these matters.  Okay?

So it should be led from the top.  That's very, very important.  It should be top down.  When there are visible, clear rules and acts on how language is used to represent a particular group, there are rules we can fall back on.  For example, if you go to advertising companies nowadays, if no clear rules are communicated to them, nobody will care.  Okay?  So that's very important: leading from the top, having some rules, some policies in place.

And the next one is the workplace itself, in industry, for example, and all the other industries with products related to particular groups.  They should have, how do I say, training at the workplace for their employees, for their staff, to talk about it openly, discuss the matter, have people of all groups sit down together and deliberate on the matter openly.

There should be no criticism against one particular group, and no particular group should be neglected or marginalized.  It shouldn't be dominated, for example, by males only.  Okay?  So these things are very, very important: training and awareness at the workplace, if learning and development did not work at school.

So this is where it has to happen.

>> STELLA ANNE TOEH: Thank you so much Dr. Manjet for your perspective.

And hopping over to one of our onsite panelists.  Arnaldo, what do you think about the question raised earlier, on the potential negative case against using AI currently to address this issue?

>> ARNALDO DE SANTANA: AI also reflects the people who develop it.  So if we have structures of power and we reproduce them to impose what is correct or what's not correct, it might not be something that is quite applicable.

I was also thinking about the question that Will sent us, about all the marginalized ways of talking and languages such as Pajubá.  It reminds me of the variation of Portuguese that came with the people who were colonized, and also of indigenous people talking in languages that were, like, borrowed from the world, especially in Brazil; languages that we don't know how to talk, how to teach, and how to keep going.

So I feel that we have to develop something, especially on the internet, that allows our existences to be more positive, and that does not erase our memories and our lives just because colonialism is there, putting in perspective who might have the power.  And that is always the cisgender white male from Europe.

>> STELLA ANNE TOEH: Thank you so much Arnaldo.

So that answers the question; we're moving on to our next one.  How can these online linguistic gender stereotypes result in negative experiences for youth, both online and in the real world?  I'd like to start with our onsite speaker Julia, if you would be able to share on the question.

>> JULIE RODRIGUEZ KOOLE: It is hard to pinpoint where to start, because there are so many possibilities, and they are probably never positive.  In my line of study we see, at a minimum, damaging outcomes, and outcomes within groups too.  Because this mobilisation of online linguistic gender stereotypes on the internet can drive many women, and many gender-diverse people and trans people specifically, away from spaces.  That is probably the most common and negative effect.

Then there are the ones who power through, who do decide to move on and face the discrimination and be there, who do not give up and do not decide that they don't want to be there, "there" meaning any social media platform or group, study group, or gay community.  If they decide to stay in that community, they will over and over experience aggressive hate and ever more violent experiences, which can result in a distortion of self-image or self-worth.  It can lead to many mental illnesses.  And it will always end up building in their minds that that is not a space for them.

Or that they should live under a specific expectation of what gender is, how they should behave, and how they should talk.

>> STELLA ANNE TOEH: Right.  Thank you so much Julia.  You mentioned the perception of self-worth, and I think it is a really good thing to ask about.

So, for our speakers: how can such online linguistic gender stereotypes affect users' perceptions of their self-worth and value?  And secondly, what implications would this have for the current gender digital divide?

I'd like to start this round of question and answer with your opinion, Umut.

>> UMUT VELASQUEZ: First, on the impact this kind of gender-stereotyped language has on people's self-identity: most of the people I talk to say that they never get to feel fully identified with the platform they are using.

Because most of the time they have to mould themselves into something they aren't.  So they develop anxiety about how to present themselves online in a way that doesn't go against the community standards.  Some people actually stop using the platform because they never feel they belong, and they feel left behind in the conversation.  It becomes an issue in the way they socialize with their partners or the rest of the community, because they can't fully express themselves on the platform.

So we've seen that the platforms' community standards, and the way they moderate content, seem not to be harmful, but actually they are when it comes to gender diversity.  People who don't fit the norms or roles expected of gender minorities are affected in such a way that they can't fully express themselves on the platform.  And that brings consequences for their mental health.

>> STELLA ANNE TOEH: Right.  Thank you so much Umut.  I think it is very enlightening that you mention that they never feel fully identified.  And I guess a sense of belonging, which was also mentioned earlier by our panel, is really important to consider.

So for the same question, I'd like to go over to Juliana.  What are your thoughts on how such online linguistic gender stereotypes can affect self-worth and value?

>> JULIANA HARSIANTI: Okay.  I would talk about the slurs and the negative words addressed to women and girls, and how they then affect how women decide to act on the internet.

So yes, as someone said, it is important to address this kind of online bullying with certain negative words.  I know and understand that there are linguistic differences across cultures, because some words carry a certain negative meaning in some cultures.  Maybe we can promote more inclusive, more gender-neutral language, so people feel safer expressing themselves on the internet.

And the second one is about the profiles and content on the internet about women, from the perspective of women and girls.  Before the ChatGPT era, Wikipedia had several organised efforts around this gap, to translate and create content about women, so the internet would have more content and more profiles about women and by women.

So I think this is my opinion.

>> STELLA ANNE TOEH: Right.  Thank you very much Juliana.  As you mentioned, there are real-life examples of what is generally attributed online.

And I think we have an intervention from (?).

>> What we've been seeing is that the gender divide has been increasing since 2014, amid worldwide digital transformation.  Emerging technologies and rapid advancements have still resulted in women being left behind.  And global statistics, in my opinion, just do not do this issue justice, as the gender digital divide worsens when we consider marginalized women, like elderly women, women in rural areas, and other parts of the community.  According to UNICEF, more than 90% of jobs worldwide have a digital component.  And while there is research on women above 18, what we're seeing is not enough research done on women or young girls below 18.  Basically my point is that these online linguistic gender stereotypes will most definitely affect their perceptions of what jobs or careers they are, quote/unquote, "able" to choose or which are "supposed" to be for them.  Thank you.

>> STELLA ANNE TOEH: Right.  So thank you very much to our speakers for the round 2 session.

So we're coming up to the last 6 minutes.  I'd like to ask if there are any questions from our onsite participants, and online as well.  Right.  Go ahead please.

>> AUDIENCE: Hey.  Okay.  So thank you.  My name is Hanat, from the youth Brazilian delegation.

First of all, thank you for the panel.  Really, really interesting.  I want to make a sort of tangential comment and discuss a bit about platform algorithms, especially in visual platforms; maybe something related to what Mr. Velasquez mentioned before about TikTok and Instagram.  We see young girls using these platforms to promote themselves, using their own bodies as a commodity, being vulnerable to predators.

I was wondering what the panel thinks we can do to protect our youth on digital platforms, considering this.  How can we moderate comments and language, and how can civil society act in defence of our youth, especially young girls, on these visual platforms?

Thank you.

>> STELLA ANNE TOEH: Thank you for the question.  Maybe Dhanaraj, would you have any comments on that?

>> DHANARAJ THAKUR: Yes.  Thank you for the question.  Maybe three quick suggestions or thoughts.

So one is, there is a lot that the platforms themselves can do.  You mentioned TikTok, Instagram, and others.  They could better design their platforms to allow youth and other users to better control, or push back against, the kind of bad content they might receive.

There is also the privacy side, and the targeted-advertising model that all of these platforms use.  Having greater privacy protections on these platforms can reduce the degree of targeting, and therefore the degree of amplification that you will observe on the platforms.

So for example, in the case of young girls being exploited, or things like that, the extent to which algorithms promote that kind of thing can be reduced, particularly on the platform side, if there are changes to the design and the right incentives are in place.

And the last thing I'll say is that a lot of what happens on the platforms is still unclear, because as researchers, governments, civil society, and activists, we don't have sufficient insight into the platforms.  What's important there is that social media platforms provide more data, in a safe and secure way, for researchers to better understand what is happening, because then we could come up with better solutions to address some of the problems the person in the audience raised.

>> STELLA ANNE TOEH: Thank you very much Dhanaraj for your comment on that.

I guess we can see that in the future we definitely need more representation from the private sector on such an issue.

So I'd like to move on to a question we have from an online participant.  Thank you very much to him.

He writes: thank you for sharing these important perspectives.  He is a master's student in communication and was interested in the presentations.  His question: it is difficult to judge whether some hate speech arises from gender bias, because there are many factors in a given context.  Under this situation, how should we tackle hate speech stemming from gender stereotypes?

So perhaps we'll have a youth perspective on this and then hop over to Dhanaraj as well again.  Or anyone else from the panel who wants to take this question.

Maybe Umut, could we get your input on it?

(silence)

 

>> JULIE RODRIGUEZ KOOLE: Okay.  What should we do to tackle hate speech stemming from gender stereotypes?

Firstly, take care of the youth.  We need to bet on the next generations.

Our age groups, from around 20 to 50 or 70 years old, are already dealing with too many problems that stem from education in the first and second infancy.

And there are many ways to do that.  Schools should study what the main problems in their community are.  Sometimes girls have problems in addressing their physiological necessities.  Other communities have more problems talking about sexual education.  Other communities have problems talking about the social place of women, and men, and other gender-diverse people.

But you, as an undergraduate, can also study how to generate empathy in people who are now disconnected from this scene.  What can you do to get them closer to you, to the topic, and to the subject?  Because we also have a population, in our age group, among undergraduates, in the majors and the Ph.D.s, that doesn't want to progress.  They don't want to go any further because they think it is obvious; like, everybody deserves equal rights.

But how can we captivate the audience that isn't opposed, but isn't actually involved, in the development of a better world?

>> STELLA ANNE TOEH: Thank you Julia.  So we'll hop over to one of our online speakers, Dhanaraj, and then follow with Arnaldo's intervention.  Thank you.

>> DHANARAJ THAKUR: Great, thank you.  I fully agree with Julia's response.  I just wanted to add, and I think Julia mentioned this earlier, that there is a group of younger boys and men that are heavily influenced by what research calls the manosphere, this kind of bubble of hate speech and gender stereotypes that drives a lot of the hate that comes from them.

I think a big issue here, then, is for young men, men, and boys, particularly cisgender men like myself, to reflect on and consider the impacts of the hate speech and/or false information that we might share online.

And as I said earlier, there has to be a degree of empathy.  But I think starting with young men and boys is important.

>> ARNALDO DE SANTANA: So I'd like to add something as well.  Although we don't have any legislation that works internationally to define these patterns and directly label something as hate speech or a gender stereotype, I feel that one can be used to identify the other.

And probably in the future, the way to tackle it must be by breaking the stereotypes.  But nowadays I feel that is not necessarily viable.  We have so many rotten stereotypes that we need to break and rework daily.  I feel that we need more time to talk about it, to innovate on it, and I feel that one can be used to identify the other and to try to be better in the future.

>> STELLA ANNE TOEH: Thank you so much Arnaldo.  Quickly reading Umut's response in the chat: probably changing the narrative of a gender stereotype under attack into gentle responses does not deal with what is actually said.  What is actually said is hate, not freedom of expression, and it affects the human rights of women and gender-diverse people.  So really quickly, Juliana, if you could keep your comments under one minute.

>> JULIANA HARSIANTI: I'll be short, because the others, Dhanaraj and Umut, already mentioned it.  I think it is quite a challenge to beat hate speech in online spaces, because when some women and girls are attacked in online spaces, some people will say it is just feelings, so don't take it to heart.  But I think empathy, and some regulation, law at the country level but even more community regulation, matters: how we should share and how we should talk in a community.

And that takes input from fellow community members online.

>> STELLA ANNE TOEH: Thank you.  So sorry to cut you off there, because we're over time by 5 minutes.  Thank you everyone for joining our panel.  If I could get everyone to come in for a picture; if you could have your video on, Dhanaraj, Umut, Dr. Manjet, and Juliana, if Juliana is fine with that.  And anyone else from the audience is also welcome.

Yeah, just to get a quick picture with everyone.

All right.  Thank you so much everyone.  Thank you so much for sharing and everything.  I think it is really great that we had an opportunity to discuss this.  We received a lot of interest in this topic, so maybe we'll see everyone at a regional IGF or the next IGF next year.  Thanks again, and if you are interested in networking with any of the speakers afterwards, please feel free to contact the session organisers.  And look out for NetMission.Asia.  We'll be continuing on this topic, leading from the youth perspective, from the youth ourselves.

Thank you once again to speakers joining from across the world and from all our participants and panelists here.