
IGF 2020 - Day 3 - OF31 Safe digital spaces, a dialogue on countering cyberviolence

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***


>> TECH SUPPORT: I just wanted to welcome everyone.  Thank you for joining, and welcome to this webinar, organized by U.N. Women.  All participants and panelists are reminded that this webinar will be recorded under the IGF code of conduct and the rules and regulations for participants.  If you want to speak, one of the panelists can allow you to do so.  Alex, are you trying to say something?  You're a little bit low.  Alex is my colleague from the IGF.

[ Inaudible ]

>> TECH SUPPORT: I guess it's time to begin now.  The organizers can take the lead.

>> EVE KRAICER:  Welcome to our open forum, Safe Digital Spaces, a Dialogue on Countering Cyberviolence.  My name is Eve Kraicer, I'm the gender policy officer at the Web Foundation, so I've had the opportunity to co-organize this session with Nandini and Helene.

First I want to quickly share a little bit about the Web Foundation's work on online gender-based violence and tell you why we're so thrilled to be convening this discussion today.  The Web Foundation takes three approaches to its work on online violence.  The first approach is research.  Our research team conducts a broad range of research on women's access and experiences online.  A lot of this is led by Chenai, and we have a report coming out next year that will look at online gender-based violence; we also recently released a report covering issues of access, privacy and data rights.

Our second approach is amplifying the work of grassroots women's rights and digital rights initiatives through our network.  The network aims to drive women's empowerment.  It comprises women's rights and digital rights groups from across 14 countries in Africa, Asia and Latin America, working to bridge the gender gap in technology, data and policy making.

Our third approach is co-creating product and policy solutions.  We're running a series of consultations that bring together tech companies and CSOs to hold dialogues on countering online gender-based violence.  The Web Foundation will host several design workshops to provide space for civil society organizations to co-create solutions using a human-centered approach.  I'm about to put a link in the chat to a document you can follow along with for the rest of the session.  In that document you'll find links where you can learn more about the foundation as well as about our other co‑organizers.  With that, I'm going to pass it over to my co‑organizer, Nandini.

>> NANDINI CHAMI: Hi.  Thank you, Eve.  Hello, everyone; it's nice to see you all here today to discuss this very urgent and burning issue.  An issue that is as bothersome and immediate as the U.S. election, I would say.

To begin with, IT for Change is an organization that works on research and policy advocacy at the intersections of technology, social justice, and gender equality.  We have been working with the Web Foundation's Women's Rights Online Network for a long time, and we have also been working with U.N. Women for quite some time.  It's a pleasure to co‑organize this with the Web Foundation and U.N. Women.

I would like to take a couple of minutes to frame the reason why we wanted to have this conversation.  Let's start with today's context.  On this election day, it is important to remember that just about a month ago, on October 2, Donald Trump contracted COVID‑19.  We all remember that, right?  And within 24 hours, Twitter had announced, or rather reminded everybody, that it would deplatform tweets that wish or hope for death, serious bodily harm, or fatal disease against anyone.  Of course, wishing bodily harm or death on anyone is problematic, and it is good that Twitter reiterated this policy in that context.

At the same time, a question arose in many people's minds, especially in feminist organizations like ours working in the Global South, and for most of us here.  There are countless times when women, people of color, and nonbinary people face rape threats, death threats, and bodily harm on social media platforms every day.  And we know how we have to run from pillar to post to get access to justice, right?

Why is it that platform accountability is talked about only in certain moments?  This is a question that bothers us time and again.  And when you look at what states are doing, all is not well either.  In the country I come from, India, you may have heard of a state that is known to be very progressive and is famed for its gender and development model, while at the same time being highly gender conservative.  This month, it announced it would pass an ordinance ‑‑ which is not a law, but has a binding effect like a law ‑‑ to deal with what it calls cyberbullying by making it an offense.  This means that whether or not a person is complaining, if the authorities find that someone has threatened harm or insulted your reputation, that person can be picked up and put in jail for cyberbullying.  We all know where this goes, because we have seen these instances time and time again.

Every time we have the conversation on cyberviolence, we keep getting stuck on platform accountability and its difficulties.  I'm also cognizant and respectful of the fact that when we talk about social media governance, the sheer volume of content social media companies have to process poses a very real challenge, because they have effectively become the public sphere.  And when you look at national laws and nation states, how do you talk about free speech and ensure freedom from harm for women without falling into the trap of state censorship?

So I would like to really urge us all, in these reflections, to consider that maybe it is time to think of a new global constitutionalism for social media, so that we can move beyond both the self‑governance of platforms ‑‑ which has its own limitations, because platforms interpret certain standards and those interpretations will be challenged ‑‑ and complete reliance on a national, state-level response, because as feminists we know that states uphold the gendered social contract.  From the experience of our feminist foremothers, it is time to think about what it would mean, in the data age, to reclaim women's rights again, and how to reground that in the context of reclaiming the cyber sphere and abolishing sexism and misogyny online.  I hope to learn from your experiences today, as I am sure does everyone at the table.  I look forward to an extremely insightful conversation.  Thank you so much.

>> EVE KRAICER: Thank you, Nandini.  Helene?

>> HELENE MOLINIER: My first words are going to be words of thanks for championing this.  I would like to build on what Nandini has just said.  Looking back 25 years ago, we were at the meeting in Beijing, and at that time we had 50,000 people there.  I don't think we'll ever gather that many people again.

There we were making commitments on gender equality.  And at that time the internet was almost absent from the discussions.  There was no social media.  Google hadn't been created.  Fast forward to today, and we are in a space where the U.N. Secretary-General is saying that technology and innovation are one of the five priorities for gender equality, and that if we achieve results in this area, we're going to transform the world.

So today our societies are shaped by technology, and the internet is shaping gender relations across the world, much more than we think.  Back in Beijing, we thought the internet would give every woman and girl the opportunity to explore new forms of self and have more autonomy.  Instead, when they use the internet, it is reinforcing sexist stereotypes.  And we see that happening all over the planet.  It's a common phenomenon.

This is the responsibility of everyone: government, businesses, civil society.  And this is why we want to champion multistakeholder initiatives to forge new partnerships on these issues.  The first is one that many of you know: the Secretary-General's digital cooperation road map.  It's really a call for action to ensure that digital inclusion is a reality for all, especially for the most vulnerable.

And the second is the Generation Equality Forum that will take place next year in Mexico and Paris.  The objective is to address the gender digital divide, but also to increase the safety of digital spaces.  We believe these two new partnerships and gatherings will be the occasion to discuss how we can change trends, how we can redesign our society, and how we can chart an agenda for concrete action.  I look forward to our discussion today, and thank you all for being here.

>> EVE KRAICER: Thank you both for framing and introducing the topic.  I'm going to turn it over to my colleague, Chenai Chair, who will be the moderator for the panel.  Chenai?

>> MODERATOR: Thank you so much, Eve.  To start off, I just want to say thank you to Helene and Nandini for organizing this panel, a very needed conversation.  Welcome to everyone on the call.  If you have any questions, we'll save them till the end; you can post them in the chat as we move on, and we'll be monitoring the chat.

I am joined by Bhavna Jha, a senior research associate at IT for Change; Cindy Southworth from Facebook; Helene Molinier from U.N. Women; Marwa Azelmat, policy coordinator at APC; and Mariana Valente, internet lab director.  I'm very excited to have these amazing panelists.

To start off, I'll begin with Bhavna.  As you introduce yourself and tell us a little bit more about your work, I would like you to respond to this question: in what ways can feminist framings help us understand why the online space is unsafe, and how can these framings be used in developing policy and regulation?  Over to you, Bhavna.

>> BHAVNA JHA: Thank you, Chenai.  Hi, everybody.  I'm Bhavna, a lawyer and researcher at IT for Change.  I look at the intersection of technology, law, policy and gender.  I'll dive straight into this really thought-provoking question you've presented, which is very close to the work we've been moving towards at IT for Change recently.

So I'll step back a little bit ‑‑ not too far, just six months, to May 2020, during the lockdowns.  We did some Google Trends analysis after an incident in India where an Instagram group found itself in hot water for objectifying women's bodies.  We saw a 5,000% increase in searches for "feminist" and "feminazi."  It was apparent that misogyny, while always lurking in online spaces, shows itself in amplified anti-feminist backlash.

Platform intermediaries effectively adjudicate what are human rights questions, and there is a corresponding absence of any liability for them when user-reported sexist hate and violence is not taken down.  Alongside this censorship of women's voices, the situation is exacerbated by the strange fact that in India there are many laws against sexual harassment that would broadly cover what is happening in cyberspace, but they are defined as applying to public places.  Misogyny, however, also manifests in digital spaces treated as private, and the courts seem unable to work out how to protect victims of harassment in such digital spaces.

A feminist framework firstly recognizes that it is not enough to respond to violence against a woman at an individual level; it sees it as a structural problem.  Secondly, it informs policy initiatives by recognizing that the public/private distinction is not only meaningless but harmful, and that existing legal interpretations of "public place" no longer hold true in a digital world.

A feminist review of laws must open its eyes to the reality of spatial fluidity and mutability in a post digital society.  Thank you.

>> MODERATOR: Thank you so much, Bhavna.  I was nodding my head at everything you said about how we look at these spaces from a feminist lens and what the trends mean when you take on that feminist perspective.  We'll now move on to Cindy, who is representing a platform as head of women's safety at Facebook, and who will also be reflecting on these remarks.

Cindy, my question to you: can you give an example of a time when a CSO ‑‑ a civil society organization ‑‑ and Facebook have worked together to address a particular aspect of online gender-based violence?  What was the process, what were the outputs, and how can Facebook do this more widely with civil society organizations, especially from the Global South?  Coming from that collaborative perspective of how we work together with these platforms.

>> CINDY SOUTHWORTH: Thank you so much, Chenai.  I am really honored to join you all today.  I joined Facebook in July of this year, 2020, as their new head of women's safety.  Prior to that, I worked in the gender-based violence nonprofit sector for the past 27 years.

I'll start by sharing an example that I know intimately well because it involved me in my previous life at an NGO, and then I'll talk about more recent work we've been doing in all parts of the globe.  At the National Network to End Domestic Violence, I founded the Safety Net Project, and one of the things we brought to Facebook when we joined their safety advisory board was this: if technology is meant to be this great equalizer and an opportunity for education, empowerment and economic opportunity, then survivors of gender-based violence, specifically domestic violence, need to be able to safely use Facebook and other technology platforms.  We know that abusers use spyware to monitor, surveil and control, and use technology to harass, threaten, et cetera.

So back in 2014, we put together a very lengthy survivor's guide to using Facebook.  We walked through all the different privacy settings and things they might want to do, because there's a myth out there that if you're being harassed or threatened, you should just block the person.  That doesn't work if you need to keep an eye on your ex to see if they're escalating.  Seeing what your ex is saying about you is a way to assess your own safety risks.  Blocking is not always the best feature for a victim of domestic violence.

We put together this comprehensive guide.  It was not only translated into other languages, but we ‑‑ Facebook; I have to figure out which "we" I am today.  In my past life, "we" was the nonprofit, and now ‑‑ so Facebook worked with many CSOs and NGOs around the world to adapt the women's safety content and contextualize it to make sure it was culturally relevant.  Even though the features Facebook offers are the same everywhere, we wanted to make sure the women's safety content was adapted for Finland and India and all parts of the world so it would be relevant.

That's one example.  Fast forward to a more recent example with the same concept: Facebook partnered with an NGO in Palestine, and they put together a cool video you can see on the Facebook safety center.  They walk through all the different ways for women to use Facebook's features and to know how to report if there is harassment, if they experience hate, or if they bump into impersonating accounts.  It's under Facebook.com/safetyresources; if you change your country to Palestine, it's in Arabic, but it's easy to understand.  They're walking women through how to use these tools, while we know misogyny, offline and online, is still something women face worldwide.  I could go on and on, but I want to make sure there's time for all of the presenters to speak, so I'll save some of my other comments for the Q&A.

>> MODERATOR: Thank you so much, Cindy.  I understand the navigating between who is "we."  That was really insightful.  I'm going to piggyback on that question and ask Marwa this: in your work as policy advocacy coordinator with one of the biggest civil society organizations, APC, working with diverse stakeholders, what are some of the examples or strategies you've seen emerge from civil society for ensuring women's safety online?

>> MARWA AZELMAT: Thank you, Chenai.  I'm also very honored to be here today on this all-women panel.  Just a bit of background for everyone: Take Back the Tech is a campaign to take back ownership of technology and dismantle online gender-based violence.  At Take Back the Tech, we strongly believe that technology should be reclaimed for pleasure and consent, to counter online harm and the kill‑joy narrative that casts women as victims of their virtual experiences while normalizing the way technology has always been designed, governed and deployed.

Something we've realized throughout all these years at Take Back the Tech ‑‑ it was really a breakthrough for us ‑‑ is that cyberviolence campaigns, specifically those against women, have a common strategy.  They often deploy personal attacks with political ties.  So you find that violence online is mostly a threat against freedom of expression: bodily expression, identity expression, sexual expression.  When highly emotive and value-laden content is deployed against women and people of diverse genders and sexualities for speaking their minds and exercising their right to expression online, we are entering a phase where we should be talking as much about how to counter that speech online as about how to counter violence.

Because in the end, violence is weaponized to target that freedom of expression.  We've seen that specifically throughout the Arab uprisings, where civil society used the online space to create feminist networks and to organize both offline and online to resist oppression and demand gender equality across all spheres of society.

For instance, in Tunisia, civil society has hosted and emancipated cyber activism in online communities, owing to the country's supposedly progressive status on gender equality.  We've seen how this influenced the ways women reverberated feminist discourses: sometimes keeping a low profile, other times publicly calling out harassment and violence exercised by anti‑gender movement members to silence feminist dissent.  You would find them blogging and tweeting their experiences, spanning digital platforms with multiple identities, languages and cultural affiliations, and articulating revolutionary messages around feminism and conservatism.  That was truly revolutionary to witness.

Another civil society milestone was in Sudan and South Sudan.  We've witnessed how civil society there emerged and organized really well, given the long‑standing misogyny in the country's history.  Women-led civil society groups in particular entered into significant online mobilization, admitting everyone regardless of their political affiliations.  I think right now it's very important to talk about the levels and layers of online behavior, and about how we can position cyberviolence within the context of freedom of expression, freedom of mobilization, and freedom of association, so that we tackle violence not only as harm in itself but also as a weapon used against vulnerable communities trying to speak their minds.  Over to you, Chenai.

>> MODERATOR: Thank you so much.  Those two examples are quite reflective of the current hashtag approaches happening right now, and of how feminists are really organizing within their spaces in the light of the misogyny that exists in these countries.

Moving over to Mariana.  My question to you: how do we build evidence around online safety issues in a way that allows those who actually experience these issues to also shape the policy language and the research that's done?  Because when speaking to policy makers, the big question that always comes back is: what's the evidence?  How do we know what we know?

>> MARIANA VALENTE: Thanks, Chenai.  Thank you all for the invitation.  It's a pleasure to be here among you.  I want to start answering this with an example of violence that's not exactly cyberviolence.

I wanted to tell you a bit about the research we did on the use of ICTs by domestic workers.  With 30 domestic workers, we developed questions for a quantitative survey, and then we gathered again to analyze the results.  30% of domestic workers, who in Brazil are mostly women, used the internet to find or publicize their work, and 70% of them felt unsafe doing so.  One could assume this has to do with capacities and literacy.  But this group of women told us it was not safe to find work online because they all had experiences, either personal or of people they knew, of women who were harassed in their workplace or were not paid.  The violence they experienced conditioned their use of the internet.

If we had not been doing this participatory research, we would probably have reached different conclusions.  I'm using that example to point out three things which I think are central to producing evidence on cyberviolence against women, communicating it effectively, and using it for concrete results in women's lives.

The first is that we really need to take into account the multiplicity of women's experiences.  That encompasses intersectionality as a methodology, but in the context of the digital environment, it's essential to highlight that what intersects is different from place to place, right?  I'm glad my colleague Marwa was bringing examples from different parts of the world earlier, because I think it's never enough to emphasize this.

The second thing is multidisciplinarity.  It can sound obvious, but it's not obvious when we're developing research in this landscape that knowledge from areas other than internet policy is essential for the work we're doing.  We should be building bridges and being more transversal in our research.  For example, studies from other fields have a lot to add to discussions on hate speech against women online.  If we only draw on research and references from the internet policy space, I don't think we're going to be able to really shape the policy environment, because we need those bridges to deliver our messages more effectively and to understand the whole landscape.

And the third is that methods must be participatory and inclusive.  Otherwise, as I wanted to show with the example I brought, we can bring assumptions to the table which add little to women's concrete lives, and we'll get results that are less powerful in terms of advocating for policy.  I would have a lot more to say, but this is just a start.  I'm looking forward to our discussion.

>> MODERATOR: Thank you so much, Mariana.  That's a key illustration of the offline/online relationship: it's not a binary but a continuous experience.

So I'll move on to our last panelist.  As we get ready for questions, please prepare to ask them.  Helene, touching on the experience Mariana has talked about, my question to you: it's been 25 years since the Beijing declaration, and the online space has evolved in that period.  How can we use this declaration and others to advocate for safe spaces online, as well as to hold governments accountable for putting the necessary regulations in place?

>> HELENE MOLINIER: Thank you, Chenai.  Even though the context has obviously changed a lot since Beijing, the principles formulated at the time apply offline and online.  The declaration says that human rights and women's rights should be protected, and that applies whether you're walking down the street or you're on the internet.  Some of the participants put very interesting comments in the chat about the continuum of violence online and offline.  It's not two separate things; it's the same thing.

And I think the major shift since Beijing, and what we'll see with the Generation Equality Forum next year, is that it's not going to be only about government.  We need to hold everyone accountable; everyone has a role to play.  We need a strong civil society.  We need voices that address the challenges facing women online.  We need to be vocal about how women are being pushed back in online spaces, especially in developing countries.  And we need to use the forum as an open dialogue with social media companies, just like the one we're having today.

But we really need to bring to the table the companies that are most implicated in this and make sure they are addressing online gender-based violence as a priority.  Social media companies have the data we need to better understand what triggers online attacks, whether on women or on young girls and adolescents, and they have to use that data to figure out how to better protect them.

Obviously, this is also a space where we want government and law enforcement to get better ‑‑ where we need to see concrete actions to prevent and punish violations of rights in online activity.  That obviously requires regulation and accountability.  But overall, what we need to call for is checks and balances for all behaviors, whether we're talking about companies, governments, or an individual posting an unsafe comment online.

>> MODERATOR: Thank you so much, Helene.  It's true: we have to hold everyone accountable, not just governments; this requires everyone to actively engage.  So I'm going to switch my view.  Are there any questions?  I do not want to become an extra panelist, but if there are any questions I can ‑‑ yep, some are coming up right now.

We have a question from Ellen Walker.  Ellen, would you be okay with asking your question live?  Eve or Nandini can enable the mic for you.  Eve, are we okay on that?

>> EVE KRAICER: Ellen, you should be okay to talk now.

>> AUDIENCE: Thank you for organizing this; it's been really informative.  It's such an important topic today, more than anything else, as the internet becomes more and more of a space people occupy.  It's so important that women take up that space equally and have every option open to them to use it for education, for employment, to better their economic situations ‑‑ for everything.  So thank you all so much.

So my question was whether there are any good practices you would like to highlight: good examples where a company has improved its anti‑cyberviolence policy, or good practices of a government adopting better anti‑cyberviolence laws?  That was one question.

My second question is at the individual level.  I heard from the discussion that people have different ideas about how much sense it makes to engage at the individual level rather than at a policy level, and I think that's a very interesting discussion in itself.  When you do talk about the individual level, are there, for example, handouts people can easily share?  Actions to take?  That's the other part.

I want to add that I'm with Rights Tech Women, and we work to combine human rights education with technology education.  We train girls in robotics and programming, but also about their rights at the same time, which is something exciting we're doing and that I wanted to share.  Hopefully we can increase the practice of educating people not only on technology but on the rights attached to technology, and I hope that can also be part of the efforts going forward.  Thank you so much.

>> MODERATOR: Thank you so much, Ellen.  We're going to take a couple of questions, and the panelists can choose which to respond to.  Next we have Daphne Stephens; her question is in the Q&A.  Daphne, you might want to turn on your mic and ask the question.  Then I'll go over to Madeleine, Elizabeth and then Tara.

>> AUDIENCE: Thank you for the session; it's very interesting, and I feel like I've learned a lot from it.  I was wondering what individuals can do to participate more in stopping cyberviolence?  Thank you.

>> MODERATOR: Thank you, Daphne.  That was a very clear, to-the-point question.  I'm going to ask Madeleine to speak.

>> EVE KRAICER:  Last name on that one?

>> MODERATOR: Madeleine Mayat. (?)

>> EVE KRAICER:  You should be able to ask your question now.  Do we have Madeleine?  Maybe not.  Should we read that one out loud then?

>> MODERATOR: So Madeleine's question to the panelists was: from your experience, is there one national strategy that is an example of a multi ‑‑ approach?  Maybe we can answer these questions and then have another round from the Q&A.  Panelists, you can pick whichever question you'd like to respond to.  Anyone?

>> BHAVNA JHA: I'm not going to directly answer the question on best practices.  I think we've built up some amount of frustration about how much time it takes for a response to actually happen.  For instance, since we're talking about intersectionality: it took several years of campaigning by groups like Equality Labs to get caste added as a factor within Facebook's India policy ‑‑ three years of campaigning to make that happen.  And there's a strange phenomenon here: even though the Hindi translation of the word "caste" was already part of the Hindi version of the policy, it wasn't present in the English version and therefore didn't appear as an option for reporting on the platform.

So the fact that platforms are responding is great, but we'd really like them to respond faster and to step up their game, especially in the Global South, where a large chunk of the user base exists.  I think there's a strong need for greater accountability and responsibility towards users in these areas.  That's what I have to say about best practices in terms of content moderation responses.

As far as ‑‑ yeah, I think that's the only question I want to respond to.  Would someone else like to join in on the other questions?

>> MARIANA VALENTE: Many people are muted now.

>> MARWA AZELMAT: I just want to jump in on that specific question around content moderation.  I don't really have a best practice right now, but I want to point out a few important dimensions that are real and often overlooked.  First, questions around content moderation should be tackled in a way that is multifaceted, because violence takes so many forms.  But they should also be contextualized; it's very context specific.

We find that these responses often entail so much bias towards communities in different contexts.  At the very same time, we need to mobilize local expertise when it comes to cyber policies.  The best practice would be to hear from those lived experiences and learn from them, so we understand what can fit and what cannot be done, basically.  Over to you.

>> MARIANA VALENTE: There's an interesting conversation going on in the chat in terms of examples.  I'd just like to quickly address the question on what individuals can do and how individuals can engage.  I think there's a lot of discussion right now on the counter speech strategy, right?

I think there's something that can be done on this very individual level, and that's the space for counter speech, right?  I mean, there are limits to it, of course, and I think the panel has been addressing that somehow, that private policies and public policies are really important to address it.  But there is value, great value, in counter speech, and online spaces are really important for women.

Anyone who is involved in feminist activism understands, I think, the role of being able to be online, being able to campaign online.  So I really think that the connection that Marwa was also making between freedoms and cyberviolence, and how cyberviolence affects freedoms, is really, really important for this discussion.  And I think joining campaigns and driving discussion online is perhaps what's most within reach at the individual level.

>> MODERATOR: Thanks so much, Mariana.  We have about 15 minutes left.  Helene, I can see your mic is off, do you want to respond, or should we take the last two questions and then we can also embed the response in the closing remarks?  Okay, great.

So we have two questions in the Q&A.  We have a question from Elizabeth Sutterland.  Eve, do you want to unmute and she can ask her question?

>> EVE KRAICER:  Elizabeth, I'm not seeing you on.  Give me one second to try to pull you up.  Okay.  Elizabeth, can you say hello if you're there?

>> AUDIENCE:  I think the unmute button worked.  Thank you all for putting together this panel.  It's been informative.  Thank you.  I had a question on the Facebook front.  You know, it's really great that we're connecting women with these resources to report their harassment and to find the proper resources after experiencing it.  But given that that process can still be so traumatic, I'm really curious what kind of efforts these companies are making before women get harassed, to prevent that from occurring and to make these spaces safer before the harm is done?

>> MODERATOR: The last question from Tara?

>> EVE KRAICER:  Can you say hello?

>> AUDIENCE:  Can you hear me?  I want to echo what some of the other attendees have said.  I'm appreciative of everyone's time in putting this together.  It's an important conversation.  Thank you all.

My question relates to the fact that we're focusing a lot ‑‑ and obviously rightfully so ‑‑ on the content moderation piece.  But I'm curious, working for a tech company myself, what recommendations would the panelists have with regard to other aspects of technology that can be addressed in this conversation?

You know, one of the questions that my company and others are struggling with is just how do we make sure that the products we're developing are attuned to these issues, making sure that they cannot be abused in a way that perpetuates this online gender based violence.  I'm curious if there are best practices or recommendations you might have for the private sector that may not be in the content moderation space.  Thank you.

>> MODERATOR: Thank you so much.  We have about 12 minutes left and I would invite my panelists to take this opportunity to respond if they have anything else to add.  I think, Cindy, there were some direct questions.  Helene, you had something you wanted to say.  Please also use this opportunity to offer some closing remarks as we wind down this panel.  I'll start with you, Cindy.

>> CINDY SOUTHWORTH: To Elizabeth's question about what we could do to make sure that abuse is not even occurring on platforms, so we don't have to worry about taking it down: there are a couple of things.  One of the things that I'm excited about is that we're using more and more machine learning and artificial intelligence to try to identify potentially harmful content.  One of the ways we do it is when people are typing a caption or a comment, they will get a pop-up that says what you're about to post has been reported by other users as violating our community standards, do you still want to post it?  It works; a surprising number of people rethink what they're getting ready to post when that is brought to their attention.  The whole point here is that we're trying to change culture so it is not even fathomable that you would say something harassing, threatening and misogynistic, offline or online.  We want to change that entire mindset.
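The pre-post warning described here amounts to a classifier score plus a threshold.  The following is a purely illustrative sketch: the platform's actual model and thresholds are not public, and `score_toxicity` below is a toy keyword heuristic standing in for a trained model; the 0.8 cutoff is likewise a hypothetical value.

```python
def score_toxicity(text: str) -> float:
    """Hypothetical harm score in [0, 1].

    A trivial keyword heuristic stands in for a real trained
    classifier, which would be far more nuanced.
    """
    flagged = {"hate", "threat", "harass"}
    words = text.lower().split()
    hits = sum(1 for w in words if w in flagged)
    return min(1.0, hits / max(len(words), 1) * 5)


def pre_post_check(text: str, threshold: float = 0.8) -> str:
    """Return a friction prompt instead of posting when the score is high."""
    if score_toxicity(text) >= threshold:
        return ("What you're about to post resembles content other users "
                "have reported as violating our community standards. "
                "Do you still want to post it?")
    return "POSTED"
```

The design point is friction, not censorship: a high-scoring post is not blocked, the user is simply asked to reconsider before it goes out.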

One of the other things we're doing, especially with some of the coordinated attacks and the cyber mobbing, as a couple of people have mentioned, is using technology to look for the spikes and immediately investigate, so that when something is starting to go viral or it looks like there's a pile-on attack, we get human intervention.  Technology can identify it and flag it for teams to go in and look at it, so it isn't up to one individual victim to have to say, look, I'm being attacked by a hundred-plus people.  Ideally, we want to be able to catch that quickly and shut it down.
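Spike detection of this kind can be sketched as a trailing-average anomaly check over mention counts per time bucket.  This is an illustrative sketch only: the `window`, `factor` and `min_volume` parameters are assumptions for the example, not any platform's actual values.

```python
from collections import deque


def detect_spike(counts, window=5, factor=3.0, min_volume=10):
    """Flag time buckets where mention volume jumps well above the
    trailing average, so a human team can be alerted to a possible
    pile-on attack.  All thresholds here are illustrative.
    """
    recent = deque(maxlen=window)  # rolling window of prior buckets
    alerts = []
    for t, count in enumerate(counts):
        if recent:
            baseline = sum(recent) / len(recent)
            # Require both absolute volume and a relative jump,
            # so quiet accounts don't trigger on tiny fluctuations.
            if count >= min_volume and count > factor * baseline:
                alerts.append(t)  # escalate bucket t for human review
        recent.append(count)
    return alerts
```

The key design choice mirrored from the discussion is that the system only escalates for human review; it does not act on the content automatically.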

One of the ways we train that machine learning and artificial intelligence is through people reporting things.  So please, if you see harassing content, even if it's not directed towards you, report it on any platform.  It feeds into the artificial intelligence and will help our communities get better, because context matters.  If we don't have that contextual piece, it's hard to take action on it.  And I'll pass to whoever wants to go next.

>> HELENE MOLINIER: Thank you.  I'll go next, and I'll address this question and the one before.  I think it was Tara's, and it's a very good question.  We have worked with a couple of companies to try to see how we can integrate gender in the design phase, in the pilot phase when a new platform is launched, and afterwards.

And it's complex.  There's no silver bullet, I think, no technology that is safe from the get-go.  But I think what's important is, at least during the development phase, to try to take as many steps as possible: try to have diverse teams, and test your technology on a diverse panel, on people coming from different countries, different religions, different contexts.  I think it's very important to look at intersectionality, because it's bad enough to be a woman online, but if you're a woman of color or if you're LGBTQ, you face much more aggressive behavior.  This is something to take into account.

One thing I also say a lot to people: you have to educate your team, especially your tech team, about gender and about social science.  It's not the job of the one woman or the two women on the team to tell their colleagues what it is to be a woman or what gender based violence is.  It's something that everyone has to be aware of.  We all come with our own biases.  I think it's important that you address it not as a women's issue but as something that the whole team is responsible for.

I would just also say a couple of words on the present discussion.  I think it's really important, when we talk about regulation, legislation and policy, that we focus on prevention and on the legal aspects and punishment together.  It's not one or the other.  I think it's important to have an ecosystem that is inclusive, so that when you look at policies, you also look at what you can do for prevention, for changing social norms, for trying to educate bystanders, because that's as important as looking at the policy.  And for policy, please keep in mind that you have to contextualize.  We all have different privacy law situations, we all have different histories.  It's important that you contextualize to see what works with your authorities and what doesn't.  Thank you, it was fantastic to be with this amazing group of women.  Thanks.

>> MODERATOR: Bhavna, Mariana, closing remarks?

>> MARWA AZELMAT: I can go ahead.  Perhaps I would just like to briefly mention the issue of gendered disinformation, alongside online harassment, biased AI and other issues.

So from this remark, we can conclude that it's about cultivating that herd immunity online.  We really need it.  In this process, all stakeholders should work together.  Even if we build the awareness of the tech teams or the corporations or whoever, we still need to leverage that together.  Because even if we work on the design, we would fail when it comes to moderating online behavior; we would fail to fight this self‑censorship online.  I believe this sort of herd immunity is very much needed.  And to achieve it, we need to bridge the conversation, just like we did today.  These discussions are really rare, but they're really needed, yeah.  Thank you.

>> BHAVNA JHA: Thank you.  I'll take off from your last point about it being absolutely necessary for us to have these conversations.  I'll start by responding to Tara's question about other aspects of technology, broadly the question of what sort of technical design measures tech companies can take.

I think it is absolutely vital for companies to do algorithmic audits before they launch into new areas.  These need to be sensitive to intersectionalities in the cultures that they're going to be unleashing this technology into.  Not doing this has had some devastating effects across nations, across the world, has really hurt the social fabric of several societies and has led to, you know, death counts that can be directly chalked up to these lapses.

My second point is that, yes, content moderation gets talked about a lot, but not enough.  And it is very important for us to move beyond just speaking about voluntary agreements that tech companies can sort of sign off on and move closer to something binding, because we've really been able to see how NetzDG has improved platforms' responsiveness and their transparency in reporting on content in Germany.  We'd like to see that across the world.

And, finally, we really, really need a global consensus around online civility, to echo what Nandini started off with.  We need to bring together all of us and everybody who recognizes the necessity of this conversation until we have been able to create this consensus, and to recognize that there's not only a need to redesign, but a need to transcend conversations about the safety of online spaces and make sure that online spaces are spaces of what Marwa called pleasure and consent.  Two words to which, you know, I figuratively raise my hat.

>> MARIANA VALENTE: I agree we should speak more about moderation.  Just as a last remark, a closing remark: in that conversation of, let's say, policy versus individual action, I think one thing that's important to add is that even when policy is at stake, culture drives enforcement.  We've seen a lot of laws not being enforced, or not being properly enforced, because of how law enforcement agents see these issues, and there's a case in Brazil that's huge right now that speaks directly to that.  So I think all these things are more connected than it may sound from some conversations that draw a very strong distinction.  Yeah.  It's been a pleasure, thank you.

>> MODERATOR: Thank you to everyone, to our panelists, for such an amazing conversation.  I think we've all taken away some key points on how we can respond and how we can build on all the work that we're doing.  You can find our panelists on different social media platforms and you can reach out to them.  And thank you to our fantastic organizers and to our audience who have been part of this conversation.  We do hope to continue to engage with you on this work of trying to make the internet a safe space for everyone, and in particular women and girls.  Thanks, everyone.  Have a good day, evening, night, wherever you are.

 
