

IGF 2016 - Day 2 - Room 5 - WS146: Honey, You Are So Not In Control Decrypting Sextortion

The following are the outputs of the real-time captioning taken during the Eleventh Annual Meeting of the Internet Governance Forum (IGF) in Jalisco, Mexico, from 5 to 9 December 2016. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

RAW COPY

2016 INTERNET GOVERNANCE FORUM

WS146:  HONEY, YOU ARE SO NOT IN CONTROL ‑ DECRYPTING SEXTORTION

JALISCO, MEXICO

ROOM 5 

7 DECEMBER 2016

12:30‑1:35 P.M.

>> Welcome to the workshop.  We would like to thank the Internet Governance Forum and the Women in Cybersecurity Foundation in the Netherlands, which have organised this truly international and multistakeholder workshop.  Now, as panel moderator of this workshop, allow me to introduce myself and briefly present the subject, followed by the presentation of each of our panelists.  Afterwards we will have a social engineering demonstration exercise and, finally, we'll start with the panel answering the questions that we bring to the table today in relation to sextortion.

So, my name is Catherine Garcia; I work on Digital Governance and Cybersecurity Law at the Centre on Cybersecurity at The Hague University of Applied Sciences, and I am also a member of the Women in Cybersecurity Foundation in the Netherlands.  Online sextortion is based on nonconsensual pornography and refers to sexually explicit videos and images disclosed without consent and for illegitimate purposes.

It involves footage obtained by hidden cameras, stolen or leaked photos and recordings, social media manipulation, blackmail through private communications, computer hacking, and the use of malware.  It is a violation of privacy and a form of often gendered and intersectional sexual abuse, grounded in negative perceptions of nudity or displays of sexual conduct.  The current architecture of the Internet carries vulnerabilities that lead to scalability, replicability and searchability of private information.  Although existing privacy mechanisms have been developed and improved over the years, they are still not helping users distinguish self-disclosure behavior that might put them at risk.

In the confidentiality paradigm, privacy is understood as the right to be let alone, aiming to create an autonomous sphere for individuals, free from intrusions.  This confidentiality paradigm places a strong focus on security, but not much attention on enabling transparency and online identity construction.  This has already been raised in several workshops in the course of this week here at the Internet Governance Forum.  Overall, privacy contributes in large part to the construction of one's identity, at both the individual and the collective level; this is the notion of privacy underlying self-disclosure activities.  Therefore, we believe technologies located in the practice paradigm share such an understanding of privacy and pursue making information flows more transparent through feedback-enhanced awareness.

Considering the Internet as a forum for public discourse, it is clearly undisputed that cyber harassment such as sextortion interferes with expression, even if it is perpetrated via expression.  Given that it is profoundly damaging to free speech and privacy rights, sextortion is a growing concern and needs coordinated multistakeholder efforts to bring about a greater level of Internet safety.
Consequently, in this workshop WS146 on sextortion we aim to explore, on the one hand, a technical and behavioral approach, which is the first track we have for you today, for our panelists and for you to comment and react on.  And on the other hand, adaptable legal and policy approaches and good practices against sextortion that can be harnessed to keep up with technological advances.

This will be addressed through the social engineering exercise that we have prepared and thereafter through our panel discussion.  Thank you.

Now we will pursue to introduce ourselves.  I already have done it so I will pass the floor to my colleagues, members of the panel please introduce yourselves.

>> ALEJANDRA CANTÓN MORENO:  I'm a security director in Mexico, a systems engineer, and an expert in end-user behavior and awareness programmes.  Thank you.  Alejandra Cantón Moreno.

>> Hi, I'm Su Sonia Hering.

>> I work at Google here in the Mexico office.  Oh my bad.

>> Hi, good afternoon, everyone.  My name is Hanane Boujemi, Internet governance programme manager with Hivos.

>> My name is Nicholas.  I'm a member of the research training group on user-centred social media at the University of Pittsburgh, and my research topic is self-disclosure and privacy in social media.

>> Hi, good morning.  My name is Jamila, a researcher in Rio.

>> Hello, good morning.  My name is Arda Gerkens, from the Dutch parliament and President of InHope, the international network of Internet hotlines.

>> All right.  We will start with Alejandra, who will explain a little social engineering exercise she has conducted over the course of the last two days.

>> ALEJANDRA CANTÓN MORENO:  Hello, good afternoon, everyone.  I want to ask who we are, what we are, or better, what we are on the Internet.  The answer: we are dozens, hundreds, thousands of records regarding our banking information, our health, our work, our friends, our vacations, our networks, and maybe our sexual activity, relations and even our preferences.  It's a lot of information that needs to be protected, and it needs to be protected because it's valuable.  It's valuable in money and it's valuable in power, and not only for us but also for a lot of people, for good or for worse.

There is a lot of risk around information, and social engineering is one of those risks, maybe the most important.  Sometimes intruders get their chances when there are genuine gaps in the security; well, they use them.  But more often, we can guess, they get through because of human behavior, such as the natural human inclination to be helpful or liked, or because people are reckless about the consequences or careless with their information.  I know that most of you know about social engineering.  After all, we are experts in so many related subjects, right?

Well, during the past two days, we did an exercise with some of the participants of the Forum, 53 to be exact.  Some of us walked around the workshops and the different rooms without any badge, we took these off, and without any IDs.  We asked some of the attendees to register on some regular white sheets with printed columns asking for their names, their employers, where they work, and their email address.  46 didn't ask anything and gave us their information.  6 asked a little bit more but, after some vague and useless answers, they gave us their information.  And just 1 said: no way, I don't want to give my information, I already gave it in the registration area.  Imagine.  Almost 98 percent of the people gave us their information.
The conclusion of this is that we are so expert and so protective, with antivirus, firewalls and complex passwords, that we forget the basics: do not trust people you do not know.  So if this happens to us, imagine what happens to teenagers and non-tech people.

It's important to tell you that all the information that we collected was used just for this workshop and just for statistical purposes.  It has already been destroyed, along with all the email addresses.  So if one of you gave us your information, please know that your information is safe.  Well, back to our business.  Social engineering gains access to any system despite the layers of security that we have implemented, whether in hardware or software.  The ultimate security wall is the human being, and if that person is tricked, the gates are wide open and the attacker takes control.  And when the attacker gets control, some of the things that he or she can do are: get access to your local information, access to your cloud information, access to send emails, access to your social networks' information, and even post in your social networks.

So your personal information could allow criminals to open bank accounts, to get credit cards or passports, to launder money, to commit fraud, or even to sextort you, because they have your personal information.  And that could be, in many ways, even worse than losing all the money in your bank accounts.  So, unfortunately, there are many social engineering cases.  The only thing we can do is promote awareness.  Thank you.

>> MODERATOR:  Now we will start with the panel discussion.  The first is the technical and behavioral track.  And for this we have a specific number of questions that will be addressed in the first place by Nicholas.  He already introduced himself, but a little reminder: Nicholas Diaz, PhD fellow at the user centre ‑‑ University of ‑‑ in Germany.

>> Nicholas:  Thank you, Catherine, for the invitation; I'm very happy to be here.  So basically my research is on online self-disclosure, and also on building a connection between what online self-disclosure means and how it can relate to sextortion.  Basically, on social network sites people share diverse content and information in order to get some benefit, like maintaining friendships, blogging, sharing photos, music, articles, et cetera.

So the process of making the self known to others was defined in 1958 by ‑‑ as self-disclosure.  This is basically in an offline context, like we are in now.  In the online context, that would be redefined as online self-disclosure.

So there are benefits to self-disclosing.  We maintain friendships; it's a key way of building social relationships.  When I'm talking about my interests, what I like and what I do not like, I can build an emotional tie with another person.  However, when we do it in social media, we forget that these places are not free of threats: thieves, et cetera.  Unfortunately, engaging in self-disclosure activities can bring negative consequences for users that go beyond a problem of image.

In the case of sextortion, perpetrators who get access to the profile of the victims can get their address and phone number; they can learn about their interests, habits and workplace; and therefore they can build a social profile of their victims and start seducing and engaging with them.  Unfortunately, these consequences are not part of the users' privacy concerns until they live them in the flesh.  So why do we disclose more in an online context than in an offline context?

Basically, I know how many people are in this room, well, approximately.  So I can regulate my speech, I can self-censor, and I can follow basic rules of information sharing.  Unfortunately, these rules and ways of behavior are not the same in the online world.  So we talk about our personal life; we talk about, for example, what I had for breakfast in the morning, and that I had a party yesterday and got drunk, and this information is often very sensitive and very private.  The thing is that we, as users, are emotionally detached from our private data when ‑‑ support.

For example, if I lose my passport, my reaction will be visceral.  I'm losing something that's very important for me.  However, like Alejandra just showed in her experiment, we are very happy and willing to give away our data constantly over and over again.  And that's actually because computers are social actors and therefore they moderate the way we perceive our own privacy and the way we perceive our private data online.

So we, as designers of interfaces and hardware, are responsible for the way people understand their privacy and live their everyday online privacy.

We believe raising awareness is very important for users to know how important their data is to them.  There have been a lot of approaches trying to do this.  For example, some researchers at Carnegie Mellon tried to do privacy nudges: they applied sentiment analysis to users' posts on Facebook, warning them if a post could be understood in a negative or positive way.  For example, if the system said, okay, your post could be understood in a negative way, the users had a chance to hold it and not post it in the end.
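The nudge mechanism described above can be sketched as follows.  This is a toy illustration only: the tiny keyword lexicon and the decision rule are assumptions standing in for real sentiment analysis, not the actual Carnegie Mellon system.

```python
# Toy privacy "nudge": flags a draft post that might be read negatively
# before it is published, giving the author a chance to hold it back.
# NEGATIVE_WORDS is a hypothetical stand-in for a sentiment classifier.

NEGATIVE_WORDS = {"drunk", "hate", "fired", "stupid", "party"}

def nudge(draft: str) -> str:
    """Warn if the draft might be understood negatively, else clear it."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    hits = words & NEGATIVE_WORDS
    if hits:
        return (f"Warning: this post could be understood in a negative way "
                f"({', '.join(sorted(hits))}). Post anyway?")
    return "OK to post."

print(nudge("Had a party yesterday and got drunk!"))   # triggers a warning
print(nudge("Lovely morning at the IGF workshop."))    # passes through
```

The key design point is that the warning is advisory, not censoring: the final decision to post stays with the user.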

Many of these approaches, although very interesting, rely on very static mechanisms for awareness.  The thing is, each one of us is different; each one of us understands privacy in a particular way, and each one of us has different privacy needs and concerns.  So in order for these approaches to succeed in engaging the users, they need to be adaptive.  And we believe that if we consider users as individuals who could significantly benefit from learning about online privacy, that is, taking a more pedagogical approach and considering the user as a student of privacy literacy, we can get closer to raising awareness in the right way.

So we have defined an instructional awareness system whose aim is to provide personalised feedback messages when detecting user disclosures, and consequently we have designed a software architecture describing the components of such an instructional awareness system.  As I said before, this is a pedagogical approach.  It resembles intelligent tutoring systems, which have been used before; it's not something new, but they have been used for academic purposes in academic environments.  These systems provide a learning environment to the student and try to give him feedback and assessment while he is learning a particular topic, for example SQL, the language for databases.  In order to generate this, the tutoring system contains a knowledge base divided into, basically, a domain model that stores the main knowledge about the topic being taught, and a student model that keeps track of the user's performance.  So if we think about it in terms of privacy: I can teach privacy to someone, but at the same time, in order to adapt the feedback and give the right guidance, I need to measure in some way the progress of how this person is learning this topic, in this case privacy.
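The tutoring-style loop just described, a domain knowledge base plus a per-user model that adapts the feedback, can be sketched minimally.  The risk categories, messages and the levelling rule below are illustrative assumptions, not the architecture of the actual research system.

```python
# Sketch of an "instructional awareness" loop: a domain model of disclosure
# risks plus a student model tracking what each user has already been taught,
# so that feedback is personalised rather than static.

RISK_KNOWLEDGE = {  # domain model: disclosed attribute -> explanation
    "home_address": "Sharing your home address lets strangers locate you.",
    "phone_number": "A public phone number invites unwanted contact.",
    "workplace": "Your workplace can be used to build a profile of you.",
}

class PrivacyTutor:
    def __init__(self):
        self.user_model = {}  # student model: attribute -> times warned

    def feedback(self, attribute: str) -> str:
        """Return adapted feedback for a detected disclosure."""
        if attribute not in RISK_KNOWLEDGE:
            return "No known risk recorded for this attribute."
        seen = self.user_model.get(attribute, 0)
        self.user_model[attribute] = seen + 1
        if seen == 0:  # first disclosure: give the full explanation
            return RISK_KNOWLEDGE[attribute]
        return f"Reminder #{seen}: you already know the risk of sharing this."

tutor = PrivacyTutor()
print(tutor.feedback("home_address"))  # detailed first-time explanation
print(tutor.feedback("home_address"))  # shorter, adapted reminder
```

The design choice here mirrors intelligent tutoring systems: the domain model is shared, while the student model makes each user's feedback progression different.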

So we think that can be a way of reducing excessive online self-disclosure.  And in my research training group we are actively trying to make this happen.  This is our big goal for the next year and for the years to come.

However, sextortion has other requirements.  As I said before, this work is more about online self-disclosure.  But a very important aim in fighting sextortion is the verification of consent, because basically the perpetrators who disclose these pictures and videos have no concerns about doing it.  And it's a debt in software architecture, and for us as developers, to come up with the right mechanisms for detecting consent.

If someone is about to disclose a picture, we need to double-check, triple-check whether that person has consent over that information.
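A platform-side version of that double-check could look like the following sketch.  The data model (a consent record per media item and person) is a hypothetical illustration, not any platform's real API.

```python
# Minimal consent guard: refuse to publish media unless every person
# depicted has an explicit, recorded consent for that specific item.
# The consent_records store is an assumed, illustrative data model.

consent_records = {  # (media_id, person) -> consent given?
    ("photo-1", "alice"): True,
    ("photo-1", "bob"): False,
}

def may_share(media_id: str, depicted_people: list[str]) -> bool:
    """Allow sharing only if every depicted person has recorded consent.
    Missing records default to False (no consent assumed)."""
    return all(consent_records.get((media_id, p), False)
               for p in depicted_people)

print(may_share("photo-1", ["alice"]))          # True: alice consented
print(may_share("photo-1", ["alice", "bob"]))   # False: bob did not consent
```

Defaulting missing records to "no consent" is the conservative choice the speaker's double-check implies: absence of evidence of consent blocks the disclosure.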

>> CATHERINE GARCIA VAN HOOGSTRATEN:  So now we will break for comments or discussion.  Who would like to start?  First the panel and then the participants, I'm sorry.  Any questions?

>> Hi, can you hear me?  I know we don't have enough time.  My name is Hanane Boujemi, just for the record.  I would like to ponder the question of the terms of use, the existing terms of use that we have at the moment on various platforms.  Not only Facebook, which usually gets the most attention.  Videos can be exchanged on WhatsApp and other services, and of course even Google Plus, maybe.  So I think it would be very interesting to hear from Google, for example, since you're representing that segment of the industry: what kinds of terms of use do you have at the moment to deal with this phenomenon?

>> I'd like to stipulate two things.  First of all, the social engineering, which is really the way sextortion works as we see it at our helpline.  What they will first do, through all the information they get about you on the Internet, is contact you and build up a bond between the two of you.  Then they will suggest that you have a conversation, for instance on Skype, where the actual sexual contact will take place, and the sextortion follows from that.  So it's very good that you just showed us that social engineering is a very important part of this terrible crime, actually.  And I'm really looking forward to the results that Nicholas will bring us.  One of the problems with sextortion is also sending material you don't actually want other people to see.  And what we know is that very young people don't have the ability to think about what they do.  We older people always tell them to think twice, but they don't.  So it would be really good if there were a tool that, when they were about to send something out, said: hey, do you really want this to go online?  Do you know what happens?  And I think it might be useful in sextortion especially; I think there's a great challenge in other fields, because it's very hard to detect what kind of material you are about to send out.  But it's very interesting to see this technology arising.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  So we have a space for a couple of questions.  So please raise your hands.  Yeah.  First here and then.  Yeah.  So first we have our panelists, then you and I think you're up.

>> Okay.  I want to follow Arda's comments by mentioning research done in October 2015 about sexting, comparing youth behavior to adult behavior, and also how preventive measures worked with the behavior of youth: did they stop sexting or didn't they?  So I'm just quoting from the research: letting people know the negative consequences is not very effective as discouragement.  In fact, it's recently been shown to have the forbidden fruit effect.

So focusing campaigns on negative outcomes communicates to youth that sexting itself is wrong.  We can replace the word sexting here with, well, sextortion, or "excessive self-disclosure is wrong", without recognizing the positive aspects of it.  This is likely to be ineffective, because youth would not engage in sexting without any potential benefits, although those may not be obvious to us adults in the first place.  Instead, prevention campaigns should recognize that there are some positive aspects to it and make it clear that what needs to be prevented is only the negative aspects, not sexting itself.

So the reason I'm quoting this is that, as parents or people who interact with younger people, telling someone "no, this is very bad and I ban you, I forbid you from doing this" sometimes has the exact opposite effect.

So I think maybe a more lenient approach instead of telling people no, this is bad, don't do this, can be followed.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  All right.  So now we will give the floor to the first participant.  Do you have a question?

>> Okay.  Thank you.  My name is Marcelo.  I work with security at a university in Brazil.  We work with different kinds of malware, viruses, and similar programmes.

I think the principal problem is how to get the persons, the users, to understand the problems with these technologies.  In the following years we will have different problems with the Internet of Things and other technologies that we will use.  And the principal problem is that users accept things without understanding how to use specific security features.  In this case, I think the problem is Facebook, but there are also the applications on Android, the problem of zero-days, problems in the system, malware, the botnets that try to capture your information.  I think it's a very big problem, not only with sextortion.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  Thank you.  That was exactly what we were talking about: awareness, the importance of awareness programmes, not exactly about behavior but about the use of the technology, and also the online and offline culture, because sometimes you aren't aware that what you are doing online can be harmful in your own life.  So it's basically that: to be aware and to be educated about the technology you are using.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  The second question, please.

>> My question is directed to the second speaker.  A lot of the research that we have done in India shows that young people, people of any age, when they self-disclose, particularly when they are using the Internet for sexual expression, including sexting, have a pretty good idea of what they are doing.  And so I was very curious about the concept of excessive self-disclosure that was up on the chart, because my sense is that people, when they use the Internet for sexual purposes, which is very, very common nowadays, do an assessment of the risks, and that perhaps the solution is actually to use safer sexting practices, et cetera.

So I'm not clear what excessive self‑disclosure means.  Because I'm a little worried, to be honest, because it seems to blame the person for sort of disclosing too much of themselves.

>> Nicholas:  Thank you for your question.  Basically, as I said before, the rules of interaction in the online world differ tremendously from the offline world, and there's a very interesting paper from researchers at Carnegie Mellon.  It's called "I regretted the minute I pressed share."

This is empirical research in which people reported bad experiences online after disclosing personal and sensitive data.

These bad experiences can certainly be mapped to certain attributes that the person had disclosed at that moment in time.  So when talking about excessive online self-disclosure, the point is that when you start revealing more and more and more, the likelihood of some harm increases.  So we have to find a way, that's what my research is about, so I'm spending a lot of time trying to decode it, to find out how to do it: how can we provide the right assessment?

And of course this should not be about censorship of these activities.  It should be in sync with each user's privacy needs and expectations of his or her online experience.

>> I think it's also important to distinguish between the two ways of sharing sexual information on the Internet.  One is with consent; there is a debate on how to do it, or whether you should do it or not, and this could be a tool to help people.  Then you have the non-consensual one, which is where you get into sextortion.  And this, I cannot emphasize it enough, is a very, very serious crime.  At this point we are not even grasping how serious this is.  There are people killing themselves because they are being sextorted.  It is huge.  We have no idea; the things we get at our helpline are only the tip of the iceberg, because most of the people will just pay the money that they are being asked for.

And I think we have to realise that the perpetrators who get people to sextort themselves use what you just called the heat-of-the-moment technique.  It's not something you can really think about.  It happens to you, also because there's a lot of dopamine in your body.  Within minutes it has happened and the video is out there; the perpetrator is sextorting you.  And I think, therefore, we should really look at how we can stop this, because it's, like I said, a really serious crime.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  Next question, yeah, please.

>> Can you hear me?  Beatriz, from Brazil.  I'm an anthropologist and I study online security, especially connected to gender.  Yeah, okay.

I wanted to ask Nicholas, but I think I could ask anyone: how do you see sextortion as being a form of gender violence?  Because, at least in Brazil, statistically it happens to girls and women.  So we're not talking about just any kind of violence or any kind of exposure; the consequences are not the same.  They are very different depending on whether you're a woman or a man.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  I may take this question if any of my colleagues want to also add something shortly, briefly.

I do agree with the fact that it is definitely not only a gender phenomenon but also, and most importantly, as we have discussed within our group, an intersectional one.  We have to be aware that most of the studies usually refer to women, which is true: statistically speaking, women are usually the victims of these situations.  But, in fact, there are also the non-reported cases, which mostly concern members of the LGBTQ community, religious minorities, and so on and so forth.

So there is an intersectional, actually, and not only a gender‑based situation underlying the problem of sextortion.

So I would like to move quickly to the legal and policy track.  And for this part, we have the pleasure of having Jamila of the Center for Technology and Society at FGV Rio, also a member of the Dynamic Coalition on Platform Responsibility, which actually recently, only yesterday, I have been informed, launched a book, which I hope is easy to get access to, basically focused on terms of service.  So here we will have Jamila giving some comments on the findings of this Dynamic Coalition at the IGF.  And then we will open the panel discussion, where you will probably be able to answer Hanane's question.  The floor is yours.

>> Thanks, Catherine.  Thank you for the opportunity to talk about this topic.  So, yes, as Nicholas was saying before, it's interesting to hear this conversation, because I remember when we started interacting with the Internet, the biggest concern we had was with our security.  Okay, thank you.  Do you hear me?  Okay.  At some point it seems like the economy of platforms, or how things on the Internet work, changed in a way that we got used to giving our data to have access to a lot of information that is available online, right?  And we do that especially, or guided somehow, through the Terms of Service.  Despite the fact that sometimes we don't have to explicitly accept those terms, the mere use of some web services implies the acceptance of rules and conditions that will regulate behavior online, right?

So, considering that these Terms of Service are defining how we interact online and how we exercise our Human Rights online, because the Internet is, as we all know, a privileged tool for the exercise of Human Rights, particularly access to information and Freedom of Expression, and they also define the conditions under which we will exercise these rights, in this case affecting the use of our data, for instance, we decided to analyze the terms of use of 50 online platforms.  We only analyzed the Web-based platforms; we didn't analyze mobile platforms.  We wanted to understand how they deal with Human Rights: Freedom of Expression, privacy and due process.

It's interesting that we found there are several rules that apply regarding privacy.  There are a lot of commitments regarding the protection of user data already present.  The privacy policies are detailed, they are long, and they usually give somehow a notion of how this data will be used.  But it's difficult to understand how this may impact users' rights, right?  How the collection of all this data may impact users' rights.  And that's because the language is technical, and not just in a legal way: it's technical language related to computer science, with mentions of several technologies that may affect our privacy and whose real workings we don't know.

So it's more complicated to understand these terms than reading a common contract for other types of service.

And at the same time, these terms are usually written in a way that is broad enough to allow companies or platforms to use data in different ways without having to ask for new consent, right?  So they usually say they may track users' activities.  We found that in almost all the platforms we analyzed; we analyzed bigger services and smaller services, and most of them say they may track users' activities.  They may allow third parties to track users' activities.  They say they may share users' data with third parties without specifying who those third parties are or under which conditions the data will be shared.  So it's very difficult to understand how this may impact our privacy.

There are excellent practices we observed, especially in the biggest companies.  For instance, they are mostly directed at informing users about the impact of what they share on other users' privacy, right?  And that's exactly what is most related to what we are talking about today.

So, for instance, there are some tutorials, help pages and videos that try to inform users about the impact of the information that they share.  And these can be very good practices.  However, when we look at the relation between the users and the platforms, that's more complicated, and the guarantees that are present in the terms are fewer.

On the other hand, when we talk about Freedom of Expression, what we find is that the terms are smaller; they are not that long and they are not that detailed.  Usually you have several guidelines regarding which content is allowed or not on the particular platform.  And most of them, 80 percent, have some mechanism for flagging abusive content; they include that in their terms.  But for a great part of them it's related to the DMCA, so it's just about copyrighted content.  And that's what you have in the Terms of Service.

Regarding other types of content, you usually have general clauses saying that it may be taken down, but there is no transparency regarding how that will be done or under which conditions.  And I'm talking only about the Terms of Service, privacy policies, community guidelines and the documents that we consider binding on users.  We are not talking about other pages or other mechanisms, right?  So it doesn't necessarily mean that these mechanisms do not exist; there is just no commitment in the legal documents that users have to accept to interact with the platform.

So there is no transparency regarding these takedown mechanisms and how they will be implemented, neither to the potential victims of this type of abusive content nor to the other users.

So what you find is that few platforms, or I would say, on the contrary, most of the platforms say they may take down content with no notification and no justification to the affected user.  And more than that, 88 percent of the platforms, that's 44 platforms out of 50, say they may end users' accounts without notifying them.  So it seems like there is a problem that affects both victims and creators, or users that are sharing their content.  And it seems like what we need is more information and more transparency, so we can better understand how that can impact our rights.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  Thank you.  That's super helpful, actually.

>> Thank you for having me here.  I'll be brief.  As for what Google has to say here: we're not here as a representative of the whole industry, unfortunately.  I just know one company.

But I wanted to highlight that we do have a consistent approach across our platforms and across our products.  So that's the first thing.

And the other thing is that we didn't approach this particular issue without information.  We actually went and sat down with advocates and with victim groups, and we really tried to understand where this problem is coming from.

When the first terms of service were created, revenge porn wasn't really there.  So the phenomena that come up over and over on the Internet have to necessarily feed back into the product.  So we try to take the time to be here, to talk to groups like the ones you lead, to understand how the product can be adapted.  We have no tolerance whatsoever for revenge porn, and we take it very seriously on our platform.  So we have a system where you can ask for that content to be removed.  That's in our removal policies.  Actually, you can find it listed as one of the sensitive content types that we consider.

And we don't allow it to be shared on the rest of the platform.  So Blogger, Google Plus.  Once this is asked by you or by someone to be taken down, it's reviewed and it's taken down, and it's not allowed to be shared across the platforms.

I will highlight, though, that removing it from Google Search does not remove it from the Internet, okay?  So we also have a mechanism to try to make it easier for you to contact a site's webmaster.  Because if this image or this video lives on another person's website, as a search engine we're responsible for indexing that information.  But we can't really go out onto the Internet and remove things.  We don't have access and we really can't do that.  But we do try to give you information on how to contact the website's webmaster to follow through on that particular request.

I do think that we all agree that search and these types of services should reflect the Web.  And these examples of what gets taken down should be considered really carefully by communities like this one and by dialogues like this one.

So take personally identifiable information: if you find that your credit card information was somehow posted on a website that's indexed, that can be removed.  And in the same way, that's how we got to revenge porn.

We not only review the requests, we also demote, in the ranking on search, the websites that have been known or have been denounced for putting revenge porn on there.  And the rest of the way we think about this is that we don't allow pornographic content in general on our platforms.  We don't make money from it.  We don't allow advertisements for pornographic websites.  And we take sexual and violent content out of the autocomplete function in search.  Because we do believe that there is a role here to play.  Obviously you talked a lot about how much information we disclose.  Then there's a responsibility of those that distribute some of that information without consent.  So those are two sides of the problem.  And then the platforms are sort of in the middle, receiving or letting things through, because that's what we were created for: to let the flow of information get from the creators to the users and vice versa.  And I think what I wanted you to take away is a responsibility to be here, to talk, to gather the information, to take it in and to try to improve all the time on these practices.

I don't know if you're going to talk a little bit about the legislation, because you mentioned crime a few times, right?  And I don't think it's very clear yet, in all the countries across all the jurisdictions, how the crime is defined.

So on the platform side, I guess what I wanted to highlight was the importance of having very clear definitions of what you're considering revenge porn in this case: whether you're penalising the content creation, the lack of consent to distribute it, how to handle content for commercial purposes.  Those types of things would be really helpful to have in domestic legislation or in an agreement, because that really crosses the T's and dots the I's of what we need to be looking at in those community guidelines in the end.

And obviously I think all of you would defend Freedom of Expression and sort of making sure that these definitions do not conflict with that right.

So I will leave it there.

>> Thank you.  I just want to highlight something relating to violence against women and the issue of revenge porn or sextortion.  Men are also subject to that.  And I think we have to bring that up somehow here so we're not too concerned about only one part of the problem.

When it comes to legislation, I think legislation is going to help in some contexts but not others.  In the Middle East, for example, we definitely cannot aim for legislation against revenge porn or sextortion because of the perception of the society, the culture, the religion and a lot of things.  There are a lot of elements in the mix.  The terms of use ‑‑ I'm going to go back to that.  I think the terms of use, or the conditions under which Internet companies usually operate, are not sufficient in their current state.  So let's not focus on the regulation now, because we need to find more solutions that we can scale to different regions.

The terms of use are a little bit neutral.  If you look at them very quickly, you see things like: oh, if you're a sex offender, you can't open an account on Facebook.  But that makes the assumption that you only have the clean people on Facebook.  And that's not the case.  This is applicable to many other services.  I think we have to get more serious when it comes to, well, taking down content from these platforms, because I don't agree 100 percent with that.  Taking down content, again, in the context of the Middle East will be completely different.  We will be talking about censorship and so on.  So it's a little bit of a sensitive issue, as well.

As I said, it would be really more useful if platforms were more efficient at filtering through that content more seriously.  At the moment, we see a lot of disturbing videos that are not being addressed.  I'm part of a mailing list where all these cases are actually being disclosed, from victims and also from other communities, like the LGBT community, like political groups.  All kinds of people at the moment are trying to find a better way to moderate this kind of content on Internet companies' platforms.

So I really call for more comprehensive rules or guidelines for people using these platforms.

>> If I can add to that, one of the biggest sources of sextortion is your platform YouTube, as you probably know.  YouTube is very good in removing the material as soon as it's flagged.  But the chances of it being uploaded again are always there.

Now, we have perfectly good technical possibilities, like PhotoDNA or hashes, to prevent this from being uploaded again.  And I would really, really ask Google and Facebook and every other platform to start using those techniques, because like I said, this is a really serious crime.  It threatens people.  It costs lives.  And I think you should take your responsibility here.
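[Editor's note: the hash-matching technique the speaker refers to can be sketched roughly as below.  This is an illustrative example only, not any platform's actual implementation; the function names are invented for the sketch, and a plain SHA-256 digest only catches byte-identical re-uploads, whereas production systems use perceptual hashes such as PhotoDNA that also match re-encoded or slightly altered copies.]

```python
import hashlib

# Fingerprints of files already flagged and removed.  In practice this
# would be a shared, persistent database, and the fingerprints would be
# perceptual hashes rather than exact digests.
blocklist = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

def flag_and_block(data: bytes) -> None:
    """Record a removed file so identical re-uploads are rejected."""
    blocklist.add(fingerprint(data))

def upload_allowed(data: bytes) -> bool:
    """Reject uploads whose fingerprint matches known removed content."""
    return fingerprint(data) not in blocklist
```

Once a video is flagged and taken down, any byte-identical copy is rejected at upload time instead of having to be reported again.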

>> CATHERINE GARCIA VAN HOOGSTRATEN:  All right.  So just to wrap up, is there any question?  Okay.  First here and then ‑‑

>> Cláudio, State University.  Another Brazilian in the game.  I'd like to build on the argument that was made by the senator here.  I think we're, if not solid, at least at the point of having an idea of where we're going with awareness.  We also have an idea of where we're going with the search for legal remedy.  And now, towards the end of the panel, we discuss something which is really, really essential, which is the ecosystem's technical response.  We can't rely on ‑‑ this is not humanly traceable.

What happens here is not humanly traceable.  And we have, again building on the argument, extremely interesting and effective tools.  Content ID, for example, has an interesting mechanism to filter Intellectual Property breaches.

So we're discussing now how to automate responses to hate speech.  We're discussing how to automate responses to the search for the truth in recent days.  So why are we not working a little bit more on automating?  Because time matters, and it matters a lot here.  And this is destroying lives, not only but especially of women and girls all over the world.

>> Hi, I serve on the ICANN board.  But my question goes more to my previous incarnation as a lawyer and a politician.  Because as Arda said, it's a serious crime and it's costing lives.  But I understand it's not a crime in many countries, or it's not clearly defined.

So are there any plans, from your network, from all the people here, to very concretely start working to make it a crime, a clearly defined, prosecutable crime, in every single country in the world?  So that where there's this cross‑border effect ‑‑ the victim is in one country, the perpetrator is in another ‑‑ we don't get caught up in endless jurisdictional discussions, and we can very quickly make sure not only to help the victim, or to prevent if possible, but to go to the next thing, which is to ‑‑ and I see a member of the European Parliament is going to fix that for us.  Thanks.

>> I can't fix it everywhere, but I can raise my voice.  I'm on a committee in the European Parliament; I'm a British Labour politician.  And I work a lot on this issue, the issue of online violence.  And actually we do have an instrument in Europe, and it's called the Istanbul Convention.  And even my country, the UK, has ‑‑ has not ratified this.  And I just heard yesterday that we have a more right wing Polish government now; they are going to actually pull out of the Istanbul Convention.

So I think it was agreed in 2014.  Some countries, some unexpected countries, have signed it and are doing really, really well.  And some countries that ought to know better are doing really, really badly.

A friend of mine in Slovenia who is a women's rights campaigner said her government said they couldn't sign it because it was too expensive.  And our response to that has to be that the cost is women's lives.  I know the Istanbul Convention is specifically about women here.  I'm on the gender equality committee, so we are working with gender and violence issues across a whole lot of different platforms.  But I just wanted to say there is an instrument.  There is an instrument.  And we are now still in the 16 days of the campaign to eliminate violence against women and girls.  So you need to raise your voices and you need to be in touch with your governments and tell them that the cost is women's lives.  So they have to do something about it.  They have to do more than just sign it.  They have to ratify it.  And then they have to implement it.  And then you have to hold them to account.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  Okay.  Thank you ‑‑ more for the remark than a question.  I also want to add that currently the Budapest Convention, as well, could be a framework of protection, as long as, obviously, there is criminalisation of this behavior at the national level.  So it has limitations.

I also want to add that there have been concrete criminalisation norms, starting in the Philippines and Australia, and moving on to Israel, Canada and recently the U.S. ‑‑ many states in the U.S. have criminalised sextortion, or as they call it, revenge porn.  And recently Ireland has added its own legislation to the list.

However, as most of the panelists have already asserted, the criminalisation of sextortion also has challenges and limitations.  Flaws such as a too narrow or very broad definition of the behavior that is criminalised can also pose a challenge.  But also do not forget that, as sextortion or revenge porn was defined, it was mostly focused on the ex‑partner.  And that has also been reflected in the recent and upcoming regulation criminalising this behavior.

I think I will take final comments on this from my panelists.  So I will give the floor to them before we finish this session.  Thank you.

>> I just wanted to react very quickly.  I guess you said most of what I was thinking.  But I believe there is a great challenge here.  And it's not a simple criminalise/do-not-criminalise, take-down/do-not-take-down type of solution.  And we can't forget that any type of action or regulation that we have on the Internet may affect the whole Internet environment.  I mean, we do believe that we have to discuss how intermediaries may be responsible for these types of things.  But we have to be very careful in drafting or thinking about solutions, because they may affect our possibility of communicating or accessing information online.  That was my final remark.

>> I'd also like to add to the remarks and talk about enforcement of the law.  I'm coming from Turkey.  So it's a real problem, the fact that it's not defined in law.  But even when it is, it's almost impossible to have it enforced.

And we are discussing removing punishments for rapists in certain countries.  So it's very, very difficult to have law enforcement take it seriously when someone goes to them and says: someone is sharing a video or an image of me that I'm not consenting to sharing.  So I think we really need to remember the problem of enforcement here.  And also, finally, I'd like to say that creating more public awareness in these regions is very important, because this is really not being taken seriously and not being considered a crime, no matter what the law says or doesn't say.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  And we will finally give the floor to one of the participants who has been raising their hand for a long time there.  Thank you for waiting.

>> Hi, thank you.  So I think when we are speaking about sextortion, we need to also speak about rape videos.  It is a major issue in India.

Videos of actual rapes are sold in shops and further used for extortion.  This is violent content on three levels: the rape itself, the filming, as well as the distribution.  And as you can see, it's not self‑disclosure per se which is leading to sextortion; these are the factors which lead to that.  And I think it is a very important point to remember.  And it's such a big issue in India that even the Supreme Court has issued a report into this.

To respond to this: yes, men are subjected to revenge porn, but I think it's important to remember most of it is different for them.  The frequency is different.  The extent is different.  And it's also important to remember what type of men are subjected to it: it's usually gay men, trans men, young boys.  That's my opinion.  That's all.  Thank you.

>> In the last two years we saw a rise in reports from boys, up to 50 percent of all the reports we get: men and boys being sextorted.  And we're talking about 14‑year‑old kids to 70‑year‑old men.  So this is also a very serious problem for men.

It's financial sextortion: if they don't pay, the videos are sent to their bosses; they lose their wives, their families, their friends.  It is very disruptive.  I think last week a report came out that three men had already killed themselves because of this in the UK.  So I think we should really consider that this is a crime which is very serious for both women and men.

>> Quick reflection.  Maybe the solution to the issue is not on the Internet or with Internet companies.  We have to look at the subject also from the sociological point of view.  And I've heard something recently in the Netherlands that is basically about sex education.  It starts from there, in my opinion.  It's not about opening an account on Facebook or Google Plus or so on.  Kids who are very close to their parents tend to engage in sexual activity at a later stage.  I think that among people who engage in revenge porn and these activities there is, personally I think, a percentage who are not sexually educated, or who probably went through traumas and so on.  So we have to bring in a stakeholder from the social, even cultural, point of view to discuss these issues, because I see some kind of connection with that, as well.  So maybe we have to look outside the Internet fora for the solution, as well.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  All right.  So finally to wrap up before we close the session, we have the last question.  Or remark.

>> Nicholas:  Yeah, related to what you said several moments ago.  Of course, rape videos are not self‑disclosure, but my point is that perpetrators very often get in touch with their victims using their social media profiles.  So they have all the information there just to get in touch with you and engage with you, and thereby commit this crime.

And I was hearing today and the day before a lot about connectivity: how we can generate and promote connectivity in the parts of the world that do not have Internet access and so on.  I think on one side this is excellent.  But we have to prepare new online members, new members of the Internet, regarding their privacy.  And I am trying to fight in that direction very seriously.

>> I'm sorry.  Maybe I wasn't clear when I was saying ‑‑

>> Nicholas:  I totally understand it.

>> They don't meet the survivors through social media.  These are actually ‑‑

>> Nicholas:  I totally understand the concept.  But those are two different things.  One leads to the other in some cases, but what you just said ‑‑ it's very close, but it requires other measures.

>> No, technology plays a role here only in the filming and the distribution.  It doesn't play a role in the perpetrators meeting the women in the cases where I was involved.  I wanted to make that clear.

>> CATHERINE GARCIA VAN HOOGSTRATEN:  We want to thank once again the Netherlands Internet Governance Forum for bringing us together, as well as the Women in Cybersecurity Foundation in the Netherlands.  Please follow up the discussions on Twitter.  But also follow up on the report I'm going to submit after this IGF session.  So we will have a report online, and you can see the conclusions that will be raised there.  Thank you very much.

>> Also, a report of the session will be on the Digital Watch observatory at the end of the day. 

 
