
The following are the outputs of the real-time captioning taken during the Tenth Annual Meeting of the Internet Governance Forum (IGF) in João Pessoa, Brazil, from 10 to 13 November 2015. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 


>> MODERATOR:  Good morning.  So I think we can start now.  Can we start the recording, please?  Is it on, the recording?  Yes.  You can see it on the screen.  Very well, good morning to all of you.  My name is Marie‑Laure Lemineur.  I work for ECPAT International, an NGO, a network of NGOs based in Thailand.  We are dedicated to combating the sexual exploitation of children.

Let me welcome you to the session of the Dynamic Coalition on Child Online Safety, and I would like to introduce you to our distinguished panelists.  On my left‑hand side we have John Carr.  John today is representing the European NGO Alliance for Child Safety Online, among other things; he wears many hats.  We have Katia Dantas.  Katia is the Policy Director for Latin America for the International Center for Missing and Exploited Children.  We also have Susie Hargreaves, the CEO of the Internet Watch Foundation, and we have Carolyn Nguyen, I'm sorry if I pronounce your name wrongly; Carolyn is the Director of Technology Policy for Microsoft.  So today the idea is to discuss the issue of databases as a tool to combat sexual exploitation online.

What we are going to do, basically, is this: I'm going to throw out questions and the panelists are going to answer them, and then towards the end of the session we will open the floor so that you can provide inputs, comments, criticism, whatever you feel adds to the debate.

So to frame the issue for the session, I would like to first start by asking questions, so that maybe you can better understand the link between data repositories and child exploitation, which may not be obvious to all of you.  I would like to ask the panelists to start by describing the types of databases and repositories that are available on the market, and how they operate, so that we can get a clearer picture of what it is we are talking about.  So I don't know who wants to start.  Maybe Susie or John.  Susie, do you want to start?

>> SUSIE HARGREAVES:  Good morning, everyone.  I hope you can all hear me.  Thank you very much for inviting me to speak today.  I represent the Internet Watch Foundation, which is the U.K. hotline for reporting child sexual abuse content.  We are one of the biggest hotlines in the world, and we work closely with a range of partners: other hotlines around the world through INHOPE, with law enforcement, and specifically with industry, as we are an industry-funded organization.  So, to explain some of the databases we use: one of the things that Marie‑Laure said is what databases are available on the market, and I think it's really important to note that this is a very specialized area.

So they are not particularly available on the market, but increasingly databases have been developed for specialized use by some of the partners fighting child sexual abuse online.  I just want to also mention that, in terms of the value chain, different organizations are doing different things.  So you will have the police at one end, who will have a victim identification database and whose job is to identify and rescue victims, and you have the work of the hotlines, whose job is to go out and remove the content from the Internet.

So we are all kind of doing quite different things, and that's really important.  So we have a very specialized approach to it all.  In terms of the IWF, we have our own database of the images that we assess, and we keep a record of those images, and we also input into the INHOPE database on a daily basis reports of URLs, which go into the INHOPE database and are then sent out to hotlines for them to take action accordingly.

The big story, which I think we are going to be talking about today, is that the majority of the child sexual abuse content that you see on the Internet is duplicates.  So if you take, for example, the IWF took action on just over 31,000 URLs of child sexual abuse.  The majority of these will be duplicates, the same images of children being sent out again and again.  And as a result, those children are revictimized every time someone sees them.  So one image may be out in circulation with thousands and thousands of copies of that image.  So we have been working over the last two years on a hash list, which is going to be a really important database, building on a program started by Microsoft called PhotoDNA, which Carolyn will talk about.  I will talk in more detail in a moment when asked about the hash list, but the hash list is a list of digital fingerprints of images: unique images go onto a database, and they are given a digital fingerprint which can be sent out across the Internet to look for duplicates and bring those duplicates back.  Those are simplistic terms, but I will talk about that in a moment and give other people a chance to frame their arguments.  Thank you.
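In simplistic terms, the deduplication workflow described here can be sketched as follows.  The byte payloads are invented stand-ins for image files, and SHA‑256 is used only to illustrate the principle of matching by fingerprint; PhotoDNA itself is a perceptual hash, not an exact cryptographic one.

```python
import hashlib

# Hypothetical image payloads; in practice these would be image files.
images = [b"image-bytes-A", b"image-bytes-B", b"image-bytes-A", b"image-bytes-A"]

known_hashes = set()   # the "hash list": fingerprints of already-assessed images
duplicates = 0

for payload in images:
    fingerprint = hashlib.sha256(payload).hexdigest()
    if fingerprint in known_hashes:
        duplicates += 1    # already assessed once; no human needs to view it again
    else:
        known_hashes.add(fingerprint)

print(duplicates)  # 2: two of the four items are repeats of known images
```

The point of the sketch is only that, once an image has been fingerprinted, every later copy can be recognized by comparing fingerprints rather than by a person viewing the content again.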

>> MODERATOR:  Thank you, Susie.  You mentioned INHOPE; can you explain what INHOPE is, because our participants may not be familiar with it?

>> SUSIE HARGREAVES:  My colleague, Amy Crocker, is here; she represents INHOPE, the International Association of Internet Hotlines.  The way it works with hotlines is that there are 51 hotlines in 45 countries.  Not every country has a hotline.  Hotlines are for the online reporting of online child sexual abuse.  So it's not a help line; it's somebody sending a URL to investigate and, if it's child sexual abuse, for us to take appropriate action.  The way it works is, where there is a hotline, we will all assess under the law in our own country.  We will then trace where that content is hosted.  We will then submit that report into the INHOPE database, and INHOPE will do two things with it.  One is they will push it out to the hotline in the relevant country, which will then take appropriate action to get that content removed.

They also retain that data for its statistical importance, so they will actually pull together all of the statistics.  Now, in countries where there isn't a hotline, we send that URL, that information, directly to law enforcement.  So we will actually work directly with law enforcement to get the content removed.  In the U.K., because we are pushing so much content out to INHOPE, what happens is we issue a notice and takedown for content in the U.K., and for content hosted in another country we place it on a URL list which is deployed across the world and stops people from accidentally stumbling on the content until such time as the other hotline or law enforcement removes it.  Thank you.
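As a rough sketch of the routing just described, here is how a confirmed report might be dispatched.  The country codes, hotline membership, and return strings are all invented for the example; the real INHOPE workflow involves assessment under each country's own law.

```python
# Countries assumed (for this example only) to have an INHOPE member hotline.
HOTLINE_COUNTRIES = {"UK", "BR", "NL"}
url_blocklist = set()   # list deployed to block access while removal is pending

def route_report(url: str, hosting_country: str, assessed_illegal: bool) -> str:
    """Decide where a confirmed report goes, per the workflow described above."""
    if not assessed_illegal:
        return "no action"
    if hosting_country == "UK":
        return "notice and takedown"      # content hosted at home: remove directly
    if hosting_country in HOTLINE_COUNTRIES:
        return "forward via INHOPE"       # the relevant hotline acts under its own law
    url_blocklist.add(url)                # no hotline: block while law enforcement acts
    return "send to law enforcement"

print(route_report("http://example.test/a", "BR", True))   # forward via INHOPE
print(route_report("http://example.test/b", "XX", True))   # send to law enforcement
```

The sketch deliberately collapses the real process (assessment, tracing, statistics) into a single decision function to show only the branching logic.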

>> MODERATOR:  Thank you very much.  John, do you want to add something?

>> JOHN CARR:  I will jump in.  Susie's second point is very important.  Historically, the database that has been used has been really a database of URLs.  While URLs will always be there, and there will always be a space and a place for databases that inform companies or different agencies, different parts of the value chain, of URLs known to contain child abuse images, as Susie just made clear we are moving more and more to a world where hashes are going to be perhaps the essential database technology that everybody is going to be using.  Should I just add a word about this?

Now, there is no reason to suppose that the United Kingdom is any worse or any different from any other country in the world which has got very high levels of Internet usage linked to high connectivity speeds and so on.  There is no reason to suppose that we are any different from anybody else.  So that's an important qualification because here is what we did in Britain to try and get some better understanding of the volumes of images and the scale of the problem that we are actually talking about today and why these databases are so crucially important to the future.

So we have some legislation in Britain, I'm sure many countries have the same thing, it's called the Freedom of Information Act.  So under this law, we can write to any public agency, Government, police, hospitals, local authorities, municipalities, and ask them questions about what they are doing.  So we, that's to say the British children's organizations, we wrote to the police and we asked them to tell us how many child abuse images they had seized in arrests they had made between March 2010 and April 2012, so that two-year period.

Now, in the time scale we were working to, only five police forces replied, but between those five police forces 7.25% of the entire population of Britain was within their boundaries, so that's a very, very big sample.  So a professional statistician, not me, took the data and extrapolated from the sample of five forces, based on 7.25% of the population, to the population of the country as a whole.  Just the raw numbers: in the five police force areas that actually replied to our question in time, they had seized 25 million images, 25 million.  One man who was arrested in Cambridge had 5 million images on his machine.

Now, the point Susie makes is very important.  The vast majority of these images were duplicates and repeats of already known images, but the point is that what we are looking at here is a scale that makes it, for practical purposes, impossible to imagine how the police or anybody else could look at every image, because the numbers are just too big.  And by the way, when the statistician extrapolated those numbers, what he came out with, and some people say this was a conservative estimate, was a volume in excess of 300 million.  In fact, the number was 360 million images of child abuse material circulating simply within, well, actually within England and Wales, not even the whole of the United Kingdom.  So a system that relies on human beings looking at these images ain't going to work anymore, if it ever did.  And that's why these databases, these technical approaches to dealing with this problem, are so vitally important for the future.  And the big news is that the big tech companies are up for helping: Microsoft's PhotoDNA we have heard about and will hear more about, and Google has an equivalent product to deal with videos, because more and more of this material is coming in video, not stills, and PhotoDNA only deals with stills.

And there are other products out there as well, but unlike Microsoft's and Google's products they are not given away free.  Those are commercial products, and we know less about how well they work, their cost, and so on.  Anyway, I thought I would throw that in.

>> MODERATOR:  Thank you, John.  Carolyn, do you want to add something?

>> CAROLYN NGUYEN:  Thank you very much.  Thank you for inviting me to be a part of this panel.  From a technology perspective, and to reinforce some of the points both Susie and John have made, we do understand that the issue here is one of scaling, and of developing the technology in a way that enables identification, first of all, and then removal of these images.

So from our own numbers, approximately 1.8 billion photos are uploaded every day, and about 720,000 of them are child pornography.  So that's a bit of the scale that we are trying to deal with.  As you know already, just a bit of history if it's relevant here: in 2003, a police officer in Toronto approached us to say, can you please help us address this issue so that we can recover missing children?  And that was sort of the genesis of PhotoDNA.

For those of you who aren't familiar with the technology, I will just say a few words about it, because this was sort of the thing that led to some of the databases.  So in 2009 Microsoft, working in conjunction with what we call our Digital Crimes Unit, a unit that goes out and tracks the digital content out there along with malware, partnered with Microsoft Research, Dartmouth and also the International Center for Missing and Exploited Children to develop this technology called PhotoDNA.  As Susie described before, essentially what happens is we look at a photo and create a hash value unique to that photo.

So it's really information about the photo, but it's not the content itself.  And then we share this information outwards.  And the algorithm works in a way that if the photo is slightly resized or cropped, or the color is changed slightly, we can still detect the image as being the same image.  Even if people change the names of the files, et cetera, it doesn't really impact the hash value.  So what we did in 2009, after the development of the technology, was to make it available for free to ICMEC, and any organization that is really focusing on addressing these issues can get access to this technology for free as part of the ICMEC initiative.  In 2012 we made the same technology available to law enforcement worldwide, freely, and then earlier this year we made it available as a Cloud service.  Until it was a Cloud service, a company, and there are about 70 such companies around the world using this technology, would actually have to have the technical expertise to maintain the software in‑house, as well as upgrade it, et cetera.
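PhotoDNA itself is proprietary, so purely as an illustration of the general idea of a perceptual hash, here is a tiny "average hash" sketch in Python: every pixel contributes one bit, set when that pixel is brighter than the image's mean, so a uniform brightness change leaves the fingerprint intact.  The 2x2 pixel grids are invented for the example, and this is not how PhotoDNA actually works.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits; a small distance means 'probably the same image'."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 2x2 "image" and a uniformly brightened copy of it.
image = [[10, 200], [30, 220]]
brighter = [[p + 15 for p in row] for row in image]

print(hamming(average_hash(image), average_hash(brighter)))  # 0: still a match
```

A real perceptual hash works on a normalized, much larger grid and tolerates cropping and resizing too; the point here is only that matching is done by distance between fingerprints rather than byte-for-byte equality, which is also why renaming a file changes nothing.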

So now, by making it a Cloud service, we make it available to many other companies, who don't have to host the technology but can just use it online.  So that's a little bit of our contribution, from our perspective, to addressing this problem.  How we work with it is, once the hash is available, either through reporting, et cetera: in the U.S., we can only share that information with NGOs or other organizations, not directly with law enforcement.

So we share that information, we report that information, and when we get information about that, we go through our own software and our own content, whether through the search engine, Hotmail, or our online services, to try to identify and remove such content.  We are continuing to work on developing this technology, so now, as was mentioned, it covers not just still images but also video.  And that will be available later this year.

>> MODERATOR:  Thank you.  Katia, did you want to add something?

>> KATIA DANTAS:  Just to build on what John mentioned about law enforcement.  It is also important to have consolidated hashes.  One of the things we have noticed over time is that there are several hash types, some of which, unlike PhotoDNA, will only match unmodified images; Carolyn was speaking about the really strong hash that doesn't change no matter what I do to the photo.  There is also the importance of identifying the image as child pornography, or child abuse material as we prefer to call it, as opposed to other materials.  So the International Center for Missing and Exploited Children, which has also partnered with Microsoft in the past, has developed a project called Project VIC, which is teaching law enforcement how to identify those images and helping specifically in sifting through all of these images with a focus on the victim, and not the perpetrator anymore.

So focus on the victim and you will find the perpetrator, as opposed to focusing on the perpetrator to find the victims.  And it has helped significantly in sifting through this information: through the process of making the hashes very strong, through those databases and hash values, PhotoDNA and video fingerprinting, and also by making sure that the hash values are peer reviewed.  This is to ensure that the worst of the worst is there, and to ensure that the images we are labeling as child pornography or child abuse materials are in fact child abuse materials.

And this model has been replicated internationally to try to train law enforcement on victim identification processes, trying to keep children safer as a preventive measure as well as helping law enforcement identify key targets and key suspects.

>> MODERATOR:  Thank you, Katia.  So would you agree with the statement that those technologies bring victim identification to the forefront of the work being done by all of those organizations and agencies?  I mean, based on what you just said?

>> JOHN CARR:  At the end of the day, I work for child protection organizations; that's why I'm here.  So the whole focus of everything we do is on children, children's welfare, children's health.  So I mean, police officers, God bless them, I wouldn't hear a word said against them, but historically what they like doing is arresting bad guys and putting them in jail.  So historically, there is no question about that, that has been where their energy and their focus have been.  And I'm glad in a way that that's true, but they have historically lost sight of the fact that in that picture is a child.

And whatever the story is, whatever the explanation behind how that child ended up in that picture or that video or movie, it can't be a good one.  It can't be an innocent happy ending to a story that ends up with a child in a picture being sexually abused, being raped, that's on the Internet.  So it's obviously of the utmost importance to find the child and do what we can to help them recover from the abuse.

And just, let me just say another thing.  This is stuff that is going to have consequences for that person for the rest of their lives very likely.  There was a terrible and famous case in the United States, the case of Amy which I'm sure many people in the room will know about.  She was raped by her uncle when she was 12 or 13 years of age.  He was caught, he went to jail for a very long time, Amy, it's not her real name, but Amy was given a lot of counseling, a lot of help, a lot of support to deal with the fact that her uncle had raped her.

And she appeared to make a very good recovery from it, and she was leading what looked for all intents and purposes to be a pretty normal life.  In her early 20s she found out that pictures and videos of her being raped by her uncle were on the Internet.  She had a complete and total meltdown.  Every time she walked down the street, every time she went into a shop, she was wondering: I wonder if they have ever seen that picture, I wonder if they have seen that video.  That person smiling at me or being friendly, maybe it's because they think I'm sexually available.  Did they think I was complicit, because I was smiling?  She ended up in drug abuse and alcohol abuse.

(Internet technical difficulties).

I don't intend to be, but one of the biggest issues that we have in convincing law enforcement to take this child victim approach, as opposed to the suspect approach, is helping them sift through whatever materials they have.  As John mentioned in the beginning, they have to analyze thousands of materials.  And some countries are obliged by law, as is the case in the U.S., to have a real victim as opposed to an image, so they have to go through the identification of the victim to take a case to court.  So this helps consolidate all of this information.  The way PhotoDNA works with a system called NetClean, which is part of our Project VIC package so to speak, is that they consolidate the photos into series.  So alongside the child abuse images identified, they also bring together other photos of the same child.

So let's say I have naked photos or child abuse material; alongside that, I will also have other similar images in which the child is present.  So I can help identify the suspect through the images as well.  So not only does it minimize the job of the police, who have to go through thousands and thousands of terabytes of information, but it also allows them to identify both the victim and the suspect faster through this process of gathering the information.

So, again, I'm not law enforcement, but that is what we have seen and heard from our law enforcement partner.

>> MODERATOR:  So we have the time-saving element, but isn't it also true that it spares law enforcement officers, and analysts as a matter of fact, from having to watch images and go through possible trauma?  Do you want to elaborate on this?

>> This is an important aspect of the hash list, and we are going through this at the moment because we are creating the hash list, and one of the end results is going to be that our analysts don't have to view these images again and again.  Now, we have excellent welfare provision in place, but, for example, last year we graded 165,000 images for the U.K. police as part of the new national image database in the U.K., and we are building a list at a rate of thousands per week.  In order for an image to go on the hash list, a human being, one of our analysts, has to analyze and grade that image.  In the U.K. they are graded A, B, or C according to severity, and a person has to give them that grading.  Now, unlike a normal report that comes in from the public, where you get a URL, the analyst opens it up, it may be within remit, it may not, and they fill in a whole form; they look at the whole thing.

With image grading for hashes, they are simply looking at thousands of images a day.  We had one analyst grade 10,000 images in one day.  And actually, they just have them up on the screen and they are having to click A, B, A, B, grading them all of the time.  You can imagine; we don't actually know what the impact of doing this is on people in the long term.  They will be protected from that, because once the image goes on the hash list, when it is encountered again they won't have to look at that image, because they will know it's on the hash list.  And a hash can't be reverse engineered, so nobody can recover the actual image from it.  But we really don't know what the impact is going to be on those analysts looking at the images in the short term to build up the hash list.

So we are having to be careful about changing the way they work on it.  One of the things we have been working on recently is with Google; we have had a Google engineer in residence, and Google has developed specialized software to enable us to crawl the Internet to seek out these images.  But it doesn't take away from the fact that a human being actually has to look at each image and assess it before it goes onto the hash list in the first place.  And this would apply exactly to the police as well.  So the impact of looking at images day in, day out is already something we have to be careful about.

But actually the hashing presents a huge number of new threats and challenges to us.  Thank you.
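The grade-once, match-forever workflow described above can be sketched in Python.  The grades and image payloads are invented for illustration, and a real deployment would use a perceptual hash such as PhotoDNA rather than SHA‑256; the sketch only shows why a human views each image once and never again.

```python
import hashlib

hash_list = {}  # fingerprint -> severity grade, built once by a human analyst

def grade_and_store(payload: bytes, grade: str) -> None:
    # A human views the image once and assigns a severity grade (A, B, or C);
    # only the one-way fingerprint is retained, never the image itself.
    hash_list[hashlib.sha256(payload).hexdigest()] = grade

def check(payload: bytes):
    # Later encounters match by fingerprint alone, so no one views the image again.
    return hash_list.get(hashlib.sha256(payload).hexdigest())

grade_and_store(b"known-image", "A")
print(check(b"known-image"))    # A: matched without further human viewing
print(check(b"unseen-image"))   # None: an unknown image still needs assessment
```

Because a cryptographic or perceptual hash is one-way, the stored list reveals nothing about the images themselves, which is the "can't be reverse engineered" property mentioned above.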

>> MODERATOR:  John?

>> JOHN CARR:  Just a quick point about this human element in this process.  It is incredibly important, unfortunately that human beings do look at these images because here is the thing.  If ever, if ever it is shown or proved that images that end up in these databases are in there incorrectly, let's say it is a picture of a kitten dancing on a TV set, or it is an innocent picture of a child on a beach or something of that kind, which is more likely, I suppose, if ever that was shown to be happening, it would undermine confidence in the quality of the database.

And that would play straight into the hands of people and forces who are not very happy about this sort of stuff anyway.  So unfortunately, you know, we haven't yet invented software that's good enough to get it right every time.  Maybe that day will arrive, hallelujah if it ever does, but unless and until that day does arrive that human bit of the process has to be in there.

>> MODERATOR:  Thank you, John.  Katia, do you want to add something?

>> KATIA DANTAS:  Just to build on what John was saying.  One of the reasons we have decided to do a peer review is exactly that.  So it is important to have a human element, specifically when you consider that these images are not exclusive to one country, and legislation on the definition of child sexual abuse materials varies.  But the idea behind the database is to minimize human exposure: instead of having 300 agents investigating terabytes and terabytes of information, you can now have one or two.

And it is very important within this process, and I really appreciate you bringing this up, to provide psychological and mental health support to those law enforcement officers and those analysts.  At NCMEC, which is our partner organization, our sister organization actually, the analysts do have therapists and mental health assistance, and some law enforcement officers who investigate these kinds of crimes are also required, so to speak, to attend those sessions.  But the beauty of the database is not to eliminate the human aspect completely but to minimize the amount of trauma that can be pushed onto those law enforcement officers.  Thank you.

>> MODERATOR:  What you just said gives me a good excuse to move on to the next question I wanted to ask you, about data sharing, and you already sort of mentioned it.  One of the challenges, maybe, that we are facing is that different individual agencies or organizations retain silos of similar data, and, of course, this data would be more useful in the aggregate.

I just wanted to ask if you could provide an example of good practices or an initiative that tries to promote cross‑feeding between databases?

>> KATIA DANTAS:  Forgive a little self‑promotion, and excuse me, everyone, I'm not a technical person on the project, so I might not have all of the details.  But if you have any questions at the end, you can absolutely talk to me, and if I can't answer, I will point you in the right direction.  In regards to the question: there are several databases, as you mentioned.  Interpol has a big database, the International Child Sexual Exploitation images database.

We always say ICSEID, so I forget exactly what it stands for.  Interpol has its databases, and each police agency has its own.  One of the things that I will speak about is Project VIC, because you will get tired of hearing about it.  One of the things it does is cross‑collaboration between different agencies, trying to gather all of those materials to be shared across those agencies, exactly to minimize the amount of duplication or differing numbers, consolidating it and making it more useful to other partners as well.

But there have been several efforts internationally speaking, and I think that's where we have to be going.  I just cited Interpol, which collaborates with law enforcement internationally in gathering the data in its database, as well as Project VIC.  So those are a couple of examples.

>> MODERATOR:  I believe there are other initiatives between law enforcement agencies, if I'm not wrong; the U.K.'s CEOP is sharing, or about to share, its database with the Interpol database?

>> JOHN CARR:  Maybe Amy, maybe Amy should answer that.  You are more up to date than me, probably.

>> AMY CROCKER:  Well, actually, that would be for Susie to answer as the U.K. hotline.  My name is Amy Crocker.  I work with INHOPE, the International Association of Internet Hotlines, the organization Susie Hargreaves mentioned earlier.  I won't speak for the U.K., because that would be for the U.K. hotline, the Internet Watch Foundation, to do.  However, I can speak about the work that INHOPE is doing, picking up on a number of points made by the panelists, in terms of recognizing the need to migrate towards the creation of hash values and not purely focusing on URLs.  There is still a clear need to have exchange mechanisms for URLs with child sexual abuse material so that the content can be removed, and that's one of the key elements of the international exchange mechanism that INHOPE facilitates in 45 countries around the world.

But one of the things we have been developing over the last two years, with a pilot phase last year, and which we are now in the process of launching, is a tool specifically designed to close the circle between what hotlines are doing and law enforcement.  Hotlines can be non‑governmental organizations, Government agencies, industry, industry association bodies; there is a huge range of models, so we shouldn't think of a hotline as one thing in particular.  One of the things we have been doing is trying to close the circle by finding a way to add value to the information, the digital content, that hotlines are seeing and sending, making it available to law enforcement at the international level for use in victim identification processes.  So we have been developing a new tool which is now going live in member hotlines as part of our network, and which will then be rolled out and made available to all of the hotlines in our network globally.  And this tool is a content categorization and hashing tool.

So what this means is that, in addition to processing URLs, hotlines are now processing individual URLs of images and videos.  They are classifying that material based on one of the international standards that has been laid out and was approved by the Interpol General Assembly last week, which is called baseline.  This categorizes content based on what, for any country in the world that has legislation pertaining to child sexual abuse images or child pornography, is what people call the worst of the worst.  These are categories of images that involve a real child who appears to be prepubescent, under the age of 12, where there is a focus on an explicit sexual act or a child involved in such an act, and there is a focus on the genital area of the child.

So it's very, very explicit sexual content involving underage children.  And what the INHOPE hotlines are doing is categorizing according to four categories, but there are two specific categories made available to Interpol, to the Crimes Against Children team at Interpol, who confirm the category of the content and can decide to place the material into Interpol's international child exploitation database, which is a specialized law enforcement tool.  And, picking up on some of the points made, this is a tool specifically designed to support victim identification processes.

It is not a tool designed to be a catch‑all, a repository of images.  It's an investigation tool, and I think it's important to pick up on the point that there is a need for both.  There is a need for mass repositories, certainly of hash values, so that we can ensure that we are reducing duplication, and so that we can ensure, in industry terms, that big companies are able to remove access to images that are known and confirmed by law enforcement to be illegal under the relevant jurisdiction.

So there is a need for those big sets, and then there is a need for very specialized sets that allow specialized officers, who now don't have to view big volumes of information, to use all of the tools and techniques available to them to identify children.  And in the vast majority of cases, identifying a child will also enable the identification of the offender, because such a high percentage of child sexual abuse is committed by someone known to the child.  This is a very private, intimate crime taking place in all countries.

So that's something I think is quite important, and certainly we are pleased with the tool we are producing, but as Susie said, it's one tool and one form of data collection and sharing among those now being seen on the wider landscape, and I think it's really positive to see everything that's being done.  And as I said, I won't speak for the U.K.; Susie can do that.

>> MODERATOR:  Susie, do you want to add?

>> SUSIE HARGREAVES:  Thank you, yes.  So in the U.K., the police's Child Exploitation and Online Protection (CEOP) unit has been working with the Home Office to establish a new national image database, and that's taking up an awful lot of time to get up and running.  We have been assisting them with the analysis of those images, and as part of that we are taking category A and B images from that database, reassessing and reanalyzing them, and creating a hash list database for U.S. industry.

So actually, we are in the first phase of a project which has started with Microsoft, Twitter, Facebook, Yahoo! and Google, where we are actually sending out hashes to see what matches there are, a small number in the first instance.

So that's one phase of what we are doing.  The second phase is that we are creating our own hash list that will go out.  I'm not aware that CEOP are doing major sharing from the National Image Database yet because it's still being developed.  One thing I do want to say, which is a real issue for hashing: originally, when the whole discussion about hashes started, there was a lot of vying for position in terms of who was going to hold the hash list.  What's clear now is that there actually need to be a number of hash lists, with different purposes and different needs.

There are some countries that won't take a hash list unless it's directly from law enforcement, and yet there are other organizations, like industry, which won't take a hash list directly from law enforcement.  And under the Fourth Amendment in the United States, companies need to work with an organization like ours, which is independent, and there are other organizations like ours that they can work with.  So it's really important that we recognize that we can't just have one hash list, and one type of hash list, that will go out across the world.  The other thing I want to say is that we are not actually participating in the INHOPE database at the moment.  One of the issues for us is the baseline categorization, because it's actually different from the U.S. and U.K. categorization.  But we have committed to a mapping exercise because we need to bring those categorizations together.

But what's really important is that any categorization that happens can be shifted from one country to another.  Obviously, you will recognize that the U.S. and U.K. markets, in terms of the categorization of images, are just huge, so we need to ensure it fits our legislation.  Currently the worst categories in the U.S. and U.K., A and B, do tie in with each other, but not with the Interpol baseline categorization.  Thank you.

>> MODERATOR:  John.

>> JOHN CARR:  Just to go back to your original question, and why I referred to Amy first: there is this other thing going on at the moment called We Protect, which is a global initiative which the U.K. Government started, I'm happy to say, and which is now working with UNICEF.  The statement of action was signed last year by 50 different Governments, and there is another meeting next week, the second Conference, so I guess there will be more than 50 Governments now.  One of the things that signatories to that statement agreed to do was set up their own national database of images.  And I think it is either implicit in the statement, or perhaps explicit, that there would eventually be at least one big global database that different police forces around the world would be able to access and use because, you know, the Internet is a global medium.

What happens in one country is potentially happening in every country.  So there is a need to do that.  But at the moment, as far as I know, only about 12 to 15 countries are actually involved in feeding data into this.  It is still a trial thing that Interpol is developing, yes, the baseline.  I think it's about 15 at the moment, but maybe it's growing.  45.

>> AMY CROCKER:  That's the number of countries connected to the database, which is distinct from the baseline project.  I can't give figures at all, but it's something that's now been formally accepted, I understand, and it's moving forward.  But I take Susie's point, and it's absolutely true that what we are looking at now is a way that we can all work in collaboration, because there are also different needs for different types of categorization.  There is categorization purely to determine illegality and legality.

(Internet technical difficulties).

>> MODERATOR:  I wanted to ask another question, because we heard a lot about how it works, and we also heard that it's a free tool that is being donated by Microsoft.  So I would like to ask you, Carolyn: are you familiar with the rate of its uptake by third parties, and is everybody that should be using it in fact using it?  How could we, let's say NGOs and Civil Society, encourage uptake?

>> CAROLYN NGUYEN:  The easy answer, to whether there could be more uptake, is absolutely yes.  And I think that was behind our effort to make the software available as a Cloud service, so that it can be much easier for everyone to access and use.  With respect to the uptake, as I mentioned earlier, we are aware of roughly 70 or more organizations across the globe that are employing the technology, and we are continuing to work with organizations to evangelize, first of all, the risks and the issues.  On that front I want to mention one other thing, which is that a lot of the discussion up until this point has been focused on child sexual exploitation, what's being done to a child.

So, for example, last year we funded the Internet Watch Foundation's study of explicit selfies, for those of you who are not familiar with the issue.

>> MODERATOR:  The sound is still on so even if we don't have light, we can keep on talking.

>> CAROLYN NGUYEN:  Absolutely.  The show must go on.  So in its 2012 report, the IWF found 12,000 nude photos and videos that young people had sent to a single recipient; almost 90% of that content had migrated to parasite websites.

(Internet technical difficulties).

This is yet another issue that is being exacerbated by the proliferation of the Internet; Susie can talk about that a little bit more later.  But with respect to education and creating awareness, it's education on this new type of risk, but we also believe there needs to be education and awareness on the current techniques.  That's why we are continuing to work with organizations around the world, and also working to contribute to the multiple global databases that are out there.

>> MODERATOR:  Thank you very much.  I don't really want to go into the self‑produced materials, because there will be a specific session tomorrow, and I will mention it at the end of this session so that if you want to attend it, you can.  So I would rather keep on discussing; the issue is very interesting and relevant, but we will have a full hour to an hour and a half to discuss it there.

So I would like to keep on discussing the issue of business liability, because if you think about it, tools like PhotoDNA are available for free.  And yet many online platforms and businesses that could, and probably should, use it are not using it.  So could we envisage that they could somehow face some kind of business liability for not using it, for in effect hosting such pictures on their servers and not doing anything about it?

>> JOHN CARR:  I'm glad you raised that point.  So under European law, and I think it's the law in most countries around the world, we have this thing called mere conduit status for service providers.  An online service provider cannot be liable for any illegal or unlawful content on their servers unless they have actual knowledge of its existence.  And this has also had the perverse effect, by the way, within Europe, of giving companies an incentive to do nothing.

Because if you don't go looking for illegal or unlawful content on your servers, you can never be held liable.  If you have no knowledge, and you have never tried to find it, you can never be liable.  Now, I don't think there is any possibility of that law changing in the near future, and in a way, it's probably right.  It would be unjust, wouldn't it, if a company could be held liable for material on their servers when they couldn't have known that the material was there?  That cannot be a just system.

But I do think, now that tools like PhotoDNA exist, and Google are producing a similar thing for video and so on, here is a question.  If you are a company and you are providing a service that you know, or have reasonable grounds to believe, is likely to be abused by bad guys out there, and just look at what's happened, why wouldn't you use these tools?  What explanation would you offer for not using PhotoDNA or Google's equivalent?

And I'm not saying that would necessarily raise an issue of liability, but it could, at some point in the future, I guess.  If you are in this space, why wouldn't you take reasonable steps and use reasonable tools, proven technology, to try to stop your service being misused and abused in ways that harm children?  I guess that's my fundamental point.  But here is another thing: we need a great deal more clarity about costs.  I can tell you I was in Egypt two weeks ago at a meeting convened by the ITU, and there were ISPs and Governments there from the Arab region.

One of the things that they said was: we are really interested in this whole thing about PhotoDNA and so on, but nobody will tell us the costs.  We know the software is free, but we know that's the beginning of the story, not the end of it, because even though the software might be free, meaning the licensing is free, we have to implement it.  That costs money.  We have to train people to use it.  That costs money.

Then there are ongoing costs as well; what are they?  And there are computer processing costs; what are they?  And by the way, these questions arise for Governments and the private sector alike, because Governments have to fund their police services, who have to use these tools, and the private sector have to make a profit, so it's equally important to them.  So I'm not sure we can solve the problem here and now, but in the process of evangelizing, in the process of persuading companies to use these tools, somebody is going to have to tell them what it might cost.

And my impression at the moment is that that is not happening, at least on a good enough scale.  It's certainly not happening in the Arab countries, because they are all saying exactly the same thing: we have no idea.  If we agree to do this, what is it going to cost us as a Government or as a private company?  They all say they don't know.

>> MODERATOR:  Carolyn.

>> CAROLYN NGUYEN:  Thank you, John, for those excellent points.  I want to take up two things.  One is with respect to your comment regarding the cost of implementation and also the skill sets that are needed.  I agree with you on that, and I think we are trying to address this in one way by making the service available as Cloud software, which takes away part of the implementation cost.  There are still skills necessary, so your point is incredibly well taken, and that is something we will continue to work on with the various organizations around the world, to make that information available; but that was a step to address the issue that you brought up.  With respect to your question regarding liability, I agree with many of the points that John made, and I think there is another point, which is that at the end of the day it is a question of social responsibility for the companies that are involved.

In particular, if companies are notified that inappropriate content exists within their systems, it is very much the obligation of the company to develop the appropriate technology, because there again there is a technology question: identifying the image, where it's stored, and what the appropriate response is.  So, for example, what Microsoft does when we identify such an image, or when we see search terms that are on our so‑called black list, which now contains 2,000 terms and which we are continuing to work to expand to 50,000 terms, is return a page that blocks the result and also redirects the searcher to an appropriate website for further information, likely one of the NGOs' websites.  So it's responses like that that we think are more appropriate, rather than, you know, the strong arm of the law.
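The block-and-redirect behavior described here can be sketched roughly as follows.  The terms, URL, and function names are placeholders for illustration, not Microsoft's actual implementation:

```python
# Hypothetical sketch: if a search query hits a blocklisted term, return
# a redirect to a splash/help page instead of search results.
BLOCKED_TERMS = {"blockedterm1", "blockedterm2"}   # placeholder terms
SPLASH_URL = "https://example.org/help"            # e.g. an NGO support page

def handle_query(query: str) -> dict:
    """Return a minimal HTTP-style response for a search query."""
    words = set(query.lower().split())
    if words & BLOCKED_TERMS:
        # Block the results and redirect the searcher to help/information.
        return {"status": 302, "location": SPLASH_URL}
    return {"status": 200, "results": f"results for {query!r}"}

print(handle_query("blockedterm1 something"))  # redirected to the splash page
print(handle_query("harmless search"))         # normal results returned
```

The design point is that the searcher is steered toward information and support rather than simply shown an error, which is the same balance Susie describes next for splash pages on blocked URLs.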

>> MODERATOR:  Would this be a splash page?

>> CAROLYN NGUYEN:  Yes, it is, so if the searcher is hitting one of the terms that's on our list, that's what would come back, yes.

>> MODERATOR:  Anyone want to add something?  No.

>> SUSIE HARGREAVES:  I think about 18 months ago the IWF introduced a splash page for our URL blocking list.  It took us ten years to get it up there, and prior to the splash page being implemented, if you tried to access a URL that was on our list, you would get an error message.  And there were really interesting discussions about liability in relation to the splash page for industry, because the first iteration of the splash page, which was written by law enforcement, was so damning and strong that we ended up having a number of specialists involved in the wording.  We wanted to get across the message that if you had inadvertently stumbled across the content, it was okay to report it, but that if you were someone who was repeatedly looking at this content, there were serious implications; and that you had rights if you were blocked in error, and somewhere to go if you felt that you needed help.

So trying to get the whole balance right was absolutely essential.  From the point of view of the companies who were implementing the splash page, the idea of some 16‑year‑old looking at an image they shouldn't and reading something that said they were about to go to prison, and the potential implications of that, were huge.  So it was a really interesting discussion, and a really intense debate within the legal departments, to get to the wording we actually got to, which I think strikes the correct tone now.  But everybody had to be involved, because there were serious messages, and we also wanted to provide help if it was needed.

>> MODERATOR:  Very well.  Before I open the floor, I would like to ask one last question, about security standards.  I think we all understood from what has been said that those data repositories gather very sensitive information, so I would like to hear from the panelists: how is the issue of securing the systems, of avoiding them being hacked, being handled by the organizations you work for?

>> SUSIE HARGREAVES:  Okay.  So obviously this is a huge issue, a very big issue in the U.K. at the moment; we just had a very public situation in which a company was hacked.  For the IWF, obviously our reputation is based on the security of our data and our information and on the quality of our lists.  We have incredibly tight security around the way that we work.  We are ISO 27001 accredited.  We have regular penetration testing of our systems.  We also ensure, as said before, that hashes can't be reverse engineered.  And in terms of our URL list and everything else we do, we have incredibly tight security around all of the people that work with us.

We have licensing agreements for industry using our lists, and we put in place every conceivable measure to ensure that our system is secured.  We have a direct feed into our office, a unique feed.  So we have all of the security that can be in place, and we do everything we can, and hopefully it's going to be enough.  We are constantly testing our own systems, and we are as confident in them as you can be.
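The point that shared hashes "can't be reverse engineered" rests on hash functions being one-way.  A small illustration with a cryptographic hash (PhotoDNA signatures are perceptual rather than cryptographic, but they likewise do not contain the image content; the data here is a placeholder):

```python
import hashlib

# A hash is a fixed-size, one-way fingerprint of the input.
image_bytes = b"...original image data..."
digest = hashlib.sha256(image_bytes).hexdigest()

# The digest length is constant regardless of image size...
print(len(digest))  # 64 hex characters

# ...and there is no inverse function: recovering the input from the
# digest would require brute-force guessing of candidate inputs.
candidate = b"some guess"
print(hashlib.sha256(candidate).hexdigest() == digest)  # False
```

This one-way property is what makes it acceptable to distribute hash lists to industry: possession of the list does not grant access to any image.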

>> CAROLYN NGUYEN:  So with respect to the data in our system, we believe in encrypting all data, both at rest and in transit.  The data that is stored in our data centers is encrypted, as is data in transmission from the end points into the data centre.  So that's our approach: encryption at rest as well as in transit.

>> MODERATOR:  John?  You don't run a database, but you might have an opinion.  You don't.  Very well.  We have 15 minutes left, so before the end of the session I would like to open the floor for questions.  I hope the issues and points we raised were interesting enough to trigger some thoughts and interesting questions, so please feel free.

>> AUDIENCE:  My name is Huta Krawl. I'm representing the Center for Child Protection on the Internet, and I would like to pick up on the point that John Carr made on the eCommerce directive, which kind of creates a disincentive for companies to implement tools for discovering or detecting images.  And it needs to be mentioned that it is in the hands of national Governments: in the process of implementing the directive, they have the opportunity to put in place a well-written rule saying that a company would not be held liable in case they miss an image.  So they have the opportunity, but it's in the hands of the national Government to adopt that Good Samaritan rule.  I have to admit we don't have it in German law either, but the possibility exists.

>> JOHN CARR:  I didn't know that.  I honestly didn't know that.  I thought it was an EU‑wide rule.  This whole question of how the eCommerce directive works in this area is up for review at the moment, but I don't see that fundamental rule changing, because you cannot make people liable for things they don't know.  But I do think we should make it completely clear at the EU level that you can never be liable, even if you try to find material and miss it by accident or negligence; you can never be liable unless it can be shown that you had actual knowledge.  Lots of companies rely on it.  In the Google case in Italy, for example, Google relied upon the eCommerce directive as their defense when they were prosecuted over the video of that child, you know, the child that was being beaten up in the playground; Google employees were prosecuted because they didn't take the video down fast enough.

And Google's defense was: look at the eCommerce directive.  So it's not just small companies that sometimes use this law.  Even big companies like Google have been known to shelter behind the protection that the eCommerce directive, or the mere conduit rule, can give.

>> MODERATOR:  Thank you.  Yes?

>> AUDIENCE:  Hello, my name is Mohamed. I am representing two NGOs from India.  I want to know how advanced we are in the darknet arena.  Thank you.

>> AMY CROCKER:  On behalf of the hotlines we work with: they don't work on the darknet, and this is an area that is navigated and investigated by law enforcement organizations. I don't know if the Internet Watch Foundation is doing anything there, so I wouldn't want to speak to that.  But what I am saying is that there are huge challenges on the horizon, well, they are with us now, about how we deal with the migration of content to more difficult to access and highly encrypted areas of, you know, the open web and the dark web.  So I don't have an answer for you; I can't speak to that on behalf of my organization.

>> SUSIE HARGREAVES:  CEOP did a recent analysis and found that content was not increasing on the dark web; it's remaining very static.  I have to say we do work on the dark web, but only in relation to what's in the public domain, anything that's out on the open Internet.  You will often find content on the dark web that links to an image on the open Internet.  So that's the relationship we have; but in terms of investigation and hosting, that goes to the police, and the same would apply.  It's a police matter, not for us.

>> JOHN CARR:  Nobody knows the truth about the dark web or the darknet, because by definition it's dark.  What we do know is that the vast majority, the overwhelming majority, of people using the Internet are using the Internet that we are talking about here.  So if we were to allow fears about the darknet and the dark web and the challenges they present to take centre stage, we could all just go home now, go to bed, pull the duvet over our heads and say goodbye to the world, because nobody has an answer, and this is not just in the space of child abuse images.

Look at Jihadi material, look at drugs, look at pharmaceuticals.  I was at the ICANN meeting in Dublin the other week.  99.7% of all pharmaceuticals sold over the Internet are sold illegally; only 0.3% are being sold legally.  So it's not just in this space that we have this problem, but we work in this space, so we have to do the best that we can in it.  And I still think the open Internet is the massive bit.  Nobody has got the answer on the dark web.

>> CAROLYN NGUYEN:  Susie can add to this, but we have a pilot program working with the Internet Watch Foundation, Google and the Child Exploitation and Online Protection Centre, but it's only to remove pathways to the dark web.  I agree with John's comment, but I wanted to highlight that.

>> MODERATOR:  Just a quick comment: as a matter of fact, if you review the assessment reports published by major law enforcement agencies, and one from Europol was released a few weeks ago, one of the challenges or threats that they identify is the dark web.  We are all struggling with this.  The lady who raised her hand, please.

>> AUDIENCE:  Thank you.  Can you hear me?  Thank you.  My name is Mary Udomon from Nigeria.  I have listened to all of the presentations, and I find that the focus is only on pornography, child pornography.  In my environment we are talking about other things.  We are looking at the recruitment of children online for terrorism, for drugs and other things.  I'm wondering whether this coalition is looking at those aspects, or whether you are just looking at the sexual abuse or pornography aspect of child protection online?  We are really concerned that children are being recruited online by jihadists, by terrorists, and what can this group do?  Is there any aspect of the work that looks at that?

I'm also interested in the tool: can it also be made available, or can it also work, when you want to investigate the recruiting of children online for terrorist acts, for drugs or by jihadists?  Thank you.

>> CAROLYN NGUYEN:  Thank you very much for the comment.  Let me start with the second question first, which is about the capabilities of PhotoDNA.  How that works, essentially, is that we look at an image; so there has to be an image there.  And then what we do is give back a calculation of certain characteristics of the image.  But we don't do anything with the actual content itself, I just want to make that clear.  It is a mathematical calculation over the content of the image.

And the idea then is that if a similar image, or a slightly modified image, is found somewhere else, you can use this key, essentially, to run through the database and identify images that come up as similar.  So in a nutshell, that's how PhotoDNA works.  You have brought up an interesting question for us, which is whether there are capabilities to identify that an image is being used specifically for terrorist recruitment; I don't really know.  To your point, the technology would work the same way, but we would need to rely on you or on law enforcement organizations to help define those characteristics.  Does that make sense to you?
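In the spirit of that description, here is a toy perceptual-hash sketch: a signature derived from an image's characteristics, matched by similarity rather than exact equality, so that a slightly modified copy still matches.  Real systems such as PhotoDNA use far more robust features; this average-hash example, with made-up pixel data, is purely illustrative.

```python
def average_hash(pixels: list[int]) -> int:
    """Each bit records whether a pixel is above the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

original   = [10, 200, 30, 220, 15, 210, 25, 230]   # toy 8-pixel "image"
brightened = [p + 5 for p in original]              # slightly modified copy
unrelated  = [100, 101, 99, 102, 100, 98, 101, 99]  # different image

h0, h1, h2 = (average_hash(p) for p in (original, brightened, unrelated))
print(hamming_distance(h0, h1))  # 0: the brightened copy still matches
print(hamming_distance(h0, h2))  # 3: the unrelated image is further away
```

Matching then becomes a threshold decision on the distance, which is why a modified copy of a known image can still be flagged while the signature itself reveals nothing about the content.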

And to your first question, with respect to terrorist content: as a private organization, we cannot make the determination of what is inappropriate; that is a law enforcement question.  So what happens is that when we get complaints about content on our sites, either as a result of our search services or content actually uploaded into a Hotmail or SharePoint account, we have to report that to the relevant organizations, and it is up to them to determine whether that is inappropriate content, and we remove it accordingly.  We cannot make that determination on our own.  So we work very closely with law enforcement agencies and others on that.

>> MODERATOR:  To your first question: this session was specifically designed to discuss the use of data repositories for combating sexual exploitation of children online.  Having said that, it doesn't mean we are not aware that the Internet and technologies are being used for other illegal ends, such as recruiting kids online and grooming them.  There are many, many ways technologies are being used to exploit and recruit children.

But it wasn't the focus of this session, and it could be that some of our members are actually working on that.  I'm not aware of it, but maybe some of our members are addressing this issue.

>> AMY CROCKER:  The hotlines within the INHOPE network, in addition to dealing with child sexual abuse material, which is the unifying factor of the network, also take reports on other categories, and some of them do take reports of suspected terrorism, to use a very catch-all term.  So it is possible, and it can be the role of a national reporting mechanism, a national hotline, to take those kinds of reports.  But, you know, just to reiterate what Microsoft has said, it has to be done in very close collaboration with the relevant authorities, because there is a legal judgment to be made about what is legal and what is not, and also about what the threshold for concern would be.

So it is possible, and some of our members do, but it's not something that INHOPE has detailed knowledge about.

>> MODERATOR:  Thank you.  Yes, ma'am.

>> AUDIENCE:  Thank you very much for all of the answers you have given, but the truth is that I am bringing this up for people to think about.  Should we only concern ourselves with the sexual abuse of children and not think of the broader protection of children?  I don't know whether there is another workshop purely on child protection online that goes beyond just sexual abuse and considers the things that those of us from countries that are very, very vulnerable to this are facing.  So that's why I'm raising this.  I'm sorry if I'm off point, but it's a concern.  Thank you.

>> MODERATOR:  Thank you very much.  We appreciate your comment.  Please, John, do you want to reply?

>> JOHN CARR:  I work with children's organizations like Save the Children in Britain and the NSPCC, and they are very, very engaged with exactly the points you are raising.  It's simply that this workshop is focusing on one particular aspect.  But when you look at all of the issues going on around the world, the things you have mentioned are certainly very much part of the agenda of the children's organizations that I work with.  And probably some of them are also working in Nigeria, because they work internationally, not just in the United Kingdom, so, yes.

>> MODERATOR:  We have time for one more question, if there is any?  Yes.  Can you introduce yourself, please?

>> AUDIENCE:  Good morning, I come from the Ministry of Science, Technology and Telecommunications of Costa Rica.  I don't exactly have a question.  I just wanted to share with you that the Internet Governance Council of Costa Rica identified some key ideas to be supported here at the IGF.  One of them is that the Council supports a model that guarantees privacy and security.  We advocate for a model that ensures privacy and security within universal access schemes, and in the safety area the priority must be the safety of children.  That's the position of the Council.  Thank you.

>> MODERATOR:  Thank you very much, Carla.  I'm familiar with the work that the Costa Rican Government has been doing in the past and is doing now, and I think they are demonstrating that they are ahead of some other countries in the region; we really appreciate it.  Having said that, I would like to announce that, as I said previously, there will be another workshop tomorrow.  It's called multi‑stakeholder solutions for youth-produced sexual content, and it will be at 4:00 p.m. in room 5.  So for those of you who are interested in self‑produced content by youth, please feel free to come and attend.  I would like to thank you all for attending today; I hope it was interesting, and that will be it for the moment.  Thank you very much, and thank you to the panelists and the speakers.


(Concluded at 10:30 AM).