FINISHED - 2014 09 03 - WS206 - An Evidence Based Intermediary Liability Policy Framework - Room 5

FINISHED COPY

NINTH ANNUAL MEETING OF THE
INTERNET GOVERNANCE FORUM 2014
ISTANBUL, TURKEY
"CONNECTING CONTINENTS FOR ENHANCED
MULTI-STAKEHOLDER INTERNET GOVERNANCE"

03 SEPTEMBER 2014
16:30
WS 206
AN EVIDENCE BASED INTERMEDIARY LIABILITY POLICY FRAMEWORK [CB]

 

 


***
This is the output of the real-time captioning taken during the IGF 2014 Istanbul, Turkey, meetings.  Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the session, but should not be treated as an authoritative record. 
***

>> MODERATOR:  Good afternoon, we are going to make a start.  So welcome to this workshop on an evidence-based intermediary liability policy framework, organized by the Centre for Internet and Society, India, together with the Center for Internet and Society at Stanford, and with a multistakeholder panel of experts.  
My name is Jeremy Malcolm and I work with the Electronic Frontier Foundation.  EFF wanted to be a partner in a new project to develop some principles on intermediary liability that several panelists are also a part of, and we'll introduce that in due course.  There have been a range of recent projects to gather and present evidence to inform intermediary liability policy, and you'll be hearing about a number of those from this panel.  
So why is this an important area of Internet policy?  Because online intermediaries are a part of our online communications.  Although the Internet does tend to support peer-to-peer communications which don't have intermediaries, in reality, the value that intermediaries add makes them, in practical terms, indispensable.  
And this has made intermediaries among the biggest online businesses.  Apart from the economic footprint of their own operations, intermediaries have the potential to facilitate economic growth and innovation throughout the economy by organising and facilitating access to information, data and user generated content.  
But the role that intermediaries play is one that directly impacts on Human Rights, including both civil and political rights, such as the right to freedom of expression and freedom of association, and also economic, social and cultural rights, such as the right to education and the right to participate in the cultural life of the community.
That is the focus of this workshop, which looks at how the different liability rules that apply to various categories of intermediaries in various jurisdictions impact the Human Rights of users both within and outside of those jurisdictions.  The aim of this workshop is to come to a more in-depth and rounded understanding of what the different classes of intermediaries are, how they differ functionally, and whether their differing roles should affect their responsibility with regard to the protection of Human Rights.  
So we will be hearing from a distinguished panel of experts from across all the stakeholder groups, who will be presenting and discussing ongoing research on this topic, or just giving their perspectives.  
So, in the order in which they will be presenting, we have Martin Husovec from the International Max Planck Research School for Competition and Innovation; Giancarlo Frosio, who is an intermediary liability fellow at the Center for Internet and Society at Stanford Law School; Nicolo Zingales, Assistant Professor at Tilburg University Law School; Gabrielle Guillemin from Article 19; Elaina Hickok from the Centre for Internet and Society, India; Francisco Vera, advocacy director from Derechos Digitales; Titi Akinsanmi, policy and government relations manager at Google for Sub-Saharan Africa (I'm not sure if Titi is here yet); and Malcolm Hutty from the Internet Service Providers' Association.  
So if you are attending this workshop, hopefully you've read the background paper.  I'm sure everyone reads the background papers at the workshops they attend.  If not, you can download a copy of it from the website, and it is worth doing.  There's also a list of questions there which have been set out as possible discussion starters, and I'm not going to read out those questions.  But if you do have a chance to look back on the IGF website, you can read those for yourself there.  
We also have some people who may be contributing from the floor, including those from the Association for Progressive Communications.  So a packed lineup.  So without further ado, it just remains for me to pass on to Martin.  Thank you very much.
>> MARTIN HUSOVEC: This is on?  Okay.  Hello, everybody.  Can you hear me properly?  Hi.  Thank you, Jeremy.  
I'd like to talk today a little bit about my research, but I'd like to present a little bit of it in the form of a story.  Back in the 90s, many jurisdictions developed a mechanism which they considered to strike a certain balance between, on the one hand, the return on investment of the right holders, and on the other hand, the public interest in the Internet as a public space that is conducive to competition and innovation.  This compromise was embodied in a framework of safe harbors for different kinds of services; something will be said about that today.  
What I'm going to talk about is something different.  Two decades later, many states are trying to revisit the social contract.  The claim is that a technology in the meantime changed the equilibrium.  The hope is that if we empower the courts with the possibility to impose state of the art measures on intermediaries, we will be all better off.
So let's have a brief look at Europe.  In Europe, law-abiding intermediaries, Internet service providers, even those who are covered by safe harbors, are nevertheless increasingly sued in private lawsuits by intellectual property rights owners, who ask them to assist the right owners in enforcing their rights.  The claim is not that they are liable.  The claim is that they should help the right holders to improve their situation by employing certain measures.  So the line of liability is no longer what matters.  It is accountability without liability.  This line is crossed by the instrument of injunctions.  So injunctions serve the aim of increasing exposure to accountability.
So even intermediaries who diligently deal with the content and who did nothing wrong can still be subject to forced cooperation for the benefit of right holders.  This could be bad or good.  So, let's have a look at a story which I think illustrates the problem nicely.
In 2010, the German Federal Supreme Court was hearing a dispute about open Wi-Fi networks.  The question was, should private owners who open up their Wi-Fi password protect it in order to help the right holders, or leave it open?  The Court did a balancing exercise.  Economists would probably see it as an exercise in increasing or decreasing social welfare.  
So there were interests at stake and their respective costs.  Looking at private users, it was seen as a negligible burden for private users to password protect their Wi-Fi, and even for security reasons it might be a good idea.  On the other hand, for right holders, the Court thought there would be a better chance that in the future they will enforce their rights, plus that some infringements won't happen because some people wouldn't have any means to conceal themselves.  So the German Federal Supreme Court as a consequence decided that the Wi-Fi had to be password protected, and rendered a rule which basically said that in that particular case, the individual had to protect the Wi-Fi with a password using certain technology.
The problem was that not even a few weeks after the ruling, the public started asking: what is the scope of this decision?  Does it also apply to commercial users like shopping malls?  To municipalities who want to open a Wi-Fi network to the public?  Or cafés?  Is there any time limitation for this rule?  Can we revisit it in the future?  Is there any room, any hope, for that?  That all became an open question.  
As a consequence of this, the German landscape of open Wi-Fi started lagging behind the rest of the world.  Even Der Spiegel, a national weekly magazine, ran an article which basically lamented this development, reviewing certain sources and coming to the conclusion that there is a problem and that people see it as a problem.
It may be that the Court didn't mean it that broadly, and the question could still be raised whether what the Court really meant in that particular case is what it became in society.  But the problem is that innovation, at the least, struggles to get through, because open Wi-Fi in the meantime became an important tool for competition, but also for different kinds of innovation that the Court didn't think of at that time.  
So did the Court do something wrong?  Well, the exercise of reallocating existing resources really should have made everybody better off.  But it didn't.  And it didn't for exactly the reason why inventing a car does more for welfare maximization than any sophisticated use of horsepower.  
So the Court looked at a snapshot of Wi-Fi and the measures existing at that point, and concluded that reallocating existing resources, in the form of protecting the Wi-Fi, would actually be a good idea.  But the problem was that any invention that is based on open Wi-Fi will, crucially, struggle to get through, and such an invention could increase social welfare far more than any reallocation of resources achieved by protecting the Wi-Fi.
So, to wrap it up, the ambition that the courts will be able to set up a more efficient balance for us by means of the injunctions which will be sought is very ambitious.  It's very ambitious because the courts often do not have the information, the society does not necessarily benefit from open-ended accountability of intermediaries, and the cost of uncertainty can be very high.  Technology might temporarily change the equilibrium, but the same technology can restore it.  Consider filtering capacities, which increase the capacity of Internet service providers to detect and enforce rights; at the same time, the same technology can also assist right holders before, for instance, they file a notice.  
So this was something that is happening mostly in the European Union these days, with injunctions that can be issued against those who do not infringe or are not found liable.  But this is spreading further, spreading to other countries.  Other countries are inspired.  So the European framework of accountability without liability, I think, needs a reform, and that is exactly what I work on.  If you have interest, you can look at www.accountablenotliable.org and when the research is done, it will all be out there.  Thank you.
>> MODERATOR:  Thank you very much, Martin.  And Giancarlo is going to follow up.
>> GIANCARLO FROSIO:  Hello, everybody.  So, my contribution will be more brief, and what I will try to do is to tell you something about an evidence-based approach to policy that we have been carrying out at the Stanford Center for Internet and Society.  
As you may have understood, and as has already been hinted at, the key question of the intermediary liability conundrum is about consistency; it is about the necessity of adopting clear definitions, because the regimes differ too much throughout the world.  Intermediaries cannot cope with these differences, and the side effect is that user rights are hindered as a result.
What we have done at CIS, in order to come up with an evidence-based approach to intermediary liability policy making, is to create what we have called the World Intermediary Liability Map.  We have mapped legislation, case law, and additional resources about intermediary liability throughout the world.  So far we have covered about 70 countries, and the goal is to have coverage of each jurisdiction in the world, so that this information may serve as a basis for any type of policy making in the field.  This is step 1 of the project, and we want to proceed further, still looking closely at the necessity of having clear definitions in the field of intermediary liability.  
What we are trying to do right now is to come up with a new website in which we will include the information we have collected so far, and we will try to aggregate that information in a better way.  To come up with that better aggregation of information, we are trying to develop a taxonomy of what intermediary liability is about.  
And so we are trying to define clearly who intermediaries are; what subject matter is relevant for intermediary liability; which legal instruments are used to regulate intermediary liability, which can span from legislative to administrative (and we have seen increasing use of administrative tools to tackle intermediary liability issues) to self-regulation; and the measures that have been applied in practice to deal with intermediary liability.  The work should then go in the direction of understanding which types of liability are out there in each jurisdiction.  
And so, the scope of my brief talk here today is to tell you what we have done, what we plan to do, and that the creation of the taxonomy could be done in a collaborative way.  So we are coming up now with this taxonomy in order to define clearly what the intermediary liability conundrum is about, and we'd like to open this process to collaborative participation.
There are many stakeholders at this table and in this room already working on similar projects, and so it is the moment to join forces and try to come up with a common understanding of what intermediary liability is about, in order to provide legislators and policymakers with that evidence so they can come up with meaningful provisions to regulate the matter.  And this is all on my slide.
>> MODERATOR:  Thank you very much, Giancarlo.  So we are sort of going in order of stakeholder groups.  We are sort of starting with the academics, and now we'll move into civil society and then into the private sector.  So next on the list we have Nicolo Zingales, Assistant Professor of Law at Tilburg University.
>> NICOLO ZINGALES: Thank you.  Hello, everyone.  
So what I wanted to tell you about today is basically some of the research that I have been doing, and also what I think are the important next steps in this space.  
I have worked on research on the notice and takedown procedure in particular: looking at the systems that are in place in different countries to respond to takedown requests, and the extent to which users are involved in this adjudication that intermediaries are sometimes required to undertake when they are asked to resolve these requests.  
So the key issue is that we need more user involvement in this space, and we need review of several pieces of legislation.  We have a paper with the Association for Progressive Communications that is posted online, which tries to identify some best practices from several intermediary liability regimes that could apply to the African context.  
And some of the recurrent issues that came up were the fact that, essentially because of the way the notice and takedown regime was structured, the content was immediately and presumptively taken down, and the user whose content had been taken down was only given, at a subsequent stage, the possibility to have the content restored, to put it back.  
This is problematic from two perspectives.  One is that there is no due process or no right to be heard.  And this creates a sort of presumption of guilt.  And then the second issue is that the freedom of expression can be hindered in this way.  So, essentially, it is against two principles that are well recognized by the human rights treaties.  And the question is, how can we enhance the participation of the users in this space?  
So, some of the regimes around the world provide for a notice and notice system, which allows the users to be informed and allows them to interact before the content is taken down.  But improvements can be made also where this is not currently the case.  
So, there are several ways to involve the users.  One is that in the notice and takedown procedure, a standardized form could be provided, so that copyright owners or other rights holders requesting the takedown need to address in advance the specific defenses which could be raised by the users.  In this way, they will need to fill in the blank spaces and say why those defenses do not apply.  
Another common safeguard that is used to protect against this issue is to adopt a strict misrepresentation sanction, applying not only when there is a misrepresentation but also when there is foreseeability that a defense could have been raised or could have been valid.  In this way, knowledge that the content might have been legal can be inferred, shifting the presumption in favor of the users.
A third solution that has been suggested is the use of a reverse notice and takedown procedure, which functions in the following way:  A user who wants to utilize certain content that is protected by copyright, and by a technological protection system which impedes access to such content, can notify the copyright owner that he would like to make a public interest use of this work.  Then, unless the copyright owner replies within 14 days explaining why this public interest use is not warranted, the user would be entitled to circumvent the technological protection system without being found guilty of violating the specific provision of the copyright law that condemns such circumvention.  
So this was a specific example about copyright, but there is the need for more involvement of the users in other areas as well.  And in general, the issue is whether multistakeholder cooperation in this space is sufficient or adequate to enable the users to have a say and enter as a party to the social contract, essentially.  
And on this, there are contrasting pieces of evidence.  For example, a case in point is in the U.K., where there was an attempt to reach an agreement between the copyright holders, basically the content industry, and the ISPs regarding the costs of the filtering that would be imposed according to a judicial procedure; you still need to decide how to spread those costs.  And this was supposed to be settled in a multistakeholder fashion, involving users as well.  But it was so difficult to reach an agreement that the process was eventually circumvented by an agreement reached on a separate basis between the ISPs and the content industry, which essentially nullified the whole multistakeholder process that was supposed to be followed.
In this case, multistakeholder cooperation didn't play a positive role, or at least was not able to deliver an effective agreement in a short time period.  However, there are other areas where we are hopeful that multistakeholder cooperation will play a major role in the future, and this is what we want to do with the Dynamic Coalition on Platform Responsibility.    
I just want to put in a plug for something that we will do tomorrow.  It will be the first meeting.  Essentially, with this Dynamic Coalition we want to analyze Terms of Service and the extent to which companies and online platforms are using terms of service provisions that allow compliance with basic Human Rights, particularly due process, freedom of expression and privacy.  
In this regard, I want to invite you to join a workshop in the morning from 9-10:30 about platform responsibility, and then in the afternoon from 2:30-4:00 we will have the first meeting of this Dynamic Coalition.  And essentially, the objective is to have a multistakeholder dialogue involving platforms but also government representatives, civil society organisations, the private sector, et cetera, to define what possible model contractual clauses could be adopted in order to secure best efforts in respecting Human Rights by these platforms.  So, maybe we can expand on how this can be done at a later stage.  For now, that is all from me.
>> MODERATOR: Thank you very much for that.  And moving on again, Gabrielle.
>> GABRIELLE GUILLEMIN: First of all, I'd like to present my organisation, Article 19, for those who are not familiar with it.  It's an international free speech organisation based in London.  What we do is develop international standards and get involved in court cases, and we base our work on international standards on freedom of expression.  
What I'd like to talk to you about today is our work on Internet intermediaries.  You can find a copy of this work on our website.  It's called Internet intermediaries: Dilemma of liability.
First of all, we looked at different types of intermediaries, and we determined that there were four that were important.  First, there were Internet service providers, but then we saw immediately that there was some confusion, because sometimes the term ISP was used to describe a number of Internet intermediaries.  So we defined them as access providers, the ones who give you access to the Internet.  
Then, we looked at the category of host and determined that we would define them as those who control the website or web page which allows the parties to upload or post material.  
Then, we looked at the other Internet intermediaries in the Internet ecosystem and found social media platforms and search engines, and saw they had different functions, but that sometimes, depending on the courts or on the liability regime, they were considered as hosts.  So for example, a platform such as Facebook or Twitter would be qualified as a host.  And search engines similarly, depending on the qualification and the countries and jurisdictions, could also qualify as hosts, even though what they do may not necessarily be just offering space for third parties to upload content.
In looking at the taxonomy, we were also very conscious of the fact that when you think about Google, a lot of people think about Google as search, but they also provide other services, so different categories may be applicable to Google as a company.  
Then we looked at the different types of intermediary liability and basically found three models.  At one end of the spectrum we had the strict liability model, which you can find in countries such as Thailand or China, where effectively Internet intermediaries are required to monitor content because otherwise they face criminal penalties or the withdrawal of their business license.  At the other end of the spectrum, we have a court-based model where, in order for content to be removed, one has to go to a court with a complaint, which is essentially the approach of the Communications Decency Act in the U.S.  In fact, this model has been very much endorsed by the U.N. special rapporteurs, so that was also of great interest to us.  
In between, we found the safe harbor model, which is essentially a conditional immunity system whereby, in order to get immunity, intermediaries need to fulfill certain conditions.  Sometimes it is only for certain types of content; for example, in the U.S. you would be familiar with the Digital Millennium Copyright Act.  In other countries, as in the EU, it is a very horizontal model, so Internet intermediaries have, or will have, the benefit of immunity from liability so long as they don't have actual knowledge of illegality.  
The problems with this model, especially under the EU directive, are well known, and I think they have already been touched on.  The problem is, first of all, that it is not very clear what constitutes notice, so very often an Internet intermediary will get the complaint, but it doesn't really explain what the problem is; if it's a defamation claim, why it is defamatory or whether any defenses apply.  So, there are problems around that.  And very often, there was also a complete lack of clarity around what actual knowledge meant, not just in terms of receiving notice but also the legal basis.  For this reason, we tried to think about approaches that could improve the notice and takedown regime.  And then we found that some countries, such as Canada, had a notice and notice system.
So, I think on some level Nicolo already told you about it, and the idea behind it was that instead of going straight to the intermediary for the content to be removed just upon notice, effectively, the idea was to place the dispute back in the hands of the person complaining about the content and the person who wrote the content, rather than the intermediary.  
Because we thought, and this has been touched on already, that Internet intermediaries were not best placed to make these judgment calls.  Very often they have to determine whether content is defamatory or a copyright infringement.  We thought it wasn't appropriate for a private company to make these determinations, and this was very much in line with the international standards that we found on this issue.
So just a couple of points, but you can read more details about the policy on our website.  A couple of aspects we thought such a notice regime should have: a more detailed notice to begin with, explaining what the complaint is, where the offending material is located, and so on.  We also thought that it would be useful to have perhaps a fee, in order to limit the number of claims that are made.  Because very often, as we have seen in the notice and takedown regime, it is very easy for people just to complain about something, and the content gets removed because otherwise the intermediary might be fixed with liability, and so that gives rise to abuse.  And especially under the EU system, there are very few safeguards against abuse.  So, our thinking was that a small fee might help fund a system of review and also limit the abuse.
We also looked at what might be desirable in relation to criminal content.  The main point here for us was that, generally speaking, the most effective way of dealing with content which may involve criminality is to involve the authorities.  Because ultimately, although it may be less infringing of free expression to have the content removed rather than a prosecution, at the end of the day, if the content does amount to criminal activity, the best course is to complain to the police and let them investigate and see whether or not it is worth pursuing.  So I'll stop here, and we would very much welcome any further thoughts from my fellow panelists and members of the floor on this issue.
>> MODERATOR: We'll be taking some interventions from the floor a little bit later on.  I'm interested to hear what the representative from Google has to say about some of the suggestions about notice and notice and fees for takedowns and so on.  Titi from Google is in the audience, and she is going to participate a little bit later.  Before we move on, we do have a couple more speakers to get to on the panel.  Elaina Hickok from the Centre for Internet and Society, India, is going to be next.
>> ELAINA  HICKOK:  Hello?  Okay.  Hello, my name is Elaina and I'm from the Centre for Internet and Society, based in Bangalore, India.  We are a policy research think tank.  
I'm going to talk about our research on intermediary liability, research that we contributed to a report being issued by UNESCO.  Tomorrow there will be a session on that report, titled "Fostering Freedom Online: the Role of Internet Intermediaries."  It seeks to understand how intermediaries both foster and restrict freedom of expression online.  
In that research, we studied three different types of intermediaries, social networks, search engines and ISPs, across 10 different jurisdictions and 11 different companies within those jurisdictions.  
Now, a section of that report looks at intermediary liability.  So we studied the intermediary liability regimes across the 10 different jurisdictions and looked at a range of models and provisions ranging from jurisdictions that did not have intermediary liability provisions in place, jurisdictions that were developing intermediary liability provisions and jurisdictions that had developed intermediary liability provisions in place.
We noticed that in jurisdictions that did not have liability regimes in place, it created both procedural and regulatory uncertainty, particularly for international companies operating or providing services within that jurisdiction.  
For jurisdictions that are developing or have recently developed intermediary liability provisions, it seems that there is a trend to cherry-pick best practices from developed frameworks, but at the same time adapt them to the local context based on the content that is prohibited, and also to include additional provisions.  So we are seeing intermediary liability regimes with data retention mandates, with disclosure-to-law-enforcement requirements, and with requirements for reporting cybersecurity threats that the intermediary might find, even within jurisdictions that do have developed regimes.  
I think Gabrielle has talked a lot about the gaps that exist, but we also saw that the content that is prohibited is often broad.  These prohibitions increasingly include terms such as cybersecurity or antiterrorism, and we are finding they are not aligned with constitutional norms in the jurisdiction.  We also saw models that placed the intermediary in the role of the judiciary, having to determine if content is legal or not legal, and sometimes also in the role of the police, proactively monitoring content online.
Across jurisdictions, there was also a lack of due process and a lack of remedy.  Now, the report is not just about intermediary liability.  It looks broadly at content restriction online.  And I think, along with all of these other efforts, it just scratches the surface of the research we need to be doing to understand the evidence that needs to go into these intermediary liability regimes, including how these regimes are impacting users and how users change their behavior online because of them.  Thank you.
>> MODERATOR: When will the report be available?
>> ELAINA  HICKOK:  Hopefully in the next month or two it will be finally published.
>> MODERATOR: Very good.  Thank you for that.  And moving on to Francisco Vera, Advocacy Director from Derechos Digitales from Chile.
>> FRANCISCO  VERA:  I want to speak about intermediary liability and copyright enforcement in Chile, and how that has changed the panorama globally with the trade negotiations.  As you know, the U.S. standard on copyright intermediary liability was set in 1998 with the DMCA, consisting of a notice and takedown regime which was pretty flexible.  You just had to fill in a form with your data and state the content you wanted removed, with something like an affidavit saying you are responsible for the content of that statement, and then the intermediary that received the notice should take down the content or otherwise be responsible for it.  Of course there is the situation in which the content can be restored through a counter-notice.  But that is not the focus of what I will say.
I will pick up from that act, which started ruling in the U.S. around 1998.  In 2004, Chile signed its FTA with the U.S., an agreement in which Chile committed to have an intermediary liability regime on copyright.  The implementation of that obligation stated in our FTA ran from 2007 to 2010, when our copyright law was finally passed.  It contains an interesting rule on intermediary liability, which is judicial only.  What that means is that if you want to take down content in Chile from an intermediary, you must go to court and ask for an injunction, which the judge should analyze and grant, and then you serve the ISP in order to get the content taken down.  There is also a mixed system in which you can notify the ISP and the ISP forwards the notification to the alleged infringer.  
What happened after?  The USTR, the Trade Office of the U.S., was not at all comfortable with the Chilean implementation of the rule, and they started raising the issue in their report, the report in which the U.S. determines which countries are not fulfilling its standards of intellectual property protection.
There was also another consequence: the drafting of a new international agreement called the TPP, the Trans-Pacific Partnership Agreement, in which they are now demanding that these intermediary rules not be judicial at all.
And what are the consequences here?  That's the important part.  What the U.S. Trade Office now wants is to export the DMCA to 11 other countries without any evidence.  I mean, without any evidence supporting that that sort of rule is actually beneficial for the industry or, more importantly, for the public interest.
The only statistic we have from the studies is that the number of notice and takedown requests in the U.S. has been scaling almost exponentially.  From 2012 to 2014, the volume went from something like 1.2 million to around 10 million monthly.
So what we have now is a different scenario than the one that existed in 1998.  In 1998, there was no massive search engine with the capability of indexing almost every piece of content in the world.  There were no user-generated media platforms like YouTube or other social networks that work with third-party content.  Now we have those.
But we also have automated mechanisms that crawl the Internet looking for this content and send out these notices automatically.  So in 1998, there was a system where you made a statement saying content belonged to you and you needed it removed; now, in 2014, we have bots crawling content and sending these notices automatically, and from the transparency reports we know they receive around 10 million a month, while many other different ISPs keep receiving these notices as well.
What happens now?  There is no evidence of the success of this system for the public interest.  In fact, there is evidence on the other side: any new Internet service that works with third-party content will, from its first day of operation, be subjected to hundreds of thousands of notice and takedown requests sent by automatic systems, without anyone even considering whether there is actually an intellectual property problem.
These are now being used as an anti-competitive mechanism to discourage a new webpage or service from entering to compete, from entering to provide services on the Internet.  So a system that was born in 1998 has now turned into a mechanism to disallow competition, to disallow new services on the Internet, and that is not serving the public interest.
I'm not saying this is the fault of Chile.  We implemented the system in a manner which is consistent with Human Rights, especially in the Americas, where our human rights system contains a specific prohibition of censorship.  And this system, in the form outlined, with informal notification and takedowns, is not consistent with that instrument.  And there is a huge silence from the Inter-American human rights system on this issue.
Just to note: while many people complain about the right to be forgotten ruling in Europe, which is generating thousands of requests, nowadays we have millions of requests per month taking down content from search engines, and many times they are not even well founded.  And all of this, to come back to the theme of this workshop, with no evidence that proves the importance or the service that this system is providing for the public interest.  Not for the economy.  Not for competition.
So what we have now with the TPP is the U.S. Trade Office trying to export its competitive disadvantage in creating new Internet companies to the rest of the countries negotiating the TPP.  There is no evidence-based reason for this, only the specific interests of a few who are probably not even getting that much from the system beyond the capability of having bots crawling the net and continually taking down content.  I'm happy to keep that conversation going.
>> MODERATOR: Thank you very much.  And moving on to the private sector, we have Malcolm Hutty from the European Internet Service Providers Association.
>> MALCOLM HUTTY: Thank you.  I also have links to the London Internet Exchange, my day job, and I am the Chair of the intermediary liability committee.  Both groups are essentially representative organisations for intermediaries.  I'm speaking here in a private capacity.
The first thing I want to address is something coming out of the discussion we have had so far, which is the so-called distinction between liability and forced cooperation.  Now, I'm not a lawyer, but from a business perspective, this is getting awfully close to what my lawyer friends call a distinction without a difference.
The idea of forced cooperation in practice means devolving to the courts the power to make rules that the intermediaries must follow by means of injunction, and then penalizing them if they fail to do so.  That penalty is essentially rather similar in business effect to imposing a direct liability, the main difference being that we are at least protected from the rather extravagant estimations by the audio/video entertainment industry of the value of their product.

So what is the idea behind this?  It is an incentive mechanism.  There is bad stuff on the Internet, and the idea is that if you place this incentive on the intermediaries, they should deal with it; they can make it their responsibility.  Why?  Well, there is an idea that we are very clever people, that we can produce grand technical solutions to society's problems.  In particular, the expectation is that we will apply our usual approach to building services for people and produce capital-intensive rather than labor-intensive, highly scalable systems.
What this actually means, what is expected particularly by those who want us engaged in preemptive regulation of unwanted or undesirable content, is that we should automate the problem.
An increasing part of our lives is conducted online.  With the growth of e-commerce and in particular the rise of social media platforms, this means an increasing part of our lives is conducted through intermediaries, which means that really quite a lot of what you do is exposed to what could be these automated systems, if we are required to create them.
I'm going to set aside for the purposes of this discussion questions of technical reality, what's actually technically feasible and so forth; firstly because it is not that helpful and doesn't add that much value to this discussion, and secondly because what is not possible today might be possible tomorrow, at least to a greater or lesser extent.  But let's look at what the implication of more of our lives being conducted through intermediaries means for dealing with the bad things that happen.
This social change is essentially positive or negative according to your perspective.  It's positive because it exposes vast areas of people's lives to regulatory enforcement, which enhances the power of the state to implement public policy in the public interest and have it executed.  And it is negative for essentially exactly the same reason, depending on your perspective.
Let me give you an example.  I think many of the people in this room will be familiar with what is known as the Twitter joke trial.  This was an occasion in the U.K. where there was a threat made against an airport, essentially a threat to let off an explosive device.  What happened was that a man, having gone to a small regional airport called Robin Hood Airport and come back from it, tweeted, and I don't have it in front of me so I'll try to do it from memory: "Robin Hood Airport is closed.  You've got two weeks to get your stuff together, or I'm going to blow this place sky high."  This was followed by three exclamation marks.  Now, he was prosecuted for making threats.  He was convicted, and after several layers of appeal, the conviction was ultimately quashed.  This case was significantly criticized on the basis that any reasonable person could have seen this was a joke and shouldn't be taken seriously as a threat.
So far so good.  To me, the significance of this is that in any off-line context, this never would have gone anywhere near a court, not because we would have judged it a reasonable thing to have said, so that you should be allowed to get away with such things, but because nobody would have noticed.
This is what I mean by the increasing impact of our lives being conducted through intermediaries.  If we were to start automating, on a massive scale, the enforcement of the things you're not allowed to do, then even supposing we were able to do it, those sorts of decisions would be made by machines, and things that would simply have slipped below the radar, however wrong they might have been, would suddenly become part of an effectively enforced regulatory regime.
So this really means that whether you like the idea of asking intermediaries to build big systems to regulate unlawful conduct depends on what your primary concerns are.  If you're principally concerned about pedophiles coming over the Internet to get your children, about copyright piracy, extremism online, and people spreading untrue rumors, then you will probably think this is a great idea.  If, on the other hand, you are greatly afraid of the overwhelming power of the state and big corporations and feel that people are already far too exposed to power, you'll probably be terrified of it.
Before you make up your minds, I'd like to give an industry perspective as to how we are likely to have to approach the problem.
    
We have natural commercial imperatives.  What will be the things we have to take into account if we build any such systems, however automated or however procedural they are?  Firstly, we'll look for them to be cost effective.  We are businesses and have to run what is a practical economic proposition.  Certainly they'll need to be efficient and legal.  More than that, in most cases we will look for them to be respectful of government interests, where the government demands that we engage in this kind of activity and that we restrain unwanted behavior.
There is a difference between political and legal cultures as to how far intermediaries will currently go beyond what is legally required of them to restrain their users' behavior, to do more in their terms and conditions because that is what is wanted at the policy level.  In some countries, providers are willing to go a long way; in other countries, much less so.
What we are unlikely to prioritize is extensive fact finding in individual cases.  We are unlikely to prioritize detailed examinations of novel and complex areas of law.  And we will find it difficult to strictly apply complex, qualitative and subjective judgments about how legal qualifications are intended to apply so as to protect the rights of individuals in particular cases.
Let me give you an example of how this has already worked out in practice in the United Kingdom, where I come from.  A few years ago, the entertainment industry, basically Hollywood, brought a legal case against ISPs in the U.K., asking the Court to require us to block access to certain copyright-infringing sites.  I see representatives of a couple of the companies in the room.
The site in question had already been sued by Hollywood for copyright infringement.  A finding had been made against them that they were the prime infringer of copyright and the damages awarded against them sent that site into bankruptcy.  
This site should have disappeared.  But it re-materialized: I'll have to be careful speaking in public, but frankly, individuals stole the code from the company so they could resurrect it as a phoenix company elsewhere.  Whatever you think about copyright regulation, this is the most grotesque example of a scofflaw that you can imagine, and the court was highly unsympathetic to the site in question the second time around.
So an order was made against the providers, Hollywood won, and the providers were ordered to block access to that site.  And the Court was very clear in what it said: it was granting this measure because it had already been tested and seen that this site was completely illegal.  They had every opportunity to defend themselves.  A finding was made against them.  They subsequently acted in complete disregard of the law.
We would not be making such orders, it said, if these things were not the case.  Subsequently, Hollywood came and asked for orders to block The Pirate Bay.  And the reaction of the providers, understandably, was that we are not willing to engage in continued, extremely expensive litigation in defense of people we have no interest in defending.  So those applications against The Pirate Bay were made and not defended by the ISPs.
Now, there was no finding that The Pirate Bay was an infringing site in the U.K., but the Court said, well, there is nothing in the U.K., but there has been a finding against them in Sweden; that will do.  So the order was passed against them.  And the next thing that happens is that Hollywood comes back to court with another long list of further sites said to be copyright infringing, against which there is no finding of primary infringement anywhere in the world.  But again, for reasons of cost effectiveness, and because it's not our job to be defending other people's rights, there is silence from the ISP benches as to whether or not an order should be made, and such orders are made.
So we have moved from a position where the courts said it is essential that there be primary findings in individual cases to one where courts are routinely turning out long lists at the behest of plaintiffs, with essentially nobody on the other side.  Because of the imperatives.  The natural commercial imperatives.
Now, I should say this is not a criticism of my industry.  If you don't like this, remember that ensuring things are regulated properly is a social function.  It is not one you can reasonably expect to be left to profit-making entities.
The economic consequences of this for the industry, however, are significant.  Certainly it makes it harder to invest in new types of online services that are likely to be targeted for specific enforcement measures, and that are likely to be subject to liability or to the so-called forced cooperation regime instead.
And the other thing that is particularly expensive for us is that, when we are required to engage in some sort of solution, while the cost may be recovered at any given point, what we have found is that the real cost arrives further down the line.  We implement a solution that has a certain cost; so be it.  But when we then need to change our networks or systems, we can't just throw that solution away, because it is court-mandated or legally mandated.  We therefore have to design around the old system, and so the enforcement mechanisms gradually accrete into mechanisms we have to keep complying with as we try to develop new systems, to extend broadband to the next billions or whatever it may be.
Essentially, this means that the costs can be quite high further down the line, and it retards our ability to bring new products to market.
So those are the issues from a social perspective, and the economic consequences for our own industry.  Where are we?  I would say at a crossroads.  We have a choice about which approach we prefer.  We either rely more on traditional, essentially court-based, off-line mechanisms for regulating behavior and resolving disputes.  The downside of this is that it will be costly, including to the public purse; it will tend to be slow; and in particular, we will lose a great deal of the opportunity to extend regulation much more deeply into people's lives so as to implement the public interest.
Or we could rely on intermediaries to develop automated systems to control our behavior.  This doesn't have those sorts of problems, but it does have other problems, both in terms of hindering the development of the Information Society and potentially yielding up many of the things that we have traditionally expected as the protections of a free and liberal society.  The choice is yours.
>> MODERATOR: What a great way to end our panel and to go to the audience.  So what does the audience think of that rather stark proposition?  Nick has his hand up.  Do we have a microphone that can go around the room?  We'll start with your comment as soon as we get to a microphone.
>>   Can we have a response?  Because this project overlaps very much with this evidence-based approach, in identifying trends in intermediary liability case law and legislation.  So maybe you would like to introduce it and what you will have tomorrow.
>> AUDIENCE: I'm the manager of the Internet & Jurisdiction Project.  That was a really great overview of the state of the art of intermediary liability and the different projects that are going on worldwide.
Let me stress one thing that you evoked but didn't really dwell on.  A lot of the requests that intermediaries, meaning platforms and technical operators, receive today are increasingly transnational: they are direct requests from courts and public authorities located in other countries that cannot be directly enforced on them.  At the same time, traditional modes of interstate legal cooperation are not adapted to the digital realities of the Internet, where most online interactions involve multiple jurisdictions.
This situation creates a lot of tensions, and you highlighted in your different interventions that there is a general lack of procedural standards for the submission of requests, for the handling of requests and for the making of determinations, and a lack of due process, transparency and appeal mechanisms in that regard.
So we have a workshop tomorrow at 2:30 in Room 2 called "Will Cyberspace Fragment Along National Jurisdictions?" where we will discuss the unintended consequences of the different national approaches currently being adopted, which do not take into account the transnational nature of the Internet.  We will present the state of the global multistakeholder process that is the Internet & Jurisdiction Project, and the draft architecture of a transnational due process framework that is an outcome of this process.  Thank you very much.
>> MODERATOR: Thank you.  Nick?
>> AUDIENCE:  Nick from Queensland University of Technology.  Thank you; that was a really interesting panel.  There is a tension here between efficiency on the one hand and due process and user rights on the other.  I wonder if you could reflect: is this always a zero-sum game?  Is there any way we can have both efficiency and due process?  Whenever we see examples like the judicial process in Chile or New Zealand, rights holders, in copyright in particular, become rather upset with how much more expensive and difficult to manage that process becomes.  Does it always have to be opposed in that way?
>> MODERATOR: Who would like to take that?
>>   On the international side, just one small note.  In my experience, when you have international problems they are mostly due to copyright rather than other content, such as defamation.  In those cases, also, some of the platforms are in the cloud, which means a data center in Arizona.  So most cases end up applying U.S. legislation, and the DMCA in the case of copyright.  So even if small countries have a good system of notice and takedown, it's not as important today in the current setting.  On efficiency, we at Derechos Digitales don't have a position here on (Indiscernible).
It is important to have an efficient judicial system that safeguards rights, but we also need to face the fact that there are other rights in the world.  There are so many other problems, related to real estate and child support and cybercrime.  So when you say we need a really efficient system for copyright to survive: first, that's not entirely true, because the industry has managed to survive with increasing amounts of content distribution, non-legal content distribution.  And secondly, it's as if we were saying copyright comes before every other human right.  If you have a country with a really efficient judicial system for all those cases, I would totally support an efficient system here too, but to keep pushing copyright over every other right, especially through FTAs that have to be implemented and enforced in many ways, I really don't see that as consistent with the public interest.
>> Just very briefly on Paul's point about the transnational aspect.  As regards the content of the requests, I think that is where international standards on Human Rights can be very useful, because they are, in a way, a means of harmonizing the various laws on content regulation.  That would help resolve some of the conflicts over what is legal in one country and not in another by looking at those standards instead.  Similarly, those international standards help in defining the procedural safeguards necessary for these requests to be examined, like the right to notification and so on.
The other aspect, I think, is that a lot of these requests are also likely to be dealt with under terms and conditions, which is something we haven't really talked about and need to bear in mind, because a lot of requests don't even go through that process: the content can simply be taken down under the terms and conditions.
Secondly, on efficiency and due process, I think it is a tough one, because to a certain extent due process is like a thorn in your side.  It is how you safeguard rights, but any procedure will almost inevitably make things lengthy and hard to do efficiently, precisely because you have the procedure.  It's also how you preserve the rights.
That is one of the reasons why, between court orders, which are the preferred model, and notice and takedown, where there is so little process, we thought that notice-and-notice was a good alternative: to at least have some process, to try to have some dispute resolution between the parties concerned, and then, if the matter cannot be resolved in that way, to take it to court.
>> MODERATOR:  Does Google have a position on this?  What is the preference between the notice and takedown regime, which is easily automated, and the regimes that take more of a Human Rights perspective, in accord perhaps with your stated values?  Do you have any remarks?
>> AUDIENCE: I'm not Titi.  She left already.
>> MODERATOR: Okay.  Maybe we'll come back to that.  We also have a remote participant.  Do we?  Do we have a remote question?  Let's take that first and then we'll go to the other people who had their hands up.  Thanks.
>> REMOTE MODERATOR: We have a question from Reinhart, from the European Law Students' Association.  He is asking how the panelists see the first instance judgment of Delfi v. Estonia at the ECtHR, and what we think the outcome of the Grand Chamber decision will be.
>> So maybe I can -- so the question was as far as I understand, what do we think?
>> REMOTE MODERATOR: I can repeat.
>> No, just tell me whether this is correct.  The question is: how do we see, or what do we think of, the first instance decision of the ECtHR in Delfi v. Estonia, and what do we think the outcome will be?
>> REMOTE MODERATOR: Outcome of the grand chamber decision.
>> Okay.  As some of you may know, I personally am very critical of the first instance decision, because I think it sets the wrong incentives for the intermediaries, and setting incentives is an extended way in which the state acts.  So for the future, we should not have the state imposing strict liability on intermediaries in cases where third-party content is concerned, especially in a defamation case, where the bar should be lower than in intellectual property cases.
As for my opinion of what the Grand Chamber will do, I can only say I'm hopeful that it will follow the intervention which I authored on behalf of the European Information Society Institute: that the Court will require, as a human rights standard when it comes to freedom of expression, that no general monitoring obligation be imposed on intermediaries, which means it will outlaw any strict liability when it comes to third-party content.  That means the states will be pushed toward negligence-based or other standards.
And second, I hope it endorses the analysis that intermediaries should be safe during the process of determining whether or not to take down the content.  They should be protected, and the originator of the content should be given an opportunity to defend his case, with action taken only after that.  So that is what I hope the ECtHR will do.  What it will actually do, I don't know.
>> MODERATOR:  Shall we -- the first hand that I saw was Andrew Bridges's.  Do you have a mic?
>> AUDIENCE: Thank you.  I wanted to flag for attention and long-term consideration two types of online intermediaries that I don't think I heard discussed here.  Those are the two types of intermediaries that the Stop Online Piracy Act sought to regulate, in a way that has taken effect even though SOPA failed in Congress.
Thanks to the White House Intellectual Property Enforcement Coordinator, payment processors will go to their customers and say: we don't want to process payments any longer for peer-to-peer software providers or VPN service providers or Usenet service providers, so we are cutting you off.  That is happening with no due process; you can argue with the bank, but there is no process for dealing with it.  They have never had a seat at the multistakeholder table to discuss these processes, like a six-strikes-type policy.
The other is advertising networks, which, because they are dependent on certain very, very large advertisers, will cut off websites from syndication, saying: well, you have made some of our big customers unhappy, and we are cutting off advertising syndication to you until you make them happy.  When you get them to tell us to add you back to the list, we'll add you back to the whitelist; meanwhile, you're on a blacklist.
So these are two types of takedown that are occurring with none of even this informal process in place.  I'd love to see everybody studying these issues wrap them into their work.
>> MODERATOR: Thanks for that.  We already heard about the platform responsibility workshop tomorrow and that may arise as well.  I am going to jump to Henriette because she was going to be a panelist anyway.  So if you would like to make your point.
>> AUDIENCE: Thank you, Jeremy.  Mine is more a process point for going forward, but first, thanks very much to the panelists; it was really interesting to listen to this.  I'm glad the Delfi question came up.  I think it's an example of how Human Rights experts in the judiciary don't understand this context: there is definitely a lack of procedural clarity, but there is also a lack of real understanding of the full range of implications.
Also, I'm not sure if people saw on Twitter that apparently the IGF YouTube live stream is not available in Germany because of copyright bots blocking it.  My suggestion for the next IGF: would it be useful for us to facilitate a kind of NETmundial process, where there is some scoping background information on intermediary liability, and then a gathering of text and of different positions?  And building towards the 2015 IGF in Brazil, coming up not necessarily with complete consensus between the different stakeholders about how to regulate these issues, but at least with some common understanding of what the issues are that are at stake.  Would that be a useful exercise?  That is the question.
>> MODERATOR: Thank you.  And maybe that provides a useful segue into something that I was going to mention, which is that EFF and the Centre for Internet and Society India are announcing today a new project to develop a set of intermediary liability principles.  You can see up on the screen a zero draft white paper, which is the very first public output for discussion.
At the bottom of the first paragraph, you can see a tinyurl.com address where you can go.  If you can't see it, please come up later.  We'll try to leave it up on the screen so you can note it down.
This is not, by any means, a fixed document.  It's the very, very first iteration; that's why we are calling it a zero draft.  But we want to get it out to the community to have your input into it.  A number of our panelists are on a steering committee, as are others in the audience or elsewhere at the IGF, who will be trying to get this into first draft shape at a meeting in October.  Following that, once the first draft is ready, we will go out again to the broader community, getting as many people as possible to give their input towards a final set of intermediary liability principles that we will be able to present and launch next year.
So please do take the opportunity to note down that URL and have a look through the zero draft white paper, bearing in mind the disclaimer that it does not represent anyone's fixed position.
We have one or two at the back, and someone else with the microphone in the middle of the room.  Please, go ahead.
>> AUDIENCE: My name is -- and I hope you can all hear me.  I wanted to comment on this very, very good panel, which had a very, very wide variety of views.
I just wanted to comment that notice and takedown systems have a very big downside.  The notice-and-notice system has its own downside, because we have this anonymity problem, let's say, not really a problem, but some people might say it is.
So I don't think there can be one single solution that could be applied to all kinds of content.  For example, hate speech needs one kind of approach, and defamation needs a totally different approach.  And you can't have defamation content dealt with without due process, even though it takes time and is not efficient.  It is not possible to change our community and say, well, we want everybody to act nicely, we want only nice things to be said to other people.  We can't shape the Internet on that basis.
However, to the remote participant who asked about Delfi: since this is a very dear case for me, I wanted to say that I totally agree with the panelist who said that hopefully the Grand Chamber will decide in favor of Delfi in this matter, because I think the alternative could have very large implications for Europe, as many of the panelists are aware.  Otherwise, the Supreme Court decided that it will not apply the Delfi European court decision from last October, 2013.  So I think this is also a way to go, as we do have some member states that have not been able to properly apply the E-Commerce Directive, such as Poland, where there are a lot of cases in which strict liability is being applied to intermediaries, as well as my own country, Estonia, where the Delfi case arose and where strict liability has been seen as an answer for defamation.
>> MODERATOR:  We have five minutes left.  I'm going to ask the remaining three people to take only one minute each for their questions.  We'll collect the questions and then go to the panel to round off.
>> AUDIENCE: Thank you.  My name is David Ferris, from 21st Century Fox, so I represent somebody from the copyright industry.  I would just like to say that we are gathering a lot of data about the different efforts to work with intermediaries to address online copyright infringement, and we would be happy to share it with any of you as you develop your research.
Just one piece of evidence that I would like to provide is data.  You might recall that the European Commission had a notice-and-action consultation.  As part of that consultation, we did a review of the number of notices that we sent and the number of counter notices we received.
So, I'm just going to give you one example, due to time: we sent almost 7.7 million notices to cyberlockers, and we received exactly one counter notice.  In percentage terms, that is about 0.000013%.  So the data demonstrates that we take our responsibility as rights holders very seriously, doing a robust review of the content that we ask to be taken down.  And we also offer that counter notice procedure to users, and again, one user actually used it out of 7.7 million notices sent to cyberlockers.
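As a side note on the arithmetic behind the figure just cited, here is a quick sketch of the counter-notice rate calculation, using the numbers as stated in the session (7.7 million notices, one counter notice; the figures themselves are the speaker's, not independently verified):

```python
# Counter-notice rate for the figures cited in the session.
notices_sent = 7_700_000
counter_notices = 1

rate_percent = counter_notices / notices_sent * 100
print(f"Counter-notice rate: {rate_percent:.6f}%")  # prints "Counter-notice rate: 0.000013%"
```

Note that one counter notice out of 7.7 million is roughly 0.000013 percent, i.e. about 1.3 in ten million, which is the order of magnitude the speaker was gesturing at.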
>> MODERATOR: Thank you.  The panel is champing at the bit to respond.  Let me go to the other question in the back and then one more question in the front.  No, I think it was the gentleman in the back.  Sorry, we won't be able to take all the questions.  We have one here and then one in the front, and then back to the panel.  Thank you.
>> AUDIENCE: My name is Brishat, from the Software Freedom Law Centre in India.  In India, we did a detailed study of the intermediary liability regime and held a set of roundtable conversations across the country.  We have come out with a report, available on our website, in which we arrived at a set of principles.  Since you're coming out with a new set of principles, it could be useful.
What we discussed mainly with respect to the regime in India is that there is no put-back provision in Indian law as of now.  Apart from that, there are ambiguous provisions which make it easy for anyone to get any content taken down.  These are things that need to be discussed, to see what can be done.  We have now given the report to parliamentarians so they can come up with better laws.
>> MODERATOR: Thank you.
>> AUDIENCE: My name is Eleanor, from the Association for Civil Rights in Argentina.  We have done some research and investigation, because we have a lot of intermediary liability cases, and they are about defamation and the right to one's image, not copyright.  So it is interesting because it is not only the copyright view of intermediary liability.  It is also available on our website.
And I want to point out that, under the umbrella of the Inter-American Human Rights system, we have a new report, which I think Francisco also mentioned, on the Internet and freedom of expression under the standards of the Inter-American Human Rights system.  I think that report is very useful to inform this kind of principles and the process that we are announcing, and also for the work of other advocates in the world, who may not be under the umbrella of the Inter-American Human Rights system, which is very generous in the protection of free speech.  But I think we have to look more at the dangerous side of these kinds of discussions: we are always looking at what is happening in Europe, which is a completely different system of human rights protection, with more protection for privacy and other rights vis-à-vis freedom of expression.
So we have to look more at what is happening in other regions, for example Latin America and the Americas, and I absolutely encourage everybody to look at this OAS report, which I think is very useful to protect intermediaries and to protect free speech in this area.
>> MODERATOR: Thank you very much.  I apologize we won't be able to take any more questions from the floor.  We have run out of time.  But we are going to go back to the panel just for a final few words from each of our panelists.  Let's just go from right to left.  It's the fairest way.
>> I'd like to thank the floor for all of your very interesting questions.  Maybe I'll react briefly to the comment made on Delfi and notice-and-notice.  I agree that uniform, one-size-fits-all solutions are unlikely to be the best approach, and it may well be that we need to look at different types of content.  But then we run into the problem that whether something is civil or criminal depends on the country: something that is civil in one country may be criminal in another, and vice versa.
And on the Delfi case, just very briefly: it was a news site, which brought into the equation a set of other problems to do with publisher liability.  But ARTICLE 19 was also involved in the case, and we were hopeful too.  I'll just close my comments there and pass on down the panel.  Thank you.
>> Okay.  The private sector does not have just one position: we have ISPs and the intellectual property industries, and there is also diversity within those constituencies.  And of course, the intellectual property industries are looking after their own profit, as any company should do.
Having said that, the problem is rather that when we have these notice-and-takedown systems, you create systemic incentives to take down content.  Having 10 million requests for taking down content in one month shows there is a systemic incentive to keep taking down content.  The fact of having only one counter notice, under the U.S. system, which is the only one that allows that mechanism, is also proof that there are no incentives to file any counter notice.  So if you tell me there was one counter notice, are you saying there was only one case in which the takedown request was poorly drafted?  I don't think so.  So what I fear is a future in which bots will be controlling our speech on the Internet, by making takedown requests and by doing preemptive filtering in collaboration with authorities.  That's what I don't want on the Internet, and that's not compatible with any freedom of expression principle.  That's the broader headline.  Thank you.
>> Thank you.  Firstly, on Delfi and Estonia: we also presented an amicus brief in that case, so I can only talk about it in the most general terms.  Suffice to say, we believe it is very important that hosting providers should be able to provide a platform for general public discourse.  If they are held responsible for everything foolish and unacceptable that everybody says, you're saying that such platforms cannot exist.  That would be a social detriment that I believe ought to trump any individual's concern about a particular defamation.  That is my view.
With regard to some of the other points raised: firstly, the millions of takedown notices that were issued.  If you were in any doubt about the pressure that my industry is under from those that seek to regulate content, I believe the intervention from the gentleman from Hollywood will have set those doubts aside for you.
Clearly there would be no counter notices in Europe: Europe doesn't have a counter notice procedure.  But we struggle to respond to the vast number of complaints there are about people's actions and behaviors, and the approach that I have laid out for you is not one that we would enter into lightly or particularly want to take.  But how else do you expect us to cope with that sort of volume of requests?
I would also, at this point, draw on the comment from the front here that this isn't just copyright.  Absolutely, it is not just copyright.  There are a whole host of stakeholder groups that have problems with the ways that people behave online, and that also generates complaints to intermediaries asking us to restrain that behavior.  So those millions of notices from Hollywood are just a part of what we have to deal with.  Thank you.
>> I wanted to connect a few of the points that were made.  We often talk about the balancing of conflicting rights: the freedom of expression of one conflicts with the right to privacy of another; the right to be forgotten of one conflicts with the right to know of another.
So the problem is, how do you ensure that intermediaries take into account the different situations in which these rights are involved?  Because these rights have a different weight depending on the situation, and I agree in that regard that copyright may not be the same situation as, for example, hate speech.
So, one of the key issues is anonymity: the disclosure of the identity of the users.  This was a very controversial part of the judgment, and this too should be balanced by the intermediary depending on what is at stake.  If anonymity is used to protect the right to life, to protect you from a threat to your personal life, then it is clearly more important than if you are just using it for exchanging copyrighted files.
And the same kind of balancing is to be done in other situations.  So what we can do is formulate some presumptions, which are in accordance with the case law, and which intermediaries can use to guide them in this balancing act.
So, I think in this regard, the work that we want to do with the Dynamic Coalition is precisely to take stock of the existing case law and to suggest contractual clauses in the Terms of Service that are as close as possible to it, and that possibly take into account not only different situations, like offending speech and copyright, but also the reality of different regions of the world, where the case law is different.
So I invite you tomorrow to come and start this discussion, which I think will be very important in the future, to be sure that automated enforcement, which we are inevitably going to end up with as the Internet of Things arrives, respects our fundamental rights.
>> I would like to thank the audience for their questions and comments, as well as for pointing to research that they are undertaking or have undertaken.  I would encourage you to come to the UNESCO session on Friday and comment on our draft report, as well as on the zero draft principles paper that we at CIS and EFF have drafted.
And just quickly, in closing, I would like to reiterate what was said about the need for capacity: when we look at intermediary liability and its gaps, it's not just about the regime but also about those enforcing and implementing the regime, and whether they have the technical capacity and understanding as well.
>> And I'd like to conclude by saying, first, I want to react to the gentleman who unfortunately has already left, but maybe somebody here knows him.  First, I'm interested in the resources.  Second, I think it is tricky to use the data about notices in the way the results were presented, from both sides.  First, because the number of notices never tells us how big the pie is, so we don't know what percentage it represents.  Second, the number of counter notices filed doesn't confirm that all the other notices were successful and okay.
The reason for that is not only that ISPs do not have an incentive, but also that many ISPs try to operate under the U.S. framework, which gives them disincentives to encourage users to file a counter notice.
So, I would definitely be for restructuring the existing counter notice system as well as the notice system, because especially the counter notice system today does not properly enable ISPs to encourage users to actually file a counter notice when they feel a takedown is incorrect.  And second, users today are put in a very bad position when they file a counter notice: they have to unveil their identity every time, with many repercussions.  There are many grounds and solutions that could be adopted which would increase that number for sure.
As a more general note, and the last thing I want to say: when we draft all sorts of different solutions on intermediary liability or accountability, we should keep in mind that the actors in the game will always respond mostly rationally.  Of course, I understand that if the system does not protect the interests of the user in a particular way, and another route is effective and cheaper for the rights holder or the ISP, they will take it.  It is our job to make sure that the framework prevents that.  Of course they will try to save on costs and find the best solutions for themselves; PR is only one of the pressures.  So I think this is what we should keep in mind when we draft those laws.
>> So let me connect the comment from the representative of the rights holders to the key question we are discussing here: how important is an evidence-based approach to intermediary liability policy and regulation?  Because that is an example in which data, if the process by which they are collected and used is not carefully reviewed, can be used in a deceiving way.  Data are never neutral in themselves.  On the other side, you may look at the work of Daniel Seng, who did his doctorate at Stanford, together with the work of the Takedown Project at Berkeley and other institutions, which tells you that more than 20% of notices are problematic.
And again, those data may be read in a totally different way.  A very small number of counter notices may just tell you that the counter notice system is completely inefficient, particularly because that counter notice system is strongly influenced by the DMCA approach, which puts liability on the users, so that users are never going to use it and risk a countersuit.  Because then the fair use determination they would have to face on the use of the material is going to be a burden on them.
So that comment is a good example of why it is definitely necessary to base any policy, and any move forward in trying to fix things, on very solid, evidence-based notions.
>> MODERATOR: I'd like to thank our panelists and the audience for your inputs, and also for staying almost 15 minutes later than our scheduled time.  But that is the advantage of being the last workshop: we don't have to compete with anyone else.  So, hope to see many of you this evening if you're going to the Google event, which is coming up next; otherwise we will see you tomorrow.  Let's give a round of applause to our panelists.
(Applause)
