IGF 2018 - Day 2 - Salle VIII - Understanding Cyber Harm: the Human Rights Dimension

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  This will be an interactive discussion about cyber harm and human rights, looking forward to the next IGF, where we might possibly be able to take this discussion further into perhaps a more formal workshop panel structure.  What has been distributed here is a brief; it's a summary of the description that was on the Web site, but also suggested items for our agenda.

I'll give you a little background, why we have initiated this and where we are coming from in a minute.  But we have got about an hour and a half.  I don't think we will need that amount of time.  But if we do, that is a marvelous indicator of a robust discussion.  So we can go to an hour and a half if we need to.

I had anticipated that for this session we weren't going to have any projection or transcription or things like that, but it seems we do.  So that will be helpful.

Anyway, bear in mind if you are going to say something that it's not as informal as we had perhaps anticipated from the beginning.  A little bit of background: some time ago, starting back in 2013, 2014, I and a number of us who are here were members of the Freedom Online Coalition's working group 1 on an Internet free and secure.

That working group developed, over a period of three years, a set of recommendations on Cybersecurity and human rights, and also a definition of Cybersecurity and human rights.  The Freedom Online Coalition had three such working groups, and their mandate under the Freedom Online Coalition has ended, but those of us who were working on the human rights and Cybersecurity initiative thought that the work was important enough and substantive enough that we should continue it beyond the Freedom Online Coalition.

A number of us are doing that.  There were a couple of reasons why we are looking at cyber harms, but one of the key defining elements of the recommendations and the definition of Cybersecurity that we developed, which is here so we can hand those around as well, was that, probably for the first time when it came to defining Cybersecurity, we talked about the individual, the person.  In the recommendations we talked about human rights not as a balancing act against Cybersecurity but as something that is mutually reinforcing.

So it was this focus on the person, and the question of how we continue the work of the working group, that made us think there may be a dimension here that we need to consider: in order to measure how one can implement these recommendations, one has to understand what the implications of Cybersecurity are for the person and their rights.  That led us to starting this initiative and this initial discussion: how do we assess what the cyber harms are to the person?  How do we understand how we can minimize or mitigate those harms, and through what kind of measures?

So what you are receiving now is the definition from that working group and some of the recommendations, and that work can be found on our Web site, freeandsecure.online, where you will find all this work.  The reason why this work is important and useful to those of us who are interested in cyber harms is that the recommendations and the definition have global support.  They are supported by the governments of the Freedom Online Coalition, at the time I believe about 27 governments, and they are supported by the private sector, Civil Society and other organizations.

You can find all that in the data.

That is a brief history of that.  The working group is continuing in an ad hoc format.  There are a number of us involved in that, and we would love for others to join us in the effort of taking these recommendations and definition forward.

That is a brief context, if you will, to this discussion: if we are really to talk about Cybersecurity and human rights, and Cybersecurity and its impact on the person, how do we assess that, how do we measure that, and how do we work to minimize those impacts on the person?  That is what this discussion is about.

We are talking about cyber harms, but not in terms of how they impact an organization or a structure or an infrastructure.  What we want to focus on is how we assess cyber harms in terms of their impact on persons and their human rights.

Does that make sense to everybody?  Does everybody understand the general framing?  If you go on the Web site, you will see the recommendations and the background and the history, and you can see some of that on that piece of paper.  We have a number of people who will jump in here and there in the discussion.  I thought it would be useful for me to walk through, at a top level, and others will help me out, what is generally meant by cyber harms and how we are looking at a slightly differentiated approach, if you can call it that.

I probably won't do justice to the work that has been done on cyber harms, and others can jump in and perhaps describe some of it as well, but perhaps the first piece of work on cyber harms that was a real research paper that went into it came out in 2016 from the Oxford business school.  It looked at cyber harms more broadly.  I'm going to read a couple of things from there, so we are all kind of on the same page.

I apologize for reading off my laptop, but that research paper defines cyber harm as the following:  Cyber harm is generally understood as the damaging consequences resulting from cyber events, which can originate from malicious, accidental or natural phenomena, manifesting themselves within or outside of the Internet.

Those damaging consequences need not be limited to ICT; physical or emotional harm, both material and personal, can also be envisaged.  In other words, where cyber harm is concerned, the spectra of cause and consequence are unusually wide.

The one interesting thing about this particular definition, and I think these have evolved over time, is that there is no particular focus on or reference to human rights.  There is talk about what the harm might be to the individual in a physical sense, but it doesn't go that extra step.

Let me keep going here, and then we can come back to our discussion.  In measuring cyber harm, this paper talks about a couple of approaches.  It says we have to measure it according to the following: who and what can be harmed; the different types of harm; stakeholders and their different priorities and perceptions of harm; potential measurements of harm and the categories; and then who is responsible for acting on those different forms of harm.

There is a whole analysis method that Oxford uses to go into this, which is quite useful.  It talks about things like physical, psychological, economic, reputational, cultural and political harm, and how it impacts the individual, the organization, property, infrastructure, etcetera.

Those are the kinds of things that have been used traditionally, I would say, to deal with cyber harms.  What we are trying to say here is: that's fine when you are looking at an organization, and when you are looking at cyber harms as they come from an outside entity or cyber threat.

But there are multiple dimensions to Cybersecurity that impact human rights.  You have an external cyberattack or threat that can impact human rights.  You have a response to that, which can impact human rights.  And then you may also have policies that are developed nationally that may impact human rights because they are disproportionate, etcetera.

What we want to do is have a discussion here about how we can develop that understanding of what those harms are and how we can measure them.  Then we have a much better capability of saying to states: these are the set of harms that come from these kinds of actions, and these are the measures that we need to take to mitigate or minimize those actions so we are not impacting human rights.

That is the kind of outline that Oxford has used, and feel free to jump in, David or whoever.  There has been further work by Oxford, where articles have been written on how these cyber harms can also occur through surveillance and excessive law enforcement activities, things like that.  You can find these online.

The issue really for this discussion is how do we leverage what's already been done in terms of cyber harm, and how do we bring a human rights focus to that work.

This is the way that we have framed this discussion, but obviously we would love to hear other perspectives on how we might frame this and take it forward, and what issues might be front of mind.  The goal is to come out of this informal workshop with a sense of direction, so that the work can continue and we can take it forward in some form with your help, leveraging the work that you have seen from the working group of the Freedom Online Coalition and work done by others like Oxford, and then take this into a workshop, perhaps at the IGF or elsewhere, and have a fuller discussion.

Shall I leave it there?  David, Ian, Mallory, I'm going to turn to you first if you want to add or, I think I've spoken long enough.

>> I'm Ian Brown.  I'm not here speaking on behalf of the UK Government.  I'm principal scientist for one of the Government departments, but I'm not working on this policy area.  But briefly, to mention what the UK Government is doing that is relevant to this work, which is all public: the current Government, the conservative Government which was elected in 2017, had a manifesto commitment to make the UK the safest place in the world to be online.  That has been a very high political priority for the Government, with the Prime Minister and several Ministers giving regular speeches on the matter.

What President Macron said yesterday, our Prime Minister has said a number of times previously.  Last summer she declared enough is enough when it came to online harm, and promised to work with allied Democratic governments to reach international agreements that regulate cyberspace, and to do everything we can at home to prevent the spread of extremism online.  Extremism has been quite a long-running focus of the UK Government, radicalization to violent extremism to give it its full title.  Our home office, which is responsible for policing and criminal justice and so on, has an extensive program called Prevent.  It's interesting how the language is used in different ways by different people.

This program talks about individuals who are vulnerable to extremism and radicalization, teenagers from specific communities in the UK.  The Government has had concerns about certain materials that are being accessed online, which leads to calls for schools and for Internet cafes to block access to certain Web sites.

But alongside that long-running concern, we have also had, particularly in the last two or three years I think, a real focus from the UK Government on protecting children and on child welfare, including exposure to Internet pornography, cyber bullying and harassment.  The consultation paper on the Government's Internet safety strategy, which was published last year, covers, as well as those areas, the F word, fake news, and disinformation.  The UK Government now also bans the term fake news, you will be pleased to hear.  It covers online misogyny and trolling as well: a number of female members of Parliament in the UK, amongst other public figures, have suffered a lot of abuse online and found it very concerning, even to the level of receiving death threats, with physical violence happening at the same time, and having to get police protection.

That is something the UK Government has also been focusing on.  A couple of final points.  I think the UK Government is very keen to see a lot more research and evidence in this area.  It's great that this work is going on; I think it could be developed a lot further.  Lastly, we have seen some institutions in the UK that have not previously been involved in these kinds of debates publish research.  For example, our Children's Commissioner for England and Wales published a report looking at children's online privacy and vulnerability as a consequence of sharing data about themselves online.

Of course, as things like cyber-physical systems become much more prevalent, including toys, that now often becomes a big media issue in the UK: a doll that is spying on children, or that is not adequately secured, enabling other people to hack into a toy and then spy on families.

There is interesting scope there as well.  It is not only the traditional human rights agencies and defenders; some countries, including the UK, have institutions to protect other vulnerable groups, particularly children.  It would be great to involve them, I think.  Thank you.

>> MODERATOR: Thanks, Ian.  That is very helpful.  Mallory, did you want to jump in and give a perspective on, I've talked about the Freedom Online Coalition working group, but I don't know if you want to add anything more on that or comment.

>> Yeah, I think the way that this came up is we were trying to think about what we could talk about that would be most impactful, what we could do, because we have a limited capacity at this point to continue progressing our work.  This came up because, and I think it ties into the main session earlier today, which people were at, there is no shortage of fear, uncertainty and doubt coming from governments.  That is not something we need to provide evidence for.

What is necessary is to ground that evidence in human rights abuses, as one specific frame for what is wrong, what is going wrong and what the potential harms are, but then also to be concrete and specific about what is happening, rather than having the conversation at a high level, this is bad, Cybersecurity is important, so that we have some more concrete evidence.

It serves those two purposes: specific evidence of harm, grounded in human rights, changes the conversation, and it gets it out of the space of let's just securitize everything and take over all the things.  That is why, really, Matthew is right, this meeting is to scope the issues, to hear from experts in the room about this, because yeah, it's meant to be a discussion.  I'll stop talking now.

>> MODERATOR: Henry or David.

>> I was yanked into the room at the last minute.  Firstly, there is a difference between Oxford University and the business school, which is worth bearing in mind.  Oxford University doesn't have a position on this; the business school has produced some work on it.

This is something I scratched down in an attempt to be controversial, probably.  I can remember, at the time of the World Summit in 2003, trying to argue that the Internet and digitalization in general were not an inherent good, as most people at the World Summit thought.  It was something that was going to have transformative impact, and those impacts would be both good and bad.  These technologies would be used by people who wanted to, let's say, improve society and by those who wanted to harm society, those who wanted to use them for their own purposes, which were potentially malevolent.  It was a mistake to see it as anything other than a transformational change, to try to see it as an inherent positive; we needed to deal with those potential negatives.

Thirdly, most of the consequences will be unexpected.  That was not a popular view at the time of the World Summit, and I feel a lot of the problems that we have now have stemmed from a failure to address the fact that something people wanted to think was inherently good also had these rather substantial downsides: negative economic, social, political and cultural consequences.  We haven't done enough thinking about it, and that is why we now need a more serious understanding of what is harmful and where risk lies and so forth.  I'd reiterate: an enabling technology is going to be used by everybody to enable what they want to do.

In practice, criminals are good at using enabling technologies and always have been.  So are governments; so are exploitative businesses.

Who was it that said this the other day?  Vint Cerf gave an outstanding lecture last week in Oxford which addressed these issues; was it him that said this as well?  Probably wasn't him.  It was somebody else.  Set that aside.  But he was emphasizing these downsides in that speech.  One of the things we have here is: yes, it is an enabling means of disseminating information; it's also the most effective means of manipulating information we have ever had.

And we have to put those two things together.  The one final thing I'd say is that I'm not convinced that locating this as a human rights discussion is the right place to locate it.  I think it is significantly wider than that.  I draw attention to the ways in which human rights are being understood as a concept within many societies, and in particular to the kind of thinking around this that was published last year, which talks about a need to reframe rights language.  Rights language has lost its resonance for many people, and in order to regain resonance it needs to be reframed in a different or broader, ethical, values-laden framework.

I think simply saying it's in the international rights regime no longer works for many stakeholders: governments, businesses, Civil Society stakeholders, but particularly ordinary people.

>> MODERATOR: Thank you.  Want to jump in?

>> I think David has covered a lot of what we were going to say from Africa, the bigger concern being that at all of these conferences, even this week, there is this emphasis on promoting digital inclusion at all costs, or don't even call it inclusion, they call it access and that sort of thing, without looking at the potential harm that comes from that.

I think, and this is also why I was pleased to see Jack in the room, who does work on online abuse, part of that problem is that we don't understand harm well enough yet, and I have more questions than answers: do we need to differentiate different forms of harm, do we need to look at different communities?  Tied to what David was saying around language, I'm sure harm will be understood in very different ways in different communities, as rights are.

Some things will be considered bad in some communities and not in others.  So again, we have more questions than answers at this point.  But it is important to have a more nuanced understanding, especially as we promote the need to connect everyone at the moment.

>> She's teed you up, Jack.  You want to say a couple words?

>> Thanks for this, and also thanks for alerting me to this meeting.  I think this is really interesting.  I'm from APC, the Association for Progressive Communications.  We have been working on the issue of online gender-based violence since 2005, and since 2008 have been doing on-the-ground research, primarily focusing on the global south, on how this is being expressed and on the adequacy of responses both legislative and by Internet platforms.  We are trying to pinpoint exactly this issue of what the harm is, what the responses are, what sort of research needs to be done, and how we even define online gender-based violence when there is a whole bunch of different kinds of acts involved.  And even though David thinks that rights no longer resonate, I think for many groups of people rights are still really important, even just to say that this impacts a particular area of rights, for example the right to freedom of gender or sexual expression.

If I may, maybe I can share some of the things that we have uncovered in relation to this.  When you were speaking about the different types of harm, for example: we mapped specific cases over a period of two years, something like 1,124 cases, and the kinds of harm that we looked at were psychological harm, social isolation, economic loss, limited mobility and self-censorship; looking at online gender-based violence as a specific barrier to access has been an important piece of work in relation to that.  We tried to unpack who is affected, which was one of the questions you raised earlier: specifically, someone involved in an intimate relationship; professionals often in the public eye, so things like the MPs that were being spoken about, but also celebrities so to speak; and survivors of physical assault, those who are already in situations of gender-based violence, like a continuum.

This work has been really important for engaging with global norms around gender-based violence, to expand them and take into consideration the cyber or Internet digital dimension of this.  In the last year there has been quite a lot of recognition of this work in various processes: from the Special Rapporteur on violence against women, to a resolution on gender-based violence, to Recommendation 35 from a treaty body that has specific mechanisms for monitoring and trying to make sure there is state accountability on this issue.

I see that there is a disconnect between conversations on Cybersecurity and conversations on online gender-based violence, as if they are two different things.  This is exciting because it's trying to locate where the harm is happening: to specific bodies, to specific people who are in specific contexts.  It is quite critical to see that there is an existing body of work that has happened, that the research can speak to that existing body of work, and to see how they can build on each other to gather a deeper understanding, and how these different networks that are trying to come up with a response can benefit from more of these bridging conversations.

>> MODERATOR: That's useful, thank you, and also exciting, because as you say it's a body of work that already exists and that has identified some of these key harms.  That is really good.

So, we did say this was going to be an informal discussion.  I would welcome anybody taking the mic, as long as you introduce who you are, and don't feel shy.  This is really going to hopefully shape something that we are going to take forward.  Thoughts about what kinds of harms individuals face and how they impact human rights?  Has anyone in the room done any work, like APC has done?  Please feel free to ‑‑ yes, please.

>> Hi, I'm Catherine, I work for CIFE.  I've not done work on gender-based discrimination or gender-based violence, but I want to quickly share some observations that I have had over the past day and a half.  This is my first time at the IGF, so I don't know if this is an accurate description of some things.

First, yesterday a lot of sessions talked about a multistakeholder approach to Internet Governance, but I feel that one question is being missed, which is that in some countries there is no Civil Society; civil societies there are being crushed or limited, so they are not even in the discussion.

So then I feel like, sure, everyone understands the multistakeholder approach is important, but what do we do when we are dealing with authoritarian regimes?  Yesterday on the panel people were naming, I think, Russia, China and some others, but what do we do with them, China being an extreme case?  The second question I have is, I agree with you that ICT is not necessarily good, so we have to think about how it is being adopted and used in different scenarios.  This morning, in the session about smart cities, I think it was pretty fully recognized that if it's used correctly, considering inclusion and other things, under Democratic governments this could bring a lot of benefits.  You make governance and public service delivery more efficient and more effective.

But you also have authoritarian governments trying to promote this kind of model to control society more tightly, and not just in their own countries; they are trying to promote this model in other countries.  We are talking about international impact, right?  So there are a lot of countries in the middle; what about them?  What are the norms that we can help to set out, to guide or prevent some unintended negative consequences from happening, like we see in the extreme cases that are already happening?  Third, yesterday President Macron talked about two sectors or two actors that he thinks should be regulated: one is the Silicon Valley tech giants and the second is China.  My question is, what about tech giants in China?  When they go into, say, neighboring countries, like Southeast Asia, where they are basically constructing a lot of infrastructure, then what happens to the data?

I have no idea.  So I don't know if these are the questions that we should ask, but these are definitely the questions that I have.

>> MODERATOR: Welcome to the IGF.  (chuckles)  These are exactly the kind of questions that we tussle with.

Some of those are, I think, a little bit beyond the scope of what we were trying to discuss here.  But they are the kind of questions that you should ask a lot of people and get their feedback on, because this is a great space for getting feedback on some of those questions.

Let me touch on the second one, which was the use of ICTs in what you called different scenarios, coming back to David's point that this is an enabler and it can be used in different ways, which is absolutely right.

I can only go back to the document that we circulated, which is this list of recommendations.  You asked about norms.  There are a lot of norms floating around at the moment.  In this working group we deliberately didn't call them norms; we called them recommendations.  I should say that the working group that developed these was a multistakeholder working group and had the U.S., Canada and the Netherlands in it as well.  These recommendations come with significant Government support.

But what is interesting about these is that they imply that when you are developing ICTs or different kinds of technologies, or when you are developing security policies or whatever it might be, one really does need to look at these in terms of the impact they could have on human rights.  We talked specifically in here about Cybersecurity policies and decision‑making being rights-respecting by design.

So there is a very clear call that when governments, in particular in the case of these recommendations, are considering policies, or for that matter responses to cyber threats or things like that, they should do so with human rights first and foremost in mind, rather than as something of secondary or tertiary importance.

That is a partial answer to how you take the discussion forward.  Part of the suggestion that we are making here is that you take that discussion forward with a human-rights-from-the-outset approach in mind.  That is very challenging, granted.  It's very challenging to achieve, but it's a recommendation that we have significant support for, and we have the backing to be able to take that further.

I don't know if anybody else wants to answer any of your other questions.  But I'll leave it at that.  Anyone else?  Please jump in.

>> My name is Carrie.  I have a question.  I can probably talk to you off air about some of the questions you had.

Listening to the discussion, and I didn't get the name, I am taking in the issues that you raise: based on the research here, it would feed into your work in identifying a subset of different types of harm based on different profiles.  Is there an assumption, because we are tying it to human rights, that harm is automatically a violation of human rights?  The conversation has now steered towards governments and the obligations that governments have to ensure human rights are sustained.  But for harm conducted otherwise, would it be perceived harm, where I perceive that you are harmed online, as opposed to asserted harm, where you actually did feel harmed online?  To use your example: I'm from the Caribbean, and persons would say that women's rights are not as much of an issue there as when I speak to my colleagues in Latin America; Caribbean women are more assertive, and in Latin America they are less assertive, in terms of how structures are with male gender issues.

I'm wondering, have you considered that, because this is tied directly to human rights, it pushes it up to Government automatically, but some of the harm we are speaking about is people-to-people harm, and the responsibility of the state to work with the private sector to help stem that harm?

I'm kind of lost in the discussion, only because I'm hearing like seven different concepts that are trying to merge together, but I'm not seeing the thread sewing them together, for when it goes out to persons who have to consume it.  There is a strong message, but when you described it, I got it, because you tied it back to the discussion on this side.  But I think some of the lead-in to the dialogue moves away from harm automatically being presumed a human rights issue, when you start with an automatic presumption.  I'm saying, if someone could sew it together.  For me, that is why I can't participate in the dialogue: I'm a bit lost with the thread.

>> First of all, human rights are not just the obligation of states.  Companies also have to stand by principles of human rights, and there have been a lot of different kinds of, I guess, mechanisms in order to do that.

Human rights are also a person-to-person issue.  That is one thing to say.  Secondly, on whether harm automatically becomes a violation of rights: I'm thinking in my brain, and I don't know if it's a violation, but generally it will definitely involve some kind of right, because we are talking about civil and political rights, but also economic, social and cultural rights.

Having a rights framework is very useful.  Even though it's a little bit passe and maybe oversaturated, we are like, oh, my god, no more rights, it's still a really important and useful framework, because it's the only one we have where we more or less agree and say these things matter, and we develop mechanisms and systems to understand how this affects different populations and how we go about looking at it.

In some ways, you are almost looking at a nested conversation.  Yes, Cybersecurity is a framework.  Within that, we are trying to look at specific sorts of cyber harms.  Within that, we are trying to make sure that online gender-based violence is recognized as a bunch of specific forms of cyber harm, where a lot of work has been done and gains have been made at national and local as well as international levels of norm setting.

It is almost like trying to see it from that level, and whichever entry point makes more sense to you where you sit, then you ‑‑

>>   The discussion as you framed it, thank god the transcript is there, because it helps the introduction of the dialogue. I think somebody has slapped human rights on everywhere, and to me that is not what you are saying. I heard something different in how you describe it, a nested approach; that would help the framework, to read these questions through a different lens. That is my recommendation to you.

>> MODERATOR: That is a great recommendation.  By the way, that is exactly the purpose of this discussion, to thread that needle or however you want to characterize it.

The recommendations that you see are not specific to Government.  They were developed by a multistakeholder group.  They were specifically designed to be applicable, we talk about processes as well, so that could very well apply to a private sector initiative or other initiative.  So it is not specific to Government.

But I absolutely agree with you, but that is really the purpose of this is to try and figure out how do we tie these things together.  Nice way of putting it, the nesting concept.  Anybody else?

>> Thanks, good afternoon. I'm a human rights diplomat from Luxembourg. Thanks very much for organizing this conversation. I have two things to say. One is the broader, fundamental issues that have been raised about the human rights framework, and I think the lady on the right, I'm sorry, I don't know most people in here, has said it very well: human rights are a broad, complex and also evolving framework that benefits a lot from innovations, and that is indivisible in the sense that you don't just have civil and political rights, but also economic, social and cultural rights, and indeed possibly rights beyond that.

This is a framework universally accepted by governments around the world, although it is also contested. But the contestation usually comes from people who don't have a good reason to contest it, who are either governments or greedy corporations or extremist groups, etcetera.

Nobody contests the human rights framework and the protection it provides to human dignity and equality for any reason other than enrichment or criminal activity or repression and so on.

That is very important, and when you say that, it has to be understood in the right context as well. I'm not sure I do that, but I think it's important that we put this in the right framework and that we also take into account the evolution of human rights.

A lot of this is not very new at all. If you look at Article 1 of the Universal Declaration of Human Rights, it states that all human beings are born free and equal in dignity and in rights, and I think that holds true to this day. Whether you are the poorest person in Afghanistan or the richest person in Zimbabwe, it is an all-inclusive and universal framework.

Then of course there is also the fact that human rights live not only in hard law but also in soft law. There are things like the UN Guiding Principles on Business and Human Rights, which list state obligations but also corporate responsibilities for the protection of human rights. It's a good starting point, a lot has been written about it, and it is also certainly protectable.

The questions of Catherine: I think this is a very current issue that she raised, this problem that governments, I mean authoritarian governments, export forms of control elsewhere.

I think when you link it to this discussion here about harm, you could call this a form of export of collective or very large-scale harm, because what it is designed to do is to restrict the rights of people elsewhere, and it happens in many different ways. You have governments that talk to other governments in legal contexts and export sets of laws that restrict the rights of citizens, of Civil Society, of women, of youth groups. That also applies in the field of Cybersecurity.

That is also closely linked to what you talked about when you said that there is a spreading of negative norms. It's not just authoritarian governments; western governments can also play along with this, because there is a fundamental problem, even here, with a false dichotomy we have built between security on the one hand and liberties and human rights on the other. We keep getting this very, very wrong.

There is also a private sector dimension, and that is the export of surveillance technology, malware, surveillance software and so on by private companies, and there of course the question is: should such software be regulated, should it fall under dual-use regimes, and so on and so forth.

I wasn't here for Mr. Macron's speech yesterday.  This may be some of the things he had in mind as well.  Of course, the important thing to do is to organize pushback against all of this.  Thank you.

>> MODERATOR: Thank you very much.  Yes.

>> Hi, I work for the City of Amsterdam. I have a question about the scope of your work. As a city we see negative consequences of digitalization as well, and we have a special concern for the role of platforms in this process. I'm wondering, when you talk about Cybersecurity, I mean, we also talk about polarization, security issues.

I'm wondering if you have looked at that, the role of algorithms, the role of platforms, not terrorist use but wider xenophobia, intolerance, etcetera. Is that included in your scope? We feel the consequences of that in the city, but we do not have the necessary mandate or tools or whatever to regulate such platforms. That is my question: do you include that in the scope of your work, and how do you look at it?

>> MODERATOR: I'm looking at my fellow ex working group one members.  I don't know.

At the time we developed these recommendations, we probably were not looking at that broad a scope.  We were probably looking at a narrower scope, Cybersecurity.  But the discussion has moved on since then.  I think it is fair to say that the range of issues now that people typically consider to be Cybersecurity is much broader.

That is something that, if we want to take this work forward as a group, we could look at. I think this is exactly what it is: a discussion about the opportunity and how it could be scoped. Mallory.

>> Yeah, I wanted to comment, because I've spoken to people at New York City as well, at the city's ICT office, and they also work with several other cities, a consortium of cities. My team works on standards setting in technical standards bodies, and the advice I've had is that it's a great thing if municipalities feel like they are implementers, because that is what they are, implementing wireless networks or implementing other things.

Following those conversations about standards and also these norms and the way that they are applied, it's maybe a combination between a Government's role and a company's role in the way that business and human rights are followed. So yes, I definitely think that there is a role to play. But it's emerging, I think, as more cities, more municipalities, start to look at these issues.

>> MODERATOR: Anyone else?  Okay.  I have a question.  Go ahead, David.  Come back on the map because I'm going to shift us a little bit.

>> To go back to the point I was making about the resonance of rights language, I tend to agree with Michael: there is no longer a universal consensus in public discourse around human rights being a desirable thing, in the sense there was in 1948 or perhaps even 1966; in many areas of public discourse it is no longer seen that way. In Britain, conservative newspapers will use human rights as a term of abuse in many cases; the Trump administration does not see human rights as the positive thing we would consider it to be. White supremacists are using free speech as a key demand because they are arguing for the right to advocate discrimination against parts of the community and indeed to advocate violence.

The resonance that there was in 1948 has been lost; we need to rebuild it, and rebuild it by thinking outside the traditional framework. Otherwise we end up citing the rights instruments the way, you might say, ecclesiastical law used to be cited in medieval Europe, in Old Testament terms: that is what it says, therefore it is, and we don't need to argue any further about it.

That was the point I was making there. The other thing I would suggest is that I'm worried about thinking about these things in binary terms, as if harm were a binary thing. Let's take the employment aspects of digitalization. Jobs more efficiently done: is that harm or not harm? It's harm for some people and not harm for other people. I come down on one side of that debate, but both are legitimate perspectives. The meaning of harm is not a precise thing.

What about issues around acceptable levels of harm? Are there acceptable levels of harm, and what are they? An example here, something I've been thinking about recently quite a bit, is the dissemination of anti-vaccination rumors and content on the Internet, which is having a significant effect on the uptake of vaccination in some countries, to the level where critical mass is no longer present.

From some people's point of view, to restrict that would be an unacceptable violation of the right of free speech. But it's leading to the deaths of children, which, let's face it, surely everybody would agree is a harm.

We need to avoid thinking about these things in binary terms, and have a much more nuanced and thoughtful approach to consequences.

>> MODERATOR: That is a great point, David. I want to come back to something that Carrie Ann said, if I may, which you have touched on too, David: the issue of differences between countries or between cultures in terms of what harm is. I think this is what you were alluding to as well.

I was wondering if anybody else had any thoughts on that, as to how harm is interpreted or understood in different parts of the world. Examples would be very useful. Feel free to jump in.

>> Coming from Pakistan, I think criticism of religion is considered one of the harms that is prevalent in cyberspace. In Pakistan we are having this discussion where the judiciary and the Government have been demanding laws that would criminalize criticism of religion as well.

We do have a blasphemy law, and it is being applied in the online space as well. This year we had a case where a person was sentenced to death for committing blasphemy online.

The criticism of religion is considered a cyber harm in our case in Pakistan, because religion is a very prime thing there. I would also like to share that Pakistan initially produced a Cybercrime act that was meant to take down terrorist content. But we have seen that those anti-terror laws are used to silence dissent.

Two weeks ago, a person was arrested under the anti-terrorism law because he was accused of defaming the judiciary. Defamation of the judiciary is a subjective term, but again these counter-terror measures are being used to silence political dissent as well.

The measures that were meant to counter one issue are now being used to target journalists and human rights defenders as well.

In our local context, in the global south, cyber harm has its own challenges. It is being interpreted in different ways, and at times it is being used to silence free speech as well.

>> MODERATOR: What kind of organization do you work ‑‑

>>   I'm working for an organization that works at the intersection of media and digital rights. As part of my work I'm managing Pakistan's digitalization Web site as well.

>> MODERATOR: I'd like to come back on that. For example, in that case, how do you manage the issue that, if you look at it from a traditional human rights perspective, you might say that is infringing on free expression? And that, as David was saying, is a western construct, particularly in the cultural sense you are talking about. How do you manage that interesting intersection between the law on the one hand and this right on the other?

>> To be very honest, human rights groups are very afraid of touching that topic as well, because we have had instances where religious groups have been inciting people to violence. The religious groups wield immense power, and human rights organizations in Pakistan tend to avoid these topics, because the mere discussion of changing the Cybercrime law or changing the blasphemy law has got people killed. A former Governor, in 2011, was a powerful person with immense political power, but unfortunately he was killed because he was lobbying and advocating for a change in the blasphemy law.

Human rights organizations tend to avoid sensitive issues, and a lot of people tend to exercise self-censorship on religious matters, and not just religion: if you want to take a critical look at security policies in our context, that will get you in trouble as well.

So there are a lot of red lines that we have to respect in Pakistan, and unfortunately we are seeing more self-censorship in cyberspace as well. Initially it was just in the mainstream media: for a long time, human rights defenders and journalists in Pakistan could not find space to do a critical assessment of security policies, so they used to take to social media and discuss these policies there.

However, things changed drastically in 2017, when four bloggers were picked up, allegedly by security agencies; they had been analyzing human rights violations and security policies. So you have religion, then you have security issues, and then you have human rights violations where critical assessment is not welcomed by the state. There are multiple challenges at hand.

>> MODERATOR: Challenges, yes, please in the back.

>> My name is Anil. I work for the Danish Institute for Human Rights, and we work a lot with business and human rights as a topic and with human rights impact assessments.

I wanted to say I think the discussion is great. I don't think we are going to solve what harm is, definitely not what is an acceptable harm or not. But if we are talking about private sector actors, I think there is a good argument to be made that while some platforms like Facebook, Twitter or others have wanted to take a hands-off approach, saying we are just providing a platform, we are not complicit in any harm that is done on our platforms, they may be forced to make decisions. Facebook made a decision in Myanmar; they didn't want to, but they had to in the end. Twitter pulled people off Twitter in those countries, so they are in fact doing this, and in that sense they are causing harm too, for example to freedom of speech. But then, to make the point for human rights, at least the Guiding Principles provide a framework that is not necessarily perfect but is a way to interact with harms and not have an ad hoc process. If I have a platform where 99 percent of all the users only insult women or make death threats to MPs, well, maybe the balance is not in my favor here, and I need to actually do something about it. I'm not sure where the balance is. But we have this framework, and I think we should use it until we have something better that can deal with private sector harms in a structured manner.

>> MODERATOR: You raise a great point about what frameworks we can leverage to understand harm. It's a really good point that we have the luxury of looking across a number of different initiatives that have been undertaken, including that one.

Any other comments?  Yes, please.

>> Hello, I'm Oliver, a human rights lawyer for an organization called Free Expression Myanmar. I wanted to respond a little bit to what my colleague from Pakistan said. In Myanmar there is a similar situation to Pakistan's, in the sense that there are laws, or at least legal practices, that defend majority religious views.

I think the difficulty when it comes to harm is that when you have such laws, it's very difficult, from a sort of western legalistic point of view, to measure the harm within the court. I'm sure it's the same in Pakistan as it is in Myanmar: if you are creating some form of harm for a religious group, how do you measure that harm? A court would normally measure harm with regard to financial issues or compensation, or, if it was dealing with a criminal issue, maybe it would measure the form of violence; but when you are measuring harm against an idea or against a massive group, it's just totally impossible.

Then what happens in practice in countries like Myanmar, and possibly Pakistan, is that the decision is just entirely divorced from any objective understanding of what is happening, and it tends to be that the decision just reflects the political atmosphere of the day at that particular time. I don't know if it's the same in Pakistan, or, if these cases go to court, how lawyers make arguments around harm when it comes to religious ideas. What do they measure? What is the objective underlying it? Thanks.

>> Hello, I'm Karen from the Global Cybersecurity Capacity Center. We published a paper which was mentioned earlier; unfortunately I was a bit late to the session. There is still work in progress to develop a cyber harm framework based on interviews with experts, which was input for the existing paper. Together with the Cybersecurity capacity model that we are deploying, it should help nations to better understand harm, their assets, and where harm could occur.

It's still a work in progress, and I'm happy to share it. The researcher couldn't be here today, but the framework covers assets, controls, harms and threats, and this came out of the interviews. I'm happy to speak afterwards about where the research is, and also to see whether there are any opportunities to share some of your work as well. Thank you.

>> MODERATOR: Thanks, Caroline.

>> I think we got a little bit lost in terms of where we are trying to get to in this conversation. But that was very helpful.

The development of a framework to understand cyber harm, so that you can figure out what to do about it in terms of where you sit, is helpful. Did I get it right? Yeah? Okay. Because I got a little bit confused.

The other thing I wanted to respond to relates to two things. One was around inciteful speech and where you measure the harm. There is work being done to try to understand how you actually identify and measure the harm of inciteful speech against various different kinds of standards, which probably Mallory can also speak to.

I think it is not about harm to an ideology; it is harm to specific groups of people because of the sharing of particular kinds of content. And the thing around freedom of expression is also to look at the expression of the most marginal voices; it is not only to protect the expression of the majority voices. The exception comes when the minority voice is the one that gets muted.

The other thing to say, in terms of some of the work that's been done, because I'm trying to figure out: once you understand the harm, then what? One thing that we have been looking at, across different stakeholder groups, through the online gender-based violence work, is how we can apply the principle of due diligence to states in terms of looking at harm.

It includes doing things to prevent harm from happening, through policies or legislation, and, once harm has happened, ensuring that there is appropriate redress and recourse. For states it can be a little more straightforward, although jurisprudence and online issues will always come in. Where it gets trickier is when you talk about nonstate actors. For private corporations, you can still say that you can move from liability to responsibility; there is a responsibility, and there is already a framework. But for actors like, the technical language is leaving me, telcos, for example, or groups that are more involved in the technical aspect of this, that is where it gets blurry, because you often don't see them as having either rights, duties, obligations or responsibilities, nor as service providers. So where do rights come in, in relation to harm, in relation to their role in this kind of circumstance?

>> How do we measure harm? It can be physical, because actually Georgia is one of the most democratic countries in our region, but in spite of that fact, Russian fake news, spread by some KGB people, is implanting the idea that the western world is bad and all corrupted, and a bad example for children, etcetera. The minorities are beaten. I came to say where I'm from, we are keen to be democratic, but since ....

  (receiving no audio).

>> MODERATOR: Thanks for that; sorry, we don't have a mic that can reach you. Anyone else? Then we will talk about whether or not we can find steps and ways forward. We have got a couple of minutes.

As we characterized this from the beginning, this was just an initial discussion, so one of the thoughts we had was that we would try to compile the comments that have been made, and now that we have a transcript, that is much easier. We will think about how we might frame this in the context of the recommendations and the definition, and some of the work that has been done, as you have seen in the paper; you can go to the Web site, free and secure.online, and see the rest of the work.

But if you are interested in continuing to be involved in this as it evolves, and as I said, we are very much in the beginning stages, please let me or Henry or Mallory know and we will be sure to keep you in the loop as things evolve. What we will do is write up a summary of this and try to give it a bit more context and framing. If you have any references or documents that you think would be useful to help shape this work, research papers or whatever, that would also be very interesting to hear about.

Mallory and then Henry and we will close it up, I think.

>> A quick comment, because I won't be here tomorrow for the Best Practice Forum on Cybersecurity. Usually at the end of that session every year they discuss possibilities for the intersessional work for the next year, so if people find this interesting and want to get engaged in intersessional work, this could be the topic to do it on. The format is that they do a scoping exercise, everyone can submit to it, and they come up with a final report. This year it was on cyber norms, but this could be an idea. I know there may be other ideas people have, but if there are people in the room for that session tomorrow who feel strongly about this, this could be one possibility. It doesn't necessarily replace what we want to do, and if you want to work with us on this, we can do all of these things, because it sounds like there is no shortage of evidence building and further exploration needed on this. That would be my only comment.

>> Maybe just to add: it's an exploratory session, but it's good to hear there is work happening, and if anything it seems like there is a need for more collaboration. If we could at least try to do that, whether it's through a Dynamic Coalition or this working group, that would be fantastic.

>> MODERATOR: As I said, please come give us your business cards or write down your E-mails so we can keep you informed; that way we can get some notes from the session out to you, if you are interested in continuing to be engaged. Thanks so much, everybody, for turning up. Much appreciated, and thanks for your comments. I think we are finished.