IGF 2020 – Day 11 – WS353 Hacking-Back: A Dialogue with Industry

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 




>> MODERATOR:  Good morning, and welcome, everybody, to our discussion of hack‑back, Hacking Back: A Dialogue with Industry.  Before we get too far into this, I want to throw it to Justin Vaïsse to take it away.

>> JUSTIN VAÏSSE:  Hi, everybody. I will just make a few introductory comments, perhaps to take a step back and see why we are having this discussion.  So for about a decade there was a process of trying to work on cybersecurity issues in the framework of the UN Group of Governmental Experts, the GGE.  And there was some progress between 2006, more or less, and 2016, especially around the applicability of international law to cyberspace. 

Then these efforts sort of floundered, sort of collapsed, and there was no longer any progress, whereas at the same time there were all of these big attacks we all know about, NotPetya and others.  So there were various initiatives from the Tech Accord, from the Charter of Trust, from many private actors saying we need to do something.  It cannot remain the Far West. 

So many of them banded together around the Paris Peace Forum in 2018, and that's how we got to that initiative, the Paris Call on Trust and Security in Cyberspace with a lot of contributions from the private sector, from the industry, from Microsoft, Siemens and many others.

So almost two years ago, on November 12, 2018, came the nine principles that we all know, including the ban on hack‑back.  So I do not know, I'm not a technician, I do not know what constitutes good hack‑back and bad hack‑back so to speak, or at least the measures that can actively prevent cyber-attacks, cyber espionage, et cetera; that I will leave for the panel.  What I can say is it is critical that we go further in working around the Paris Call on Trust and Security in Cyberspace.

Just a few days ago, on November 13, we had the third edition of the Paris Peace Forum, with Microsoft, people from the UN, Ambassador Patriota, Ambassador Lobr and the Cyber Peace Institute discussing how we take it to the next step, that is to say how do we take the nine principles and the community of many actors, more than 1,100 actors.  That's a huge multi-actor platform.  How do we take it to the next step, that is to say trying to deepen and elaborate on each of the nine principles.

Also, how do we start to narrow the differences between this huge multi-actor platform, the Paris Call, which sort of has its life at the Paris Peace Forum among other places, on the one hand, and the UN process on the other.  On the UN side things have restarted, but they are difficult.  They are really difficult.

And so the discussion was how do we get that convergence between the formal multilateral process, which is key and which we support 100%, and on the other hand, what the Paris Call can bring to that.  And I think, and I finish with that, that one thing the Paris Call community can bring, and that this IGF session can bring, is precisely to go into the details of these nine principles, and if we manage to, I would say almost to write some norms if you like, or at least to elaborate some precise definition of what we mean by hack‑back and what we mean by the life cycle of products for which we must maintain safety, et cetera.

This will only add to the work being done not just at the Paris Call, but also at the UN, because one day or another we will need these definitions, we will need to see what we mean exactly by hacking back, et cetera.  So this session is really important.  It is well placed in the continuity of the Paris Peace Forum, the IGF and the UN, and I'm really glad that it's happening, and I will be all ears from now on.

>> TREY HERR:  This is an interesting topic for us not just because it's impactful or complex or much argued over, but because it's this constant recurring theme.  The private sector has tremendous capability.  The private sector is on the firing line, owns the infrastructure, builds the infrastructure; why aren't they allowed to roam as freely as possible to police that infrastructure? 

It's stagnated a little bit in the last two to three years, conceptually and legally.  First, let me introduce our fantastic group of panelists: Seth Cutler, who is the Chief Information Security Officer with NetApp; Kaja Ciglic, who is Senior Director of Diplomacy with Microsoft; Ed Cabrera; and Alissa Starzak.  I'm Trey Herr, I run the cybersecurity program at the Atlantic Council.  We will have a moderated conversation with our luminaries and then we will go to the group.  So let me encourage all to ask questions in the Q and A section throughout.

To get us going as a group as a whole, I know we are waiting on Ed, but he will fall in line.  I want to ask everybody: we have a prohibition on private hack‑back.  What does that mean in your view?  Alissa, then Seth, then Kaja.

>> ALISSA STARZAK:  So I think it's actually a really interesting question, because I think we have to think about what we are trying to do, and then think about what the limits are in that context.  There is an understandable definition, which is basically a hack‑back is an attempt to go back in and take data, either take data back or do some sort of damage that will prevent it from going forward.

I think the question though is really about the limits on private action to address a cyber threat.  So I think we often think about things in the defensive category, and those are obviously not what's of concern.  The question is really about potentially trying to get the information: someone attacks you, what do you do in response?  Is there something that is external to your own borders in response, and to the extent you are doing it, is it actually an aggressive act?  That is the way I would think about it.

But we actually do have some definitions in this space and I think some of our conversation today will be thinking through exactly what they mean for companies like ours.

>> SETH CUTLER:  So I think the key principles that have been outlined in the Paris Call are, you know, foundational, but there are a lot of gray and blurred lines along the way.  I immediately think about offensive actions by non‑government entities taken in response, as retaliation.  So at the highest level this includes unauthorized access to protected assets, whether deleting or retrieving data, and then intent is the key here.  If the intent is to cause harm, that's the problematic area.  But there is a lot of gray along the way.

So Alissa talked about some of the active defenses, and I think we will see a lot more of that over time, and I think that's part of the gray area that needs to be defined.  Kaja, Microsoft occupies a different position in the ecosystem; what is your take on that?

>> KAJA CIGLIC:  We do occupy a slightly different position, and I think maybe just from the fact that we have a large platform, effectively, that could potentially, often by mistake I think, be impacted by hack‑back.  I think that's why maybe we are a little bit more conservative, but the definitions that were just outlined, in broad strokes, I think we principally agree with.

I think the intent Seth was mentioning is particularly important.  To me, when you just look at the term, you know, like hacking back, the "back" part of it signifies retaliatory action, even though in the conversations the definition often seems to be allowed to be broader.

So for us, it's a question of ensuring that you straddle that line between what is retaliatory action, which would be frowned upon, and what is actual activity that is done in an effort to protect your own environment.  As I mentioned earlier, because we are a platform, I think for us it's a scarier space than maybe for some of the other, more security-focused companies, but, you know, Cloudflare is in a similar position.

So that will start us off.

>> MODERATOR:  This is interesting; let me dive into the retaliatory question because I think it will color the discussion.  Since the announcement of the Paris Call we have had significant development in the United States in its strategy around defend forward, where American military and intelligence operations are looking to push beyond their own networks to gather information and plan attacks closer to the source.  This illustrates the utility of a proactive offense, and it speaks to something that is a common refrain: to be moving not just within your own networks but to be listening beyond them and trying to get ahead of the attacker's cycle.

To your group: how much of your thinking about the hack‑back debate is colored by this understanding of it as solely retaliatory action?  How much does the discussion change if it's proactive, trying to prevent an attack?  Let's go in the same order if we could.

>> ALISSA STARZAK:  I think it's a necessary component, but not for the reasons you suggested.  The question is really the action piece of that, as opposed to just the listening component.  So the retaliation piece is not just about the intent of what you are trying to do; it's related to the activities you are actually engaged in.

If you are thinking about going out and trying to figure out what's happening, that might be different than, for example, taking action to take data back or to encrypt it, something that is actual activity.  So I think there is a little bit of a gap there.  So it's not just the "I want to do harm" or "I want to retaliate" question, but also: what action am I taking, and what does that then look like?  I don't necessarily see an information-gathering exercise the same way as I would see activity beyond that.

And I think that the latter is much more potentially damaging, for the reasons I flagged.  Even as someone in industry who has a network, the concerns about potentially being a victim of hack‑back, or any attempted hack‑back, are real; things go around.  So being both a cybersecurity company and an owner of a network, you have to be worried about those things.

>> SETH CUTLER:  I agree.  I think as an international company, you know, we always have to believe in protecting our global assets and interests while simultaneously adhering to international law.  So there is this, you know, gray area around whether active listening or collecting data to thwart an attack is permissible, and there is gray in everything in between.  So I think that active defense, as I spoke about earlier, will continue to harden and expand, so that it's non‑retaliatory in nature to collect that information if it's shared with a legal and Government entity.

>> KAJA CIGLIC:  I think I would agree with that.  In addition, you know, I look at this as someone with a large network, so a lot of the stuff that we look at from just an observation perspective is on our networks, or the majority of what we see is.  And that is definitely something that is considered defensive, in particular if you don't necessarily do anything about it except perhaps harden your own system, because you discovered someone is actually trying to sniff around a particular vulnerability or something like that.  It's definitely an important part of our defensive strategy.

Because you mentioned Government, Trey: I think on Friday we published a blog post about how we have seen particular Government entities effectively surveilling, spying, trying to steal information from vaccine researchers working on COVID‑19.  So that was on our systems, but it's definitely something that we keep an eye on.

And both in terms of Government and criminal actions.
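The kind of on-network observation described here can be illustrated with a minimal sketch.  This is purely hypothetical code, not anything any panelist has described: it scans a defender's own authentication log for source addresses with repeated failures, the sort of purely observational, on-your-own-systems check that nobody in this debate would consider hack‑back.

```python
from collections import Counter

def flag_credential_probing(auth_events, threshold=5):
    """Scan (source_ip, succeeded) events from your OWN auth logs and
    return source IPs with `threshold` or more failed attempts.
    Observation only: nothing here leaves the defender's network."""
    failures = Counter(ip for ip, succeeded in auth_events if not succeeded)
    return {ip for ip, count in failures.items() if count >= threshold}

# Invented sample data: one repeat offender, one normal user with a typo
events = [("203.0.113.9", False)] * 6 + [("198.51.100.2", True), ("198.51.100.2", False)]
print(flag_credential_probing(events))  # only the repeat offender is flagged
```

The IPs above come from the documentation ranges and are placeholders; the point is only that the defensive end of the spectrum stays entirely inside one's own telemetry.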

>> MODERATOR:  Welcome, Ed.  We are into this discussion around the extent to which retaliation is a key part of the hack‑back discussion.  Kaja and Alissa spoke to something interesting: the nature of the act as well as the intent.  I'm willing to buy this, but it seems like a flexible definition, especially given that the Paris Call calls to prohibit hack‑back, where the information gathering I do to monitor activities could lean into the kind of access to an adversary or foreign network that might be considered prohibited.

So to come back to the group, let's go, Alissa, Seth, Kaja and Ed.  What's on the right side of the definition and what's being prohibited?

>> ALISSA STARZAK:  I will come back to the point that Kaja and I both made, being on both sides of the subject of hack‑back.  What we are trying to do, the sort of overall nature of the Paris Call, is to make sure that we don't have a bunch of vigilante actors trying to think about how they operate in cyberspace.  We want a set of norms about what is acceptable and what is not.  One component is to say that private actors, just because they think they are wearing a white hat, shouldn't be going out and trying to reclaim information or take action against potentially bad actors.

I think the challenge in the space is that exactly where those limits are can be tricky.  I think the reality of cyberspace is that there are always going to be gray areas in exactly what looks like intrusion into systems, for example; the exact activity that itself constitutes hack‑back is going to be slightly gray.  I think the prohibition component actually gets at some of what we already have in law.

We have a bunch of restrictions about what is permissible already.  And I think it's a really useful exercise to understand that some of the things we would potentially be talking about are already prohibited in a variety of different areas.

So it's not clear that we have to fully legally define what constitutes hack‑back as long as we have a shared understanding that certain kinds of actions are unacceptable and should be not only frowned on, but recognized as potentially illegal from a private actor standpoint.  I think that's very different from what happens on the Government side, and understanding the distinction of when something should be Government action versus when it can be private action is one of the most important parts of the conversation.

>> SETH CUTLER:  I agree.  I think the physical realm has a solid foundation that extends to the digital realm.  It comes down to intent.  So you see this gradual move through things like, you know, honeypots and sinkholing, people using DLP and UEBA and data tagging.  It gets a little bit gray when you start talking about trip wires and land mines; in the physical realm we have dye packs in a bank when somebody steals money from a vault, but it's a little grayer in the digital world.

So as data starts to be tracked or destroyed or encrypted, you know, that line is still not quite defined.  So it's an interesting and thought-provoking conversation, I think.
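The techniques Seth names sit at the clearly legal end of the spectrum because they operate entirely on the defender's own assets.  As a hedged illustration, with the file name and contents invented for the example, a canary file, the digital cousin of the dye pack, might look like this:

```python
import os
import tempfile

def plant_canary(directory, name="passwords_backup.txt"):
    """Create a decoy file whose only job is to be watched.
    Returns the path plus a baseline fingerprint (size, mtime)."""
    path = os.path.join(directory, name)
    with open(path, "w") as f:
        f.write("decoy: contents have no real value\n")
    stat = os.stat(path)
    return path, (stat.st_size, stat.st_mtime_ns)

def canary_tripped(path, baseline):
    """True if the decoy was deleted or altered, i.e. someone touched
    data no legitimate user should ever touch.  Detection only: any
    response (alerting, isolating a host) stays on the defender's side."""
    if not os.path.exists(path):
        return True
    stat = os.stat(path)
    return (stat.st_size, stat.st_mtime_ns) != baseline

# Demo in a throwaway directory
workdir = tempfile.mkdtemp()
path, baseline = plant_canary(workdir)
print(canary_tripped(path, baseline))  # False: nothing has touched it yet
```

The gray area the panel describes begins only when the response moves off the defender's systems; detection like this does not.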

>> KAJA CIGLIC:  Maybe I will just add, or reinforce, something I think it's important for all of us to remember.  There is a clear statement in the Paris Call on the prohibition of hack‑back.  I think that's largely because there have been conversations around potentially allowing it.

I think pretty much everywhere there are cybercrime laws, it is prohibited, because it would break a cybercrime law.  If you look at U.S. legislation, if you look at the European legislations, the ones I'm most familiar with, the question here a lot of the time is: would I want to weaken those protections that already exist to allow private sector actors, or individuals for that matter, to act offensively and aggressively in cyberspace?

And to me, the answer is no.  I would make another point: it's not just that what states do is really different, and obviously Microsoft encourages states to do less, thanks.  There is also, you know, the middle ground, I would say.  We are private sector actors.

Private sector actors or even white-hat hackers can partner and work with law enforcement in particular to bring down criminals in an offensive manner.  So with authority from governments, I think that's another angle, something that is much more common, and it's vetted and people have a conversation, versus, not me because I don't know how, but an engineer somewhere deciding that, you know, they see something bad and they want to help, I think completely from a helpful perspective.

And then potentially the person on the other side could sit in China or North Korea, and perhaps in a Government, and so it's even worse than attacking a private sector provider.  It could have really bad consequences for the world.  I have gone dramatic.  Sorry.

>> ED CABRERA:  I think I would add, and sorry for the technical difficulties.  I'm glad to be here with all of you.  Great panel, great discussion.  From my perspective, yes, it's not only a technology challenge in cyber.  There is not a one-for-one from the physical realm, as was being discussed earlier.  I think now we are getting into international law as well, given the challenge of what is defined as a criminal act from a hacking-back perspective.

I think that definition is going to vary, and sometimes it's not even defined in a lot of countries.  So the question is, if you are a corporation attempting or thinking of doing this to defend yourself, how do you do that well: what can you do without exposing yourself to litigation, obviously within your country, but also internationally?

So I think it is interesting, and it's a great topic, to understand what the lanes in the road are, and unfortunately we are talking about a dirt road.  There are no lanes.  So we have to, from a private industry perspective, recognize that there is a lot of exposure here, and then what is the return on investment, so to speak?  What's the cost-benefit analysis of doing hack‑back to further mitigate the threats you are facing?

So is it the last resort?  If not, are you going to go under as an entity, because you are being attacked incessantly?  I don't know what that definition is or where the bar is, but I think ultimately everything has to be put on the table and considered.

I think from a law enforcement perspective, when we start talking about the lanes in the road, it is maybe not a dirt road, but definitely not a well‑paved one.  It's paved in the sense that, yes, there is historic case precedent, international law, and experience working with other countries, but there are very gray areas and pitfalls even from a law enforcement perspective.

What are they doing to further their investigation, and are they doing activities that are deemed illegal in those countries?  So it's a very ‑‑ I'm glad we have a lot of time to discuss.

>> MODERATOR:  Let's push on this a little bit.  I think it's interesting: there is a little bit of a distinction emerging.  For some of you, there is a clear set of definitions already in domestic law that we pick up and carry forward.  For others there is more ambiguity.  Let's think about the U.S. in particular for a second.  A lot of the architecture for hack‑back rests on the Computer Fraud and Abuse Act, which is premised on the idea of authorized access, or the lack of it, to protected computer systems.

So given what has been agreed to or permitted, and the network boundary question of whether this is something you are allowed to touch, I'm curious what this group sees as the major dividing line using the authorization and network-boundary architecture.  Where is the boundary between "this is permissible, the kind of thing a private sector entity should be doing, either as a service or in conjunction with a larger platform provider" and "this is over the line"?

I wonder if we could start with Kaja and go to Seth and Alissa.

>> KAJA CIGLIC:  I largely would say it depends a little bit.  You know, I still think overarchingly the answer should be no.  But the reason I say it depends a little bit is the question of intent, and the question of broadening the conversation a little bit.  Part of the reason I want to do that is just to make sure that everybody can follow the conversation, because not everybody is a computer expert.  So as we go forward, some of the terms like sinkholing and so on, it would be good if we explain them.
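Sinkholing, one of the terms raised here, can be sketched in a few lines.  In this hypothetical toy resolver (all names and addresses are invented placeholders), queries for domains on a blocklist are answered with an address the defender controls, so infected machines report to researchers rather than to a botnet's command-and-control:

```python
SINKHOLE_IP = "192.0.2.1"  # documentation address standing in for a defender-controlled server

def resolve(domain, real_dns, blocklist):
    """Toy DNS resolver with sinkholing: lookups for domains seized from
    or associated with a botnet are redirected to the defender's own
    server instead of the criminals' infrastructure."""
    if domain in blocklist:
        return SINKHOLE_IP
    return real_dns.get(domain, None)

records = {"example.com": "93.184.216.34"}
print(resolve("evil-botnet.example", records, {"evil-botnet.example"}))  # sinkholed
print(resolve("example.com", records, set()))  # resolved normally
```

In practice sinkholing is typically done with court orders or registrar cooperation, which is part of why the panel treats it as the vetted, lawful end of active defense.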

So, you know, if you look at penetration testing, just because I'm looking at some questions being raised in the side chat as well, or vulnerability disclosure: those are obviously things I would not consider hack‑back, and the vast majority of the industry does not consider hack‑back.  Penetration testing is done with the explicit permission of the vendor in question.

Vulnerability research, while not done with the explicit permission of the vendor in question, is, again, done with a particular objective, the objective being to find and fix vulnerabilities.  And if it goes through a clear process of coordinated vulnerability disclosure, I think ‑‑ not the vast majority, but a lot of vendors will accept, work with and reward the researchers who do those things.

I think that's still a shift that needs to happen in the industry more broadly, but I think we are getting there.  On the other side, you have the question of the intent being more malicious.  So that's the complete extreme, right?  On this side is good intent; on that side is bad intent: you are a bad actor, you want to do bad things.  And hack‑back, as I think we are talking about it here, is somewhere in the middle: the intent is positive because it's defensive.

But the rules of the road and the knowledge of what is happening are a little bit murky.  And so that's why I'm a bit like, it's a sliding scale.  I'm just not explaining it very well.

>> SETH CUTLER:  I think you did great, Kaja.  It's a challenge.  We are balancing offensive and defensive within a legal framework.  So where do those boundaries start and end?  And I think we all know that the law is not going to keep up with the technology, albeit it's foundational, and there is a lot that could move from the physical to the digital realm, so I think it's a great place to start.

But I do go back to active defense, right, and collaborating, you know, with legal and Government so that we can keep those boundaries in as much sync as possible.  There are always going to be legal, ethical and practical standards we have to adhere to, so there is a lot of work that still needs to be done.

>> ED CABRERA:  I completely agree.  Seth hit it on the head.  Arguably it's the active defense.  I think what Kaja was referring to or speaking to is also something that has to be really discussed and defined.  And as Seth said, you know, the legal framework, so to speak, is that lagging indicator.

It's not, you know, being updated as it should be.  You look at the Computer Fraud and Abuse Act here in the United States, and it definitely needs to be updated, but then everybody is sort of hand-wringing when they talk about updating it: what does it do and what does it allow?  So I think the challenge going forward is for the vulnerability researchers.  You know, at Trend Micro we have the Zero Day Initiative, in which about three thousand researchers participate, and that becomes a challenge and a concern from a research perspective.

As they do their research, at what point could it be misconstrued?  We spoke to intent here earlier, so intent is very critical.  But there are a lot of complicated issues on what is at stake.  The one thing we can agree on, at least I think we can, is what Seth was saying about the active defense piece: working with Government and/or law enforcement, you know, to further a criminal case, so to speak.

And, you know, just to make sure there are checks and balances in that.  I think ultimately the biggest challenge with anything that might be construed as hacking back is collateral damage.  Compromised infrastructure is used on, I wouldn't even say an hourly basis; every second around the world there is compromised infrastructure being leveraged and utilized for international attacks, be it ransomware or data breaches and so forth.

So I think that's where the focus should be, and where we could focus our efforts to see where we can expand that collaboration and cooperation.  When we get to the terms we talked about earlier, truly hacking back and doing offensive work, yes, that gets into that challenging, complicated gray area.

>> ALISSA STARZAK:  I agree with all of those things.  I think the other piece to think about, and this gets to your question in the first instance, is the definitions we use in the Computer Fraud and Abuse Act.  It's not intended to focus exclusively on hacking back.  That's what you were getting at.  When Kaja was walking along the spectrum, it covers activities on one side of the spectrum that extend far beyond hacking back into the malicious activities as well.  I think that was the point: there is a continuum.

Personally, I think that may be okay.  I think you are right, Ed is right, that we need to flesh out what that means from a research standpoint and make sure that we give leeway to people who do have good intent, who are trying to address vulnerabilities in the system.

I don't think that necessarily means we need to fully update it for the purpose of hack‑back.  And to me there is an important distinction between those two.  I think on one of the questions on disclosure, how we get into exactly what we are trying to do with the information, what the goals are, again going back to the intent question, that's where it becomes relevant.

If you are not trying to take retaliatory action on it, but you are trying to make sure that someone can do active defense or harden their systems using the information that you have, those are all things that we want to encourage.

So the concern, I think, when we get into the law space, is how do we encourage the appropriate behavior without allowing something that could be long-term damaging and have a lot of collateral consequences that we are not prepared to deal with.  And thankfully, I think it's broader than that.  It's important that we have a set of norms, because things can get out of control quickly if you don't have rules in place.

So on the question of whether something is a hack‑back versus when something is an effective hack: if the question is entirely about intent, there is a whole world of people who could potentially say, I was trying to do something good.  I was trying to get into the systems to show you that I could, to provide the information back.

So we have to understand that those lines are going to be a little fuzzy, and we are going to have to navigate the discomfort with the intent space that comes up as part of it.  That doesn't diminish the idea of building norms or prohibiting something that looks like affirmative hack‑back.

I think we just have to understand that there is going to be some gray in the centre.

>> MODERATOR:  This is helpful, listening to the discussion, to understand the point made about the spectrum, but also the ambiguity when we bring in something like intent, where you could see someone claiming good-faith hack‑back; and to understand that there are acceptable activities within the definition, so that one single category of denied or banned activities doesn't help us.

We have invoked Government a number of times, so I wonder if we could come back to this question: what role does Government play in a hack‑back discussion?  What makes each of your entities, your interests or activities, different?

>> ED CABRERA:  Absolutely.

So hack‑back is something that touches us more than you would think.  At Trend Micro, we do a lot of threat research, which is where we come in on the research side: not only the vulnerability research we were just talking about, but also threat research.  So we do have an extensive amount of resources on threat actor groups, and in doing so we often stumble across evidence of criminal activity, and that's where we work with international law enforcement quite extensively when it happens.

So if you think about it from a threat intelligence sharing model, you have strategic, operational and tactical levels.  A lot of this becomes a tactical type of threat intelligence sharing, where we work with law enforcement.  Once we realize we have something of note, we reach out to the entities that have jurisdiction.

And it's not just one agency; it could be multiple agencies on a number of these cases.  I think that is more par for the course.  But that's when we work with law enforcement or national law enforcement to do that.  So when we are talking about hacking back and these discussions, our researchers are fantastic researchers and do a great job, but there is always this continuous assessment in each case: one, are we doing what we need to be doing for the greater good, but, two, in a way that doesn't expose us to any kind of liability?

So for us, you know, this is near and dear to our heart, and there is a lot of consternation when it comes to a tightening of the definition, be it for vulnerability work or for research.

So that's where we lie at the intersection of this discussion: from a private industry standpoint, we are there pretty much actively.  The question is how do we do it responsibly, but effectively, for the greater good.  Can you guys hear me?

>> TREY HERR:  Yes, Seth, do you want to comment?

>> SETH CUTLER:  Yes.  I don't think this directly intersects with our line of business, or that there is a specific industry that plays a direct role.  There are groups that are closer to this, like Trend Micro with Ed, where they are kind of on the edge and they can help supply that information.  And I think we can all contribute to that same aim, but it ultimately is a matter of law and public‑private partnership.

I think there are some good arguments about certain types of industry that might need additional assistance in this realm, so critical infrastructure, healthcare, you know, those tied to the social wellbeing of individuals.  That's going to require a bit more thought about how to protect and collaborate with those industries, where folks like Ed's company or NetApp can help provide that information into a public forum, similar to what we do with ISACs, and maybe a public‑private partnership where we can collaborate and let the law and the relevant entities take that information and work with it.

>> ALISSA STARZAK:  I think it's interesting to think about the different companies and who is potentially most advantaged or disadvantaged in different ways.  I think there are industries that aren't represented here that potentially have interesting things to say about it as well.  For us: Cloudflare has a big global network.  We provide both cybersecurity and performance services on that network.

We have more than 25 million websites.  So we can have a global view on what's happening, which is interesting for us on the front side of it, because we can see threats across our network in a really meaningful way.  And then, of course, once you see a threat across your network, the ability to sort of protect others from it is an important piece.  So we also collect threats from that standpoint.

The challenge for us is that the idea of active defense isn't something that we affirmatively seek to do.  We see that as a cooperation with law enforcement, a cooperation with larger groups as well, recognizing that that sort of action should be their pursuit, separate from us.

So we have a lot of the information to provide that would be useful in that space, but our sort of action is to collaborate with others who can take action generally.

I think one sort of industry piece that we haven't talked about, and that I think may be useful to talk about, is the people who would take the information we have, for example, and provide tools to do affirmative hack‑back.

So we have different sides of the equation of who is particularly involved in this space.  There is certainly an industry of people who provide a set of tools that can be used out there.  If certain industries need help, I think we need to think about what the limits of that help are as well.  Tools for hire in certain areas like hack‑back might raise even more concerns than a company that is already tied in or engaged.  That looks a little bit different to me.

So understanding, again, where the limits are and who is potentially involved, I think, would be a useful part of the conversation.

>> KAJA CIGLIC:  I don't think I will say many things.  In terms of Microsoft, we are a global player obviously providing everything from cloud computing services to still‑traditional desktop services, and we have a large cybersecurity operation.  I think we look at this from the perspective of protecting ourselves and our customers more than anything else.  So the vast majority of the time we look at people who are already active on our networks, because it is a large network, and find ways to disable them, but, as Alissa said, even so, we always do that with law enforcement.  It's not just that we provide the information.  A lot of times we have pioneered creative legal strategies, I would say, to work with law enforcement, to encourage them to think about this online environment in a slightly different way than they would normally think about it, as for a lot of law enforcement officials around the world this is a very new space.

So that's the other thing I think tech companies can add a fair amount to.  It is almost capacity building with law enforcement entities around the world to take bad actors down, and some of it is, you know, you actually put on training, and some of it is working on specific cases together, which I think has been helpful over the last few years.

I would also pull on the point that Alissa mentioned about the private sector industry that sells some of these tools.  You know, everybody on this webinar panel is a member of the Cybersecurity Tech Accord, and part of the reason the group was pulled together was to make a really clear statement about not engaging in those types of activities, not engaging in developing and selling offensive tools.  And, you know, a little bit like we were talking about norms earlier, more broadly for Governments, I think something like this for the private sector is also critical, drawing a line in the sand in terms of what is helpful, not helpful and even harmful going forward.

>> MODERATOR:  A number of you mentioned in talking about your own equities, the law enforcement community, and there is a bit of an emerging thread where, you know, if we are handing off to law enforcement or acting with law enforcement, then that might expand the scope of permissible activities or we are merely a collaborator.

I want to ask about breaking this model a little bit, because I think we are all assuming we are working with Government entities who are relatively capable and have the ability to interact.  To Kaja's point, the amount of capacity building going on to help law enforcement agencies understand these problems and get their hands around them is significant.

So think, if we could, for a moment in the context of the broader hack‑back discussion.  We are not talking about working with governments with mature offensive, defensive and creative capabilities, but rather developing countries, many of which have a lot less capacity to act.  Does our understanding of hack‑back change when it's a Government that doesn't have the capacity to act, asking or implying that private sector entities could be acting on its behalf?

If we could start with Alissa, Ed, Kaja and Seth.

>> ALISSA STARZAK:  I think the Government component of that is a little challenging.  This isn't the only area where we need development of norms.  It's not just the private industry component or laws and norms.  The Government component is important too, because I think there is a pretty significant challenge in privatizing law enforcement in that way.

So imagine you actually have a Government entity that is not particularly capable, and is certainly less capable than the private actor that it is then contracting with.

The question of who is in control and what the limits are is just much harder to figure out.  And I think it's akin to a private militia.  You get into questions of control, what the appropriate limits are, and who is actually overseeing them.  And if you don't have the capability inherent in the Government, chances are you also don't have the oversight you would want in a Government actor generally.

So I think, to me, those are things, yes, capability building, as Kaja suggested, is an important component, but that is building up infrastructures not on the offensive tool side, but on understanding what you are talking about.  What are we trying to do here?  What are our defensive strategies?  What, potentially, are appropriate laws that govern this space?  How do we make sure there are appropriate checks in place?

There is a lot of capacity building that has nothing to do with the notion of offensive capability that I think is really important to talk about.  The description you gave, Trey, is one of those things that makes me really nervous from a company side, and I'm probably not alone there, as I see other people on the call smiling too.

>> ED CABRERA:  I would agree.  Because it's true, as I was making the statements about working with Governments, all Governments are not created equal.  All intentions are not created equal.  So this definitely becomes a slippery slope.  Speaking from the private industry side: what support do you provide, and in what form?

That is a challenge, I think.  I couldn't make a blanket statement, but I think for us it would be a huge challenge and a huge hill to climb to provide some kind of support that might be misconstrued or seen as, like, what we were talking about here, this private militia type of discussion.

Providing the level of support needed to that Government, in as much as providing information sharing, is one thing, but I think the equation or analysis that has to be looked at is: what is the outcome of the support?  What are we talking about?  Is this an agreed‑upon criminal activity, or, as we know is happening in a lot of countries, is the country going after political figures and so forth via cyber means?

So I think it's one of these things where an assessment has to be done, right, if that kind of support is requested, on where we lie.

Like every good attorney would say, the answer is: it depends on the situation, the outcome and the country, organisation or law enforcement entity.  So there would be a lot of analysis.  That's why, obviously, in the United States we work with the F.B.I., the Secret Service and Homeland Security Investigations, but also internationally with the NCA.  Those types of relationships are easier, and they have been long rooted since we were created 30 years ago.  Those are easier questions, so I might be punting on this answer.

>> MODERATOR:  Kaja.

>> KAJA CIGLIC:  Yes, I think I would almost build on this.  Just remember these are all international companies operating internationally, and I think some of the concerns that were raised at the beginning about, you know, acting in cyberspace and then unintentionally attacking someone you may not want to, still apply even if Governments are involved.

There are also the points that Ed raised about the potential reputational fallout, and also just, you know, the ethical and moral compass of companies that stand for a particular way of doing business, or stand for human rights and privacy.

I think it would be really challenging to give a blanket answer, and it would be really challenging to say we will always work with Governments on something like this.  It's much more likely that, in a lot of cases, even if compelled, we will push back and try to drag them toward not doing that.

But, again, I think it's not an easy answer.  I think there is something in, as Alissa was saying, educating Governments and others, an industry‑wide effort on both the practices, cybersecurity practices and, again, law enforcement practices, but also on some of the human rights dimensions in this space.  I think that's equally important, and as much as cybersecurity is the bread and butter of the people on this call, human rights across the board is not something we should forget.

>> SETH CUTLER:  I agree with the previous panelists.  It's a great and complicated question.  Ultimately, laws still apply.  It's similar to warfare.  The difference here is, I think, attribution is still challenging, and getting this wrong is very costly.  So there might be room here for international assistance, you know, more of the public‑private partnerships, but ultimately this is one of those things where it could be a very costly error if not done well.

>> MODERATOR:  This is interesting listening to this, and I say this having been based in the U.S. for all of my professional career, it's a very western frame we are coming to this with.  There is an assumption of a clear, formal definition of what the state is, easy and robust discrimination between the state and the private sector.

We are not considering situations where political parties seen as illegitimate also have militias working with non‑state groups on a regular basis.  Thinking about the Microsoft takedowns, they are rooted in trademark infringement and copyright infringement, which is an esoteric area of practice.

So it's helpful for all of us, coming back and thinking about this again, that there is a lot of ambiguity and stickiness.  I want to point out one other sticky case.  Think about hacker‑for‑hire companies, vendors who are doing, I would say, gangbusters business, recognized in some quarters as illegitimate, selling offensive capability, in some cases supporting offensive operations, leaning very close to, or maybe breaching, the line on behalf of Governments.  Where do they fit in the hack‑back discussion?

I'm curious, for those on the call, when you think about the entities that have been called out, like the Israeli NSO Group, how you see the construction and role of these, and whether there is a distinction in your mind between sale and support.  Alissa, then Seth, Ed, Kaja.

>> ALISSA STARZAK:  You had to go with me first on that one.  No, I actually think we get into a very complicated set of legal questions when we get into those companies, in part because of the question you asked, which is focused on Governments, which is really not about private hack‑back at all.

We get into human rights issues quickly, and when you get into hacker for hire questions because you are talking about abuse of these capabilities, then there is an underlying question for the companies of what is acceptable use of those tools.  Is there acceptable use of those tools and is Government use of those tools acceptable and in what cases and for what?

So I think the challenge with a lot of those companies becomes relevant to hack‑back because it's a question of whether you can sell the tools to private companies who could potentially take action on whatever parameters they see fit.  That's where the lines in the Paris Call come into play.

Can I, as a private actor, go hire a company to do something that looks like hack‑back because that's what I think I want to do?  That would be prohibited by the Paris Call idea.  I think that would be a potential problem, but those companies raise other issues that almost automatically feed into the conversation, because it does raise the question of what the appropriate space is: what should Governments be doing in this situation?  When is it important for Governments to do things that look like hack‑back?

I think it may be out of the parameters of this conversation, but that's why those companies, I think, raise so many challenges.  It does get into a set of issues, as Kaja said before, on human rights and appropriate action, and which Governments have legitimate interests in that space and the ability to take action.  So they are really interesting questions.  I am glad I don't work at a hacker‑for‑hire company.  I'll just leave it at that.

>> SETH CUTLER:  I agree it definitely needs legal oversight.  The line about who can use those companies, you know, whether it's a Government or non‑government organisation, how that fits in a framework, where the liability is if something goes wrong, you know, incorrect attribution.  It raises a lot more questions at this point than we might have answers for.

I think ultimately it still needs to fit within that framework.

>> ED CABRERA:  I would say maybe a framework to look at, or at least to bring up in the discussion, is the fact that in the physical world, from an industrial perspective, you have companies selling arms internationally to various countries.  So in a sense, that is an accepted legal and business practice.  There are guardrails to it, but I think providing capacity building in the form of tools could be akin to providing arms, although a lot of people cringe when you put arms and cyber together, because it is obviously not a complete one‑for‑one.

But still, I think the framework here is the difference between providing tools, capacity building and/or training on one side, and, on the other side, actually hiring a company, almost from a mercenary perspective.

There is international law around private armies being leveraged and so forth.

So I would draw those two distinctions.  I have no answer to say one is better than the other, but at least in my mind that's how I think about it to try to put those in proper buckets.

Obviously, there is spillover and there are gray areas, but I think we definitely need a lot more discussion, and also a lot more accepted guidance, or norms, for this type of activity.

>> KAJA CIGLIC:  And maybe just, to me, the value of the Paris Call to an extent is in the fact that it brought together so many Governments and so many industries, but it's still not all of them.  It's like 78‑ish Governments.  So that's not everyone.  Not everyone subscribes to the idea that hack‑back, or even hack for hire, is a problematic issue.  And, you know, we see this also in terms of export controls, tying into what Ed was saying.

It might be illegal in the U.S.  It might be illegal in Slovenia, but it may not be illegal in Israel.  So at that point that's a decision for that legal system.  All of us here can say, I also would not want to work for a private hack‑back company, but, you know, there are people and there are environments where that is perfectly acceptable.

Then there is a question on the international level: how do we manage and regulate the sale of these technologies across borders?  For the private sector, as Alissa was saying, that gets complicated quickly.  When the buyers are Governments, the area gets very murky very quickly as well.  So that's kind of where I would probably try and draw a line.

>> TREY HERR:  Private military contractors bubbled up at the end of that discussion.  Although not a perfect parallel, there is an interesting question about where those entities operate: it is typically not Europe or the United States, again, areas with strong formal legal institutions; it is more often the developing world or failed states.  I'm wondering if there are parallels in the hack‑back space, where everybody is happy to take down North Korean infrastructure, but if it's a French surveillance target or the German national intelligence agency, I think it's very different.  So I think it's helpful to think about the implicit assumptions.

Let's go to the panel.  We have a question: can the panel members comment on the 2020 Australian Cybersecurity Strategy, which provides for critical product and service providers to protect themselves against cyber-attacks?

As we see some Governments starting to open up, you open the cybersecurity strategy and the phrase "protect themselves" appears almost a dozen times.  How do we think about efforts to allow affected entities to act?  To frame the question for the technology companies on this call: this is to suggest hack‑back by non‑security vendors.  Why don't we go Ed, Kaja, Alissa, Seth.

>> ED CABRERA:  I'm not a hundred percent familiar with the details, but, like we have been talking about, I think it's in the definition, right, it's in the guardrails.  What is allowable?  To allow somebody to effectively protect themselves, like Seth was saying earlier, is this the active defense definition, and what does that mean, or is this truly a hack‑back situation where there is a complete offensive component to it?

I don't know, and obviously I do recognize there are complete gray areas.  But then, to your point, there is the challenge of non‑technical organisations and companies doing this.  In what capacity do they do that?  What are the chances of collateral damage if, in fact, someone takes down some infrastructure that, oh, coincidentally, is also being used for legitimate purposes, and you have shut down X in another part of the country or around the world?

So, to speak to it, I like the idea of increasing the capability or options in a very narrow focus from an active defense perspective, and really drawing out what possibilities there are.  But giving them a sort of complete license, you know, to be able to do what they want to do, I think that's definitely a challenge.

>> KAJA CIGLIC:  I think it's terrifying as an idea. 

>> MODERATOR:  Very specific.

>> KAJA CIGLIC:  I'm not going to pretend I'm super familiar either.  I think the thing referred to is probably the critical infrastructure proposal that's floating around, and in there my understanding is ‑‑ I'm sorry, that's the dog.  My understanding is that it's not necessarily giving the critical infrastructure operators powers to do something, but authorizing the Government to intervene, which I also think probably oversteps the boundary of where we would want the line to be drawn. 

Just while I understand the desire and need to protect your own critical infrastructure, I think if each country would suddenly put forward proposals and actively try to change the technology that's being used, I think it would be an unworkable situation given that we have global platforms.

>> ALISSA STARZAK:  I think that's true.  I think one of the interesting challenges in this situation is that this is a language issue that we have talked about before.  I don't actually think that the Australian cybersecurity strategy was trying to get at the notion of hack‑back.  I think they were trying to encourage people to do more defensively, to think more aggressively about what they could do.  I don't think it was extending to hack‑back.

This is the language challenge we often have.  It's putting real terms behind what we mean, and where it stands on the continuum.  I think people, particularly in industries that are not in the cybersecurity space, often don't realize all of the things they can do that have nothing to do with hack‑back.  You are talking about industries that traditionally haven't had robust cyber defenses, so being open to thinking about what your defensive measures can be, not offensive in the hack‑back sense, but defensive, being more aggressive, thinking about what you can do and making sure you have a robust strategy on defense, is really important. 

So you don't want to diminish that by scaring them off and saying, you know, you are not allowed to do X or Y.  You want to encourage them to be more strategic about their defense, but, again, I think there is a barrier at some point where it doesn't extend to something that looks like more aggressive offensive action, which is where you end up in the hack‑back sort of space.

>> SETH CUTLER:  Like the rest of the panel, I'm not well versed in the Australian framework, but the sentiment is the same.  It's about where you expand the defensive measures in order to protect especially things like critical infrastructure and health systems, where the need for tightening and hardening the defensive measures is absolutely going to grow.  But I think the sentiment is the same with the key principles.  I don't think that shifts regardless.

>> MODERATOR:  Another from the field, to anyone who feels particularly ready to answer: when would private industry be justified in considering hacking back to promote safety and security online?  In particular, do social media companies have more justification in hacking back if press freedom is attacked?  Should the response differ depending on whether the attack is by a non‑government party or a Government party?  I'm curious if anyone feels strongly about this.

>> ED CABRERA:  The only thing I would add is, from the perspective of private industry, when would we even think about hacking back, and the just cause associated with it.  I think when health and life and safety are in play, it's an easier point; protecting liberty, I think, is a gray area, and I think we have all seen that there are challenges there.  From my perspective, it's an easier thing to ask: what are we doing, and if we don't do something, can somebody be hurt, on health and life and safety issues? 

So arguably I throw that out as a possibility, but it is a complex situation, and I don't think any company or organisation is too keen to step up and answer these questions unless they ultimately have to.

>> ALISSA STARZAK:  I think one of the challenges, and this is what Ed was getting at, is that hack‑back is an aggressive measure.  I think that's what people haven't fully appreciated.  It's not something that you do lightly.  And there are difficult questions around what happens with press freedom if you have something that is clearly intended to suppress expression, particularly if it's in a place where maybe those freedoms are not that robust.  So if you go to law enforcement in a country where the government seems to be doing the targeting and those freedoms aren't there, it raises extra challenges.

At the same time, I think the problem is, again, that the question doesn't fully appreciate the significance of hack‑back as a tactic in response, and I think that's what Ed was getting at.  So is there ever a circumstance when hack‑back is appropriate?  If life is on the line, if the significance is really at that level, it might look different, and there might be circumstances where you could work with other entities to potentially address the concern.

As important an issue as something like press freedom is, it's not going to be seen in exactly that same light.  So as important as the underlying issue is, it's very hard to imagine a circumstance that I would see as justifying hack‑back, when I don't think hack‑back is almost ever justified, going back to the Paris Call piece.

>> MODERATOR:  This is interesting listening to the answers, because we know there are a couple of entities in this discussion that offer such services.  Google has Project Shield, and CloudFlare, I think it's CloudFlare for Campaigns.

>> ALISSA STARZAK:  Project Galileo.

>> MODERATOR:  These programs seem to serve high‑risk, high‑value, vulnerable populations in this information ecosystem, and at least the intent appears common there, if not the act.  And, Alissa, to your point, there is a massive distinction between protecting against a denial of service attack and moving proactively against an adversary, even though the intent may map.

>> ALISSA STARZAK:  Absolutely.  That's exactly it.  We think those things are incredibly important from a defense standpoint, but that doesn't extend to something that would look like hacking back; the offensive versus defensive component looks very different to us.  So we see that as an important, critical set of services to provide, which is why we have a set of free services for entities that are vulnerable in that space, but the hack‑back part of that is still a step beyond.

>> MODERATOR:  We have another question here from the room: one more question to examine would be whether hack‑back is a last resort or something that would remain as a last option; could options such as the "dirt road" hack‑back be used to fix issues that can't be fixed in a time‑bounded situation?  So let me ask the group: thinking about a crisis situation, where the consequences of failure to intervene are particularly severe, what is the potential expansion of the realm of permissible activity in trying to stop, block or mitigate the actions of an adversary, let's say in a critical infrastructure network?

Do you feel the severity or time intensity might shift your sense of what is permitted activity here?

>> KAJA CIGLIC:  I think a lot of it is basically what Alissa said.  I will also say, you know, this is a hack‑back from a technical scenario, but even if you look at information being processed by us, for instance, when it comes to a live terrorism scenario, the speed of collaboration between the industry and Government gets incredibly fast.  If I remember correctly, and this is in the public realm, after the Charlie Hebdo attacks in France the Government reached out, we responded, and it was less than an hour.  I think the attack was still going on when we started sharing information.

So we shouldn't just think about whether the private sector actor then acts alone.  You could still go the traditional route of seeking sort of Government approval and enforcement support; it just has to happen much faster in this hypothetical scenario.  And this is important, because one of the concerns I have about this is that there is always that one hypothetical scenario, right? 

And I think that gets us on a slippery slope really quickly, and the idea of, oh, we will just do it once, because it's really important this time I think is problematic because it then, it kind of establishes it as an accepted practice, like slowly one by one without people even noticing.

So I would be much happier if we tried to find additional ways to respond to situations.

>> SETH CUTLER:  I agree, but I think there is room within the Paris Call principles for life safety, where we can act within that same framework, like you said, with speed being the most important thing, but done within the legal framework with public‑private partnership, especially when it comes to life safety.  I think there is room for that in the principles.

>> ED CABRERA:  I agree, and I think we have said it here: it's the public‑private partnership which is key.  There have been, as Kaja said, and on the physical side as well, times where the public‑private partnership is able to do this quite quickly.  So I think the distinction would be the unilateral piece.  I don't think any organisation in private industry would want to undertake the daunting responsibility of doing this unilaterally.  I find it very hard to believe that, even if they wanted to, they wouldn't reach out and get a quick response.  In other words, if private industry found this first and needed to act, even then, advising, working and collaborating with law enforcement, or an antiterrorism entity if that was the case, could all be done very quickly.

>> ALISSA STARZAK:  I just echo all of those things.  I think it's also worth, to Ed's point, just thinking about what we are talking about and how it would work in the physical realm.  If you got information that there was a terrorist attack, the first thing you would do would not be to go after it yourself as a private actor.

This is the same thing in cyberspace.  So as we think about those pieces, the law enforcement or Government angle is important; it has to be a piece of it, and that's what all of us are saying.  When you get into something that is a threat to life and safety, there is a Government angle that has to be involved.

>> MODERATOR:  I appreciate Kaja's point in that discussion.  So for this group, we have a number of members of the Cybersecurity Tech Accord, and a lot of people have mentioned partnership.  Following on this theme, I'm curious what we think about the need for multi‑stakeholder cooperation here, not just private sector to Government, but including civil society, including multiple actors in the context of the public‑private partnership.  What should that look like?  Where does the Cyber Tech Accord feature in these discussions?  Open to the room, to anyone.

>> ALISSA STARZAK:  Not seeing anyone else's microphone turn on, I will say I think private industry plays an important role, because so much of the online space is held by private actors.  The reality of helping build expectations, norms and principles is that private entities have to sign up to them.  So we, as a group, can play a really important role in developing what expectations should look like and what, from the private industry side, we would hope to happen, and we can explain the consequences, because we are seeing them on our networks and our systems, and we are seeing the possibilities.

So to me, of course, it's going to be a partnership.  We all have to have a voice in this space.  That's what multi‑stakeholder looks like, but we can play a really important role in making sure the expectations are right from our side, that we have a voice at the table, that we can set expectations amongst ourselves, and that we can develop those as we have conversations with Governments and civil society as well.  Ed?

>> SETH CUTLER:  I have liked the threat sharing model, similar to what the ISACs do or the CERTs globally; there is a lot to build on.  There is a lot of multi‑stakeholder involvement there, with good collaboration and a lot of sharing of information, so that would be a great opportunity to build on.

>> KAJA CIGLIC:  I would just completely agree, and I will drop the email address in the chat window as well.  You know, if you want to talk about these issues more and provide us with feedback, we would really appreciate it.  So I'm just writing it now, but we would love to hear from you.

>> ED CABRERA:  The only thing I would add, I mean, absolutely, I think Seth said it, Alissa said it, and obviously Kaja from the Tech Accord perspective, but there is so much room, and so many existing programs and formats that we can expand on.  But arguably there needs to be more, and I like to think about threat intelligence in general as strategic, operational, and tactical.  Strategically, I think the Tech Accord's capacity building and awareness work is phenomenal.  And the tactical piece is something that already happens, obviously, from a law enforcement perspective, internationally, between themselves and Governments and entities.  It's the area in the middle I think about.

And where private industry comes into play, I think, is the operational piece: how can we improve the operational discussion on a daily basis, be it cyber threat actor defense or sharing of TTPs?

I mean, the operational component, and then, as Alissa said, further formalizing the norms and protocols.  The CERTs exist and perform great functions, but where can we get a little bit more operational, to be able to have a little bit more input on that side?

>> MODERATOR:  That's helpful, I think, Kaja, unless you want to jump in.  It's helpful thinking about the context this conversation has had with respect to the Paris Call.  We have explored a lot today in thinking about what hack‑back looks like, where intent matters, the activity matters, and who the relevant entities are.

I'm wondering, given that the hack‑back prohibition in the Paris Call is probably the most actionable piece, even though I know we have talked through its diffuseness and lack of definition, whether folks think that definition presents a workable model, or whether there are changes you would want to see as the process continues forward.  What would you want to think about embedding in that language to make a prohibition like that more effective or more actionable?

>> ALISSA STARZAK:  So one thing I think we haven't explored as much as might be helpful is the different aspects that it encompasses.  I actually don't think that the prohibition that is in the Paris Call is something you want to put into law.  I don't think that's where we want to go, because of the challenging questions we are raising.

At the same time, there also is space to do a lot of things that get at some of the same concerns.  We were talking about hacking for hire, for example, and what restrictions apply to that.  Well, one of the things happening in the Government space now is a pretty significant consideration of what kinds of controls you put around those types of activities, whether it's the U.S. Government and Europe thinking about restrictions from an export control standpoint or a due diligence standpoint on how you provide tools.

That is something that gets at the same set of concerns.  So if you control the technology, for example, if you restrict it from a legal standpoint, you are potentially affecting what tools could be used for hack‑back.

So I think it's important to flesh out that space a little bit and understand that it doesn't have to be just the way it's framed in the Paris Call, a sort of general prohibition on hack‑back.  There are other ways to control the set of activities that get at those concerns and fit together.  To me, we can be a bit more nuanced in some ways than the straightforward version.

>> KAJA CIGLIC:  From my perspective, I think what would be helpful as well, and we tried this to an extent on the panel where we were giving examples, and also a little bit in the paper that I referenced in the chat, is trying to look at concrete examples of what companies are doing across that spectrum.  We talked about capacity building, but the point is to drive understanding and awareness of what kinds of activities are happening, why they are important, and how they contribute to the safety and security of cyberspace, or not, depending on where in the spectrum they are.

But I think having those conversations with Governments around the world, with civil society, and with industry is actually a good way to go forward, to try to shed a little bit of light on this murky space.  Because otherwise I think you really quickly end up at "this should be banned," "I don't know," or "this must be completely legal," and none of those three answers is satisfying, just because it is a difficult subject, and I think having concrete examples helps.

>> MODERATOR:  The illustrative aspect of the discussion is lacking when we only talk about it briefly, say for 15 minutes on an hour‑long panel on another topic.  I know we are coming up on time, so I want to ask everybody in the room for a final thought: what does the next five years of this debate look like?  The frequency of botnet takedowns looks like it's going up, and the complexity of those actions looks like it's increasing.  The TrickBot takedown is particularly interesting for the number of companies and Government entities coordinating there, but it's certainly not alone.  So thinking about the next five years of the hack‑back debate, where do you see the issues we have raised today, in particular thinking about intent and specific actions, these guardrails for behavior, where do you see those going?

>> MODERATOR:  Why don't we start with Kaja.

>> KAJA CIGLIC:  So I think you are right.  Given that we see an increase, actually, I would say we are more aware this year of the amount of criminal activity going on online.  I'm not sure there is a massive increase; it's just that we are all online a lot more.

But that doesn't mean the actors are not becoming more sophisticated and more creative.  So on the defensive side, I think we will see the defenders get more sophisticated and creative, because we will have to be.  And that will hopefully mean that we will work together more.  I think the last few examples of botnet takedowns we were involved in clearly demonstrate that there is an interest and a need, from both us and other private sector actors around the world, in working together to address some of these challenges.

But I think there will also be more and more pushing on this envelope, just because defenders will be a little bit more desperate.  I think this is why it's important that there are clear lines in the sand in terms of where we can go and where we should not go.

>> ALISSA STARZAK:  I will be quick, but I think one of the things that's happened is just having more conversation in this space.  Even the Paris Call itself, the notion that we have some agreed‑upon principles or structures, I think that will continue to grow.  We are having discussions that we didn't have, you know, not that long ago.  My hope is that we will actually expand those conversations and have more structures.  And then I will turn it over.

>> ED CABRERA:  I'm sorry, I was muted.  The only thing I would add, in the same vein as Seth's question, is: is it hack‑back, or should we raise it to a higher level and ask what we can do going forward?  I anticipate, fingers crossed, that we will do something similar to what we did with counter‑narcotics back in the late '80s and '90s, when the Financial Action Task Force was created for anti‑money laundering.  The idea is to use a framework like that to be much more collaborative in going after, say, the monetization of cybercriminal attacks, but also this notion of creating norms around even infrastructure registration, information sharing, some of that governance, so that we can maybe raise the discussion.  So we don't have to talk about hack‑back in particular; what can we do at a higher level to go after the other things that enable these types of attacks or groups as well?

>> SETH CUTLER:  I agree.  I see similar joint activity with legal entities to gather further information, using existing laws to protect.  In the Microsoft case, for example, I think there were some IP protections in there.  I think things like that will continue, trying to push existing laws in order to protect, and I think active defense will continue to shift.  So that conversation will continue.

And then in general, there are just concerns that the threats and the vulnerabilities are vast and the ecosystem is complex.  This is asymmetrical.  So keeping up, I think, is going to further the conversation over the next five years.

>> MODERATOR:  That's interesting, and I think it's particularly intriguing that, as we go through this, some of you in a single company are going to own both the origin and the destination of these attacks, so the private‑actor hack‑back question becomes an internal discussion rather than one for external norms.

We have covered a lot of ground on what is a sizable debate, and we have had some good questions.  Thank you all for joining us.  To the panelists, thank you; to Microsoft, for organising; to the IGF, for providing the Forum; and to everyone who signed in this morning, this afternoon, and this evening, we appreciate you joining for a fun discussion.  Thank you.