IGF 2017 - Day 3 - Room IX - Best Practice Forum on Cybersecurity

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> Good afternoon. It is 3:00 and we decided to start on time. We have a lot of ground to cover. May I ask you to take your seats. I am Markus Kummer, and let me just briefly introduce this year's Best Practice Forum on cybersecurity. Some of you have been part of it and have been on all the calls; we had calls right from the start of the year. This Best Practice Forum is a kind of follow-up to two previous Best Practice Forums, one on unsolicited communications, the other one on CERTs. Last year we decided it was not necessary to continue with the previous two, so we merged them in a way and decided to have one on cybersecurity, and right from the beginning this was conceived as a multi-year project. Currently the document is up on the IGF website. It's still open for comment, and Maarten will, if you have not done so, invite you to comment. After this session, Wim from the IGF secretariat will be taking this session and yesterday's main session on cybersecurity into account, so the document will reflect all the discussions here. If you have any high-level comments on the document already now, feel free to make them, but obviously we don't want to go into wordsmithing of the document. I really mean high-level: you didn't like this bit, or you liked this bit, or this direction is the right or wrong direction to go, in that sense. And with that I would like to pass the floor to Maarten van Horenbeeck, who did all the heavy lifting.

>> Maarten: Thank you very much, Markus. Really appreciate you being here. I know many of you have contributed throughout the year. This has been a productive year and I hope to showcase some of that as well today.

Now, we have a small agenda that's on the next slide.  You may be surprised that we have slides.  The goal is just to help guide the discussion.  It's not that we are completely stuck to one schedule.  If there is a discussion that needs to happen we'll make time for that but the main goal today is to briefly introduce a few experts on the areas that we worked on this year.  Many of them contributors to this year's best practices forum.  To walk through the document at a very high‑level and tell you a bit what is in there, explain the process that we followed, showcase some of the areas we have been working on identifying policy options, and then we'll go into a deep dive with the experts in the room on two specific policy areas, and I really hope we get quite a bit of discussion from everyone in the room because you're all experts coming from different vantage points with different ideas and I think we can really enrich the document with the discussion we have here today.

We'll also introduce two high-level areas that were contributed as ideas to continue working on next year, if the best practices forum is renewed. At the end we'll also dedicate some additional time for questions.

Now, let me first go into introducing the panelists. I'll keep it very short; they can tell you more about their background and where they come from. First of all, we have Deborah Brown from the Association for Progressive Communications. We have Matthew Shears, Benedict Addis, Alexander Klimburg from the Global Commission on the Stability of Cyberspace, Kaja from Microsoft, and I apologize if I mispronounce your name there. Excellent. And Cristine Hoepers from CERT.br, which is the Brazilian CERT team.

What did we do this year? Initially we determined that we wanted this year to really focus on development. And the best way to tie into development was by looking at work previously done in the Internet Governance Forum, namely the CENB, the Connecting and Enabling the Next Billion project. There had been two phases of work at that point in time. One of them focused on identifying very specific policy options that could help various stakeholders understand what policy decisions were positive or negative in actually growing the Internet and getting the next billion people, or next billions rather, online. Finally, they also did work on understanding how policy options and the growth of the Internet can help support development, or the achievement of the Sustainable Development Goals. The way we approached that work was by having a few volunteers in the group do a detailed risk analysis of CENB Phase I and II. I particularly would like to thank Andrew McComak, one of the experts who did a lot of that analysis and published that work back to the community as part of this BPF. We then did a focused call for contributions. We did two different ones actually: one focused on organizations that participate in the IGF and outside of that community; the second one focused on the national and regional IGFs, because we wanted to get expertise from the smaller regional communities. We did a total of eight virtual meetings and one in-person meeting. The in-person meeting took place at the Global Conference on Cyberspace just a few weeks ago.

Moving on to the next slide, we also had detailed e-mail communications, and I wanted to highlight the richness of those conversations by showing the topics we worked on. We looked a bit at industry responsibilities and what duty of care means, and at organizations hacking back. We identified forums that have already worked on established areas, such as the Internet of Things, and had a very rich discussion on cyber norms and confidence building measures. We looked at Internet shutdowns, at how to define cybersecurity, and at how to better engage the private sector and governments, which has been a challenge historically for this BPF. The questionnaire we sent out asked a few very concrete questions. The first one was: how does good cybersecurity actually contribute to the growth of and trust in ICT technologies, and how does it help support the Sustainable Development Goals? We also asked about the other side of that coin: if we don't succeed at building good cybersecurity, how does that hinder all of those same goals? We also included the assessment by Andrew and a few other volunteers of the CENB Phase I and II policy recommendations. We asked everyone who submitted to identify very specific policy options that help address those cybersecurity challenges, in particular within this multistakeholder environment. We also flagged that developments don't really happen in a highly coordinated way. Managing Internet governance is really managing complexity, because there are many stakeholders involved; they all do individual things and they interact in unexpected and creative ways. As a result we were curious where submitters saw responsibilities for each of those communities in helping ensure cybersecurity doesn't hinder the future development of the Internet: if the great work we do in security prevents new features and new technologies from actually helping people, that is counterproductive, and we wanted to flag or identify where that may be the case. And then finally, based on the suggestion by one of the BPF members, Valdis, we asked what everyone felt was the most critical issue that could be addressed within the context of the IGF, or where the IGF multistakeholder community could make a lot of progress, and that led to some interesting discussion throughout the year. We received a wide set of formal contributions, in total 27, which is, I believe, almost a third more than last year. We saw more interest than in previous years. These slides have a small overview of some of the submissions that we got. One of the things we did identify is that the number of submissions from the private sector and governments was lower than expected, or lower than from other communities such as Civil Society and the technical community. That's something we spent quite a bit of time discussing and working out how we can address in future years.

Then moving on, I would like to ask Segun, who is one of the chairs of this BPF, to share with us what was learned yesterday from the main session on cybersecurity, which covered many of the same topics. I wanted to make sure that as we come into our discussion we have some of that information here, so it can help us drive forward some of the discussions. So Segun, over to you.

>> Segun: We've been having back to back sessions on cybersecurity. The report is still being processed, but what I'm going to do now is just give a brief, kind of short, update. What I have is short, but let me see how it will add value to what we are doing here. Basically the main session on cybersecurity was entitled empowering global cooperation on cybersecurity for development. And the rationale behind it is that we are looking at a process that can be a continuation of the dialogue on how cyberspace should be seen as a space for development. We are also looking at the intersection between cyberspace, development, and peace. Most of our speakers came from various sectors of society and from intergovernmental organizations, and I have some of the representatives here. They basically recognized that the threats are increasing and that we are getting ever more exposed to threats worldwide. And even though the experts were unable to reach a kind of consensus based on the mandate given to them by the UN, the IGF should be a place where such a dialogue can continue, so that there are interventions on how cyberspace can be deployed for the purpose of peace and development. Then we also looked at the recommendations and norms that were discussed yesterday, especially from Microsoft and from the chair of the Global Commission on the Stability of Cyberspace. Cybersecurity was discussed fully in depth, and one of the key issues we also looked at is the perspective of human security, how to enable rights, and how these can be balanced in the context of cybersecurity. But there is one considerable threat that needs urgent attention, and that is the huge gap in capacity. Most of the speakers seemed to agree there is a big gap in the capacity to protect us, and a need for us to know, whether as legislators or whatever group you belong to, where to invest critical resources. Then we had submissions from various groups on national and international strategies, discussed in various contexts. Overall we also placed emphasis on the fact that there is a need to increase cyber hygiene across the space. Though there were several proposals for new instruments, such as the digital convention being proposed by Microsoft, there was little appetite from the floor, and panelists looked first at how existing law applies and how better implementation of existing law can help to address some of what we are talking about. However, the proposal from Microsoft actually attracted a lot of interest from the stakeholders in the session, and the Microsoft proposal looks at how the private sector can provide intervention and how governments can protect that sector from cyber threats and attacks. Then finally we had the pleasure to hear about various specific programs from other countries and organizations and what they are doing about threats. We had representation from Nigeria, which showcased a case study where a multistakeholder approach was used to drive the process of strategy development. What I can say is that what we are doing is a continuation of how we can look at cybersecurity from the context of development as well as from the context of military issues. There are a lot of contentious issues in that area that we looked at in depth, but for now I think that is what I have; the detailed report will soon be made available. Thank you.

>> Maarten: Thank you very much. Typically this BPF feeds into the main session; this year the schedule is a little bit the other way around. I think that's good for the benefit of the document, because a lot of this information can now also be reviewed, and then we can see if it fits into the work that we have been doing throughout the year.

This year, as I mentioned earlier, our goal has been to identify policy options that can serve, maybe not as recommendations, but as inspiration to organizations trying to identify what the right thing is to do to create a good, balanced cybersecurity environment that can help bring the next billions of people online safely and also enable the use of technology to achieve the Sustainable Development Goals. Now, this session would be significantly too short if we were to walk through every policy proposal, because if you look at the current draft document, it is about seven pages of policy suggestions. A great degree of thanks goes out to our consultant Wim, who has been putting together that information based on the discussions we've had in the group. I did want to highlight each of the policy areas identified as worthy of investigation throughout the year. We looked at securing the reliability of and access to Internet services, to make sure that people actually can get online in a reliable and secure way. Mobile Internet came up, in particular driven by developments in the developing world, where mobile Internet is one of the most important pieces in bringing people online due to the lack of other infrastructure. A very important subject was also how we can actually protect technologies from abuse or potential abuse by authorities. There was the confidentiality and availability of sensitive information, which is a very typical information security question that came up quite a bit in the discussion. We looked at abuse and gender based violence, and there were some interesting findings there on how this actually affects people in different economies and different countries in quite different ways; that's interesting to read in the document. We looked at shared critical services: there's quite a bit that makes up the core of how the Internet works and that needs to be protected. ICS technologies, industrial control systems, were brought up in particular with regard to bringing online technologies that help provision water and electricity. One interesting one was how information is sometimes collected and then later reused for purposes different from the original collection, and what can be done to limit that kind of unexpected exploitation of information. Deployment of secure development processes was an area of great discussion and debate, and finally how to prevent unauthorized access to devices. Interestingly enough, a few areas also came up that hadn't originally been anticipated in the review of CENB Phase I and II. I'm listing those here as well; they are also listed in the document separately. The first one was all about awareness building and capacity development: education, how we educate users, and where the boundary lies between educating a user and actually making the technology secure by default was an interesting area of discussion. There was also the cyber resiliency of cities: as cities start using Internet of Things style technologies more and more to provide services, this was an interesting topic that came up in some of the submissions and saw some discussion as well. Lack of diversity in cybersecurity: in particular, the lack of participation by women was outlined as a significant limitation to allowing us to actually grow cybersecurity as a discipline.
Cryptocurrency was brought up, as was the impact of social media on cybersecurity, in some cases actually referred to as fake news, which was considered an issue as well, and finally whistleblower policies and their implementation. As I mentioned earlier, we do not have enough time to go into detail on all of them. I would highly recommend you review Wim's excellent document. It is open for public comment right now, so you can review the individual policy recommendations and make comments on anything that you would like to see changed.

With that, I would like to spend a little bit of time discussing some of these in more detail. As I mentioned earlier, we brought a few experts with us to this group, in addition to what you bring to the table. So what I'd like to do for each of these is give two experts the ability to share a little bit of their thoughts on what needs to be taken into account as policy is developed to meet some of these goals, and then have a wider open discussion for a few minutes on things that are important. I would like to ask everyone to keep their contribution limited to about two minutes so that there's time to get a wide variety of views. The first area of policy that was brought up as interesting was being certain that we can provide safe and reliable access, and tied to that is securing those shared critical services, because those two really contribute to making sure that when a user uses the Internet, it is operated and runs in a reliable way. With that I'd like to give the word to Cristine Hoepers, an expert who has contributed to the best practices forum since the very first one on computer security incident response teams, and ask for her opinion on what some of the challenges are as we come up with policy recommendations in this area. Thank you, Cristine, and welcome.

>> Cristine: Thank you, Maarten, and thank you everyone for being here. I think one of the good things about this BPF is that we got into a lot of the discussions, and I'll try to cover some of the things not covered yet in other sessions. I think that one of the challenges of having safe, reliable, shared services is that we call them shared services, the core of the Internet, but they are actually distributed systems that are managed by different people, that depend on different industries, and that are spread across different countries. So I think one of the major challenges is to try to come up with the right incentive for all the players to adopt best practices. If we are talking about routing security best practices, RPKI, or not meddling with DNS, how do we incentivize that? Because usually the one that implements the best practice is not the one that sees the benefit outright. All of these practices are important for preventing hijacking and preventing DDoS, because it's actually so easy to launch a DDoS when we have the whole Internet at the service of criminals to perpetrate the attacks. I think this is one policy challenge: what's the incentive? How do we incentivize, not necessarily by policy, but maybe by the market or by social responsibility? So I really think we need to have a dialogue on that, and it's not easy.

For example, governments are big buyers. They could incentivize by putting in place really strict rules: we only buy equipment that follows best practices and has security; we only hire providers if they implement best practices. We all talk about this, but we can also have an incentive that comes not from policy but from a push by the market. Another challenge that I think has only been touched on a little is secure software, because everything is software nowadays. And I think academia needs to play a bigger role, and in my opinion not by researching more add-on security or more tools; we should really try to focus more on creating better professionals. Most computer science professionals, engineers, and programmers leave university without a clue about how to write secure software. If you leave a civil engineering course you know how to build something that will not collapse, but nowadays companies are getting a workforce that is clueless about how to make secure software. So we talk a lot about the role of the private sector, but academia should also think about inserting secure development, secure software, and abuse cases into the curriculum. Students need to think about that from the very beginning; it's not about creating more security training but about creating a security mindset in society. And I think this could then have a better impact on one of the challenges that a lot of people discussed here, which is small and medium enterprises, because they don't have the budget to have IT people or security systems. If we could have better software and better security hygiene, it would solve most of the problem. It's long-term, nothing short-term, but I think we should think about how we could change how we think about the industry and the creation and training of professionals. It really involves all stakeholders, and there are some multistakeholder problems to think about.
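
To make Cristine's point about a security mindset concrete, here is a minimal sketch, assuming only Python's standard library sqlite3 module, of the kind of lesson she argues graduates should already know: the first query builds SQL by string concatenation and is trivially injectable, while the second passes user input as data through a parameterized query.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # attacker-controlled value

    # Insecure: the input becomes part of the SQL statement itself.
    insecure = f"SELECT role FROM users WHERE name = '{user_input}'"
    print(conn.execute(insecure).fetchall())  # leaks rows it should not

    # Secure: the input is bound as data and never interpreted as SQL.
    secure = "SELECT role FROM users WHERE name = ?"
    print(conn.execute(secure, (user_input,)).fetchall())  # returns nothing

The same habit, treating untrusted input as data rather than code, generalizes across languages and frameworks.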

>> Maarten: Thank you, Cristine. What I'm hearing is very much focused on how and when we can incentivize organizations in particular to implement cybersecurity, and the second question is how we turn it into a mindset rather than just a patch to apply when we have a problem. Thank you, that's very helpful.

I'd then like to pass it along to Benedict Addis. He also has some deep experience in the matter, with the Shadowserver Foundation.

>> Benedict: Hello, everyone. I'm an ex law enforcement officer from the U.K., and then I got better, and what I do now is work for an organization called the Shadowserver Foundation, which is a not for profit set up to do some of the plumbing between CERT organizations. So it's day-to-day, in the trenches work of scanning the Internet for bad stuff and then daily reporting that out to CERTs like Cristine's, to tell them where the bad stuff is. I'm really operational, so I'm having to change up a few gears to talk to you guys. I think one of the themes that's emerged for me today has been the unforeseen consequences of regulation. That sounds rather boring, so let me give you a few examples. One of the things we see in my security group at ICANN is domestic legislation that's intended to improve cybersecurity actually ending up damaging it. We see a problem that's either pushed elsewhere, in law enforcement we call that displacement, or where we make a problem actually worse for ourselves. An example might be where countries seek to localize, geo-localize. For example, Russia has passed recent legislation seeking to localize and regulate services so that you have to connect to one another or store content within the physical boundaries of the country. As a result, costs go up for business. As somebody said in the previous session, the Internet is a business. Costs go up, and hosting companies and transit providers and ISPs end up connecting to each other outside of Russia. The exact intention was to localize traffic and make it harder for other countries to look at it, and we all acknowledge that happens, but it actually makes it easier to look at domestic Russian traffic. This wasn't a problem before: only 3 percent of Russian traffic went outside the country, and now more does, because ISPs respond to cost incentives. This is on the theme of perverse incentives, and they happen a whole bunch of times on the Internet. We see a similar problem, which many countries have and which my own country, the U.K., is really to blame for in a huge way: when we allow courts or governments to start blocking domain names. We think, oh, well, it's no harm if we do it just a little bit. Just do it a little bit. We all know that never happens. The U.K., shamefully in my opinion, and I'm no longer a government employee so I'm allowed to criticize my government, not that I didn't before, allowed The Pirate Bay and other copyright-infringing sites to be blocked by a civil court. As a result, what happens? You educate the population to use VPN services and similar. So suddenly traffic moves; at a lower level we see proxies used. People who learned to get around Netflix restrictions apply the same tricks to get around Pirate Bay blocks: they use a DNS service offshore. Suddenly we are leaking what governments need to start realizing is valuable information: DNS queries. Even when you are looking up just the IP address for a domain name, that reveals a huge amount about you and your Internet traffic. It is intrusive information even if it doesn't carry any content: what domain names you're looking up, where you're sending e-mails. That information is now being sent to random third parties around the Internet, exactly the consequence we wanted to avoid from a national security perspective. We've also seen IPv4 addresses run out at exactly the same time as a lot of government attention on this.
Rather than gracefully transitioning to IPv6, nasty standard though it is, it's a heck of a lot better than the alternative we've ended up with. Again, from a law enforcement perspective, we have almost literally shot ourselves in the foot. It's the problem that the FBI and DOJ call going dark. By failing to plan, by putting in stop gap measures, we end up actually making life more difficult for ourselves. And let's not forget, I'm speaking from a law enforcement perspective, but I'm a European cop. That means I care about privacy. They are the same thing. If we have hacked computers and DNS leakage, these are things that make it harder for cops to do their job but also endanger your and our privacy as users. I had a really stupid discussion early this morning that asked: would you choose privacy or would you choose security? Let's not have that in this session, please. They're the same. They're aligned with one another. And if cops are saying they can't do their job because of encryption and so on, they need to get better techniques.

I will plug a good news story we have been working on, an excellent example of cooperation including Cristine and many other people, which is the recent Andromeda takedown. Forty different countries participated, nobody blew the case, everybody kept security, nobody blabbed to the media, and we successfully took down a botnet that at the time we took it down had 2 million victims. Two million people protected thanks to this international cooperation. The Internet & Jurisdiction project, Bertrand de La Chapelle's project, has a good write-up if you want to read more, but props to Microsoft and international law enforcement for working on that. Some things to think about there. Thank you very much.
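
Benedict's earlier point about DNS queries leaking is easy to demonstrate. The sketch below assumes the third-party dnspython package and uses an arbitrary public resolver address purely as an example: by default the lookup travels as cleartext UDP, so whoever operates that resolver, and any network in between, learns which domain the client is interested in even though no page content is exchanged.

    import dns.resolver  # third-party package: dnspython

    resolver = dns.resolver.Resolver(configure=False)
    # Example "offshore" public resolver a user might switch to
    # in order to route around a local DNS block.
    resolver.nameservers = ["9.9.9.9"]

    # This lookup is sent unencrypted over UDP port 53, exposing the queried
    # name (metadata) to the resolver operator and to on-path observers.
    answer = resolver.resolve("example.org", "A")
    for record in answer:
        print(record.address)

Encrypted transports such as DNS over TLS or HTTPS change who can see the query in transit, but the chosen resolver still learns it, which is exactly the metadata concern raised above.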

>> Maarten: Thank you. Some good ideas on how policy can go very wrong: when we try to regulate something, it might lead to people making decisions that are less secure. I'd like to make time for questions, but first let me see if there are any questions from remote participants. Nothing? Other questions or discussion from the group here?

>> Walter Martins: I would like to go back to where you said it's harder to get government and industry participation. As you know, I did a whole review about that in a session on Day Zero. I think I have some answers to that which you could think about for next year. What was said is that mainly the resources on that side are perhaps less than elsewhere, because they have to focus on many different topics. What they advised is to try to find out what their top priority is. Once you know that, you will probably find focus, determination, and commitment, and you can also define some sort of end goal. If it remains vague they will drop out of the process, as has happened a few times in the past. So that would be my idea to get more engagement: to find their priority and to set some sort of end goal and commitment to the process in between. Perhaps I'll have some other things later, but I think this is the first possible way forward.

>> Maarten:  Thank you very much.  I think we have time for one more.  Any comments or additions on safe and reliable access or securing critical shared services?  Again, the goal of this discussion is to get ideas that we may consider for the final document or get concerns we may need to test against individual proposals as well that have come up.

Anyone from the group? If not, in that case we'll actually move on to the second set of policy options, or areas, rather. That was quite an interesting one, because as I mentioned earlier we did have really good participation from Civil Society in our group. This was something that came up quite a bit and led to a lively discussion. It's really focused on how we could make sure that when one's data is collected, that data is actually used for the intended purpose and it is in a way transparent to the user what happens with the data. Second, how can we make sure that we protect Internet users against potential abuse by authorities using those very same technologies. With that I'd like to give the word to Deborah Brown from the Association for Progressive Communications for the first intervention.

>> Deborah: Thanks very much. APC is an international network of organizations. We work to improve access to the Internet to advance human rights, gender equality, and sustainable development. Cybersecurity is a key dimension of our work. I'll start off by acknowledging that technology can be a key enabler of the SDGs, but in order for that to happen, in order for technology to advance sustainable development, data, networks, devices, and most importantly people must be secure. We've observed a trend of large scale development projects relying on technology in order to implement sustainable development and achieve the SDGs, and we see some risks for some of the world's most vulnerable people with this approach. Just to give a few examples: in India, this year there were several reports of large scale data breaches with the biometric system called Aadhaar. Earlier this year it was reported that Aadhaar data could have been leaked due to a lack of IT security practices, and there were additional reports throughout the year of similar cases. So in order for people to trust these programs, for them to give up their data, to be part of programs that can greatly improve their lives, cybersecurity must be improved. One example I would point to is the UN privacy and data protection principles for harnessing big data for development and humanitarian action. These principles call for reasonable and appropriate technical and organizational safeguards to be put in place to prevent unauthorized breaches of data, and also for risk and harm assessments to be undertaken to avoid any data breaches and to take risk mitigation steps before any new or substantially changed project is undertaken.

And I'd like to add one final point, which is that consent is very critical. People who face discrimination on the basis of gender, race, sexual identity, age, or any other characteristic are often part of these programs and have their data collected for the provision of goods and services or to inform public policy in order to achieve the SDGs. In some cases meaningful prior consent isn't there. In other cases data may be collected for one purpose and then used for another, and if the data is insecure, then these vulnerable, at-risk groups can be the target of violence or discrimination or harassment. I think that often, even when there are the best of intentions to use technology to achieve the SDGs and sustainable development, there are some risks that come hand in hand with that. Thank you.

>> Maarten: Thank you very much. And I have to tell you I was excited when I saw the APC submission this year, because I was unaware of all the principles you outlined there. I think a lot of people from other stakeholder communities, and I'm from the technical community myself, aren't aware of a lot of these tools they can actively use.

I'd like to pass on to Matthew Shears with GP Digital.

>> Matthew: Thanks, Maarten. What's really interesting about these policy areas that you listed is that so many of them are interlinked and in a way indivisible, and I think that points to some of the challenges with addressing this one in particular. Let me say a couple of things; Deborah has covered the human rights aspects, so let me point to a number of implications of some security questions which are critical to data theft and will become increasingly critical. It's stating the obvious, but we face an increasingly data rich and connected future, and that means that we're going to have to work ever harder to prevent that from becoming an increasingly data insecure future. These concerns have to be addressed, otherwise we're pretty much at risk, I think much in the direction that Deborah was going, of stepping into a privacy-less future in which the capability for extracting and analyzing data will far outpace the appropriate societal or policy responses and our ability to exercise reasonable levels of control. One of the biggest challenges we'll face, and I think it's a pretty obvious one, is ensuring the appropriate level of security in a range of devices that we are going to become more and more familiar with, which are those devices at points of sale that are accessible to consumers. So we're talking about the consumer market, the small home and small business market, where market pressures, particularly in terms of device cost, will determine the levels of security that are embedded in them. That poses a significant challenge, not only for manufacturers but for users, in terms of understanding what those levels of security are, how they might be upgraded, and whether they're upgradeable or not. This comes directly back to the issue of theft. The only way you can actually prevent that kind of theft and possible repurposing of data is to be far more aware of your responsibility in terms of cyber hygiene and far more aware of the technological capability of the devices connected to the network. That is a set of challenges we have to become very familiar with, and the Mirai attack that took down services is a very clear example of how much we as consumers, and not just Internet policy experts, have to be aware of what we're connecting to the network, not only to protect our own information and data but also to protect the network itself. I'll leave it there. Thanks.

>> Maarten: Thank you very much, Matthew. A different perspective, but I think it aligns well with the problem itself. Avoiding that insecure, or data insecure, future is exactly one of the goals that we have as part of the work we're doing here. So thank you very much. Are there any comments or suggestions from the audience, things we haven't heard yet, that you think are important for us to take along in this paper and in this work?

Yes, sir?

>> My name is Siva Subramanya. We had our own discussion on the broad effects of cybersecurity, which is not all that different from the broader security area. One of the things we identified was the application permissions promoted by operating systems like Android; that needs a bit of cleanup and attention from Apple, from Google, and from other ecosystems. In the initial stages they allowed developers of applications to ask for any permission in order to foster the application environment. Now that the environment is established and there are thousands and thousands of applications, it is time for the companies to move on to the next phase and get things cleaned up. That's one very important aspect. Another, broader aspect is that we have been responding to security threats which were quite real, and some of the measures taken by law and order agencies and governments are quite warranted, but we ended up altering the way we live our lives. If you look at how we live our lives today and how we lived our lives 25 years ago, there is a drastic difference. Is there any way by which we could have more conducive policies that do not take away our freedom and do not alter the way we live our lives? That's a much broader question that needs to be examined by governments and security agencies. Thank you.

>> Thank you very much. Those are definitely two very important areas. Policy confusion didn't come up specifically in the discussions this year, but I think it's definitely a challenge: as more and more of these policies come to be, and certainly as they grow, how do we make sure they don't lead to an environment where we lose all control as users. That's definitely a good point we can take forward. The first one is also very relevant. We had discussion on the software development life cycle, which promotes the idea that software needs to be developed in a secure way. I kind of want to look to my left at Kaja. I know you cannot represent the ecosystems that were just mentioned, but I'm wondering if you have anything you would like to add on that. It was actually a small part of previous contributions by Microsoft in previous years.

>> Kaja: I think Microsoft generally strongly feels and supports that there is a need, to your point earlier, to increase the understanding of IT professionals overall as they come out of university, and to ensure that at university or even earlier there is a specific cybersecurity aspect to their education. I know that in our own internal processes we have basically adopted an approach where we hire people and then train them on security, and I know that's not necessarily scalable. The important thing is also to think about the fact that there is now, I think, a variety of specific cybersecurity degrees that have emerged over the last years. That's all good and great, but you actually need it across the whole IT ecosystem. Similarly, in terms of secure by design software development life cycle approaches, we have tried to put what we have learned out there, actually into international standards, for people to be able to access and learn from what we learned when we had to go through this quite steep learning curve over the last 15 years or so. And I think we're increasingly moving into IoT and also AI engineering, where a lot of the devices put into the market, particularly in IoT, and I would encourage you not to just think about consumer devices but also devices being introduced and integrated into critical infrastructure and into enterprises, are still being built with security almost as an afterthought, if at all. Understanding what can be done there, with this dramatic expansion of the landscape, and whether it's just training or whether it is a collective effort to find a way to engineer security into the network traffic in some way, is something to think about.


>> Maarten:  The gentleman at the back.

>> Thank you very much. My name is Dimson Alufia. One thing we have not heard much about with respect to securing infrastructure and enhancing security is regulation. Matthew Shears talked about the Mirai attack, which was massive, and about IoT and Artificial Intelligence; what does that mean for regulation?

>> Maarten:  Good question.  Is there anyone on the panel who would like to tackle this and maybe share some of your thoughts on it.

>> Unidentified Speaker: Thanks for the question. We're not in a regulation free zone; we're in a zone of overlapping regulation. The Dyn attack that you refer to, this is the same botnet of insecure Internet of Things devices, mostly home TV recorders and security camera systems that had default passwords set, cheap, nasty devices connected to fast broadband connections. There's plenty of regulation around these. There's plenty of regulation around the network. There's no enforcement; that's the problem, I would argue. Regulators produce guidance for these systems. They're often sold from China into many different countries, which have varying levels of cybersecurity but also consumer protection standards and civil legislation, all of which could be used by law enforcement or, more likely, by consumer protection agencies to hold these companies responsible. Instead, what happened is that some person, in fact I was involved in his arrest, Daniel K., gathered all these machines together, a trivial, really trivial attack, so stupid really, and this is what's scary, and took out a bunch of Internet infrastructure, including the country of Liberia just before elections. These same devices were directly used to influence an election, because we failed collectively. So when you talk about regulation, it's not the problem, in my opinion. It's about actually understanding, in each country, that we all have responsibility for not buying crap devices, excuse my language, and then actually, if we do have crap devices, excuse my language again, in the country, that we remove them. As a law enforcement person I hate to say this, but I just read the manifesto of a person called, what was his name, Dr. Cyborkian? Anybody read this? Over the last 18 months there has been a concerted attempt by a hacker, if you like, who took over IoT devices that were vulnerable and destroyed them using software, I think about ten to 20 million devices. This hacker, who believed they were doing the right thing, whether you agree or not, said that destroying consumers' devices and getting them to say it's broken and take it back for a refund was the only meaningful way to highlight these bugs, these terrible flaws in these devices, and actually create an economic incentive for the companies who manufacture them to do something about it. And guess what, two manufacturers did. It seems that a vigilante is actually the only thing standing between us and chaos. That's not necessarily the lesson I want you to go away with, but that's how bad things are.
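
The default-password weakness described above has a well-known engineering mitigation: give every unit its own credential at provisioning time instead of a shared factory default. A minimal sketch follows, using only Python's standard library; the function and field names are hypothetical, not any vendor's real API.

    import hashlib
    import secrets

    def provision_device(serial_number: str) -> dict:
        """Generate a unique, non-guessable initial password for one unit."""
        password = secrets.token_urlsafe(12)      # random, per-device
        # Store only a salted hash in the firmware image, never the password.
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return {
            "serial": serial_number,
            "label_password": password,            # printed on the device label
            "stored_salt": salt.hex(),
            "stored_hash": digest.hex(),
        }

    print(provision_device("CAM-000123"))

A device shipped this way cannot be swept into a Mirai-style botnet by trying a short list of universal default credentials, which is the specific failure mode the panelists describe.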

>> Unidentified Speaker:  Maybe one of the outcomes would be to encourage regulators to be responsive to what they need to do.

>> Maarten:  Thank you very much for that comment and thanks for the discussion.

I think we have one more comment before we go into the next section.

>> Unidentified Speaker: Thank you for mentioning that, Benedict, because that's exactly what the duty of care document the Cyber Security Council published this year proves: there are so many regulations, or consumer protection agency rules, already out there that are simply not applied to the products we buy. That's one comment. The other point I want to make is that in the anti-spam Best Practice Forum of two years ago there was something about a book called Future Crimes by Marc Goodman of Singularity University, in San Francisco or Los Angeles. He pointed out a potential solution to the threats from all these bugs in software, and he made a comparison to the fight against malaria: there were hundreds of thousands of people putting data on malaria around the world, and that actually helped scientists on the way to cures, which I believe are around the corner at this point in time. If we were to do something like that for software, with all the people out there trying to find bugs in software, and collect that at a single point, and this is Marc's idea, not mine, but it's in that BPF as a recommendation, if that's something that could be organized by companies like Microsoft, together perhaps with governments, setting up a sort of neutral entity in which they come together, then we might find the bugs in software tens or hundreds of thousands of times faster than happens at this point in time. I think that may be a good idea to explore next year. It was left somewhere in the mix between the two BPFs, and because of this discussion I just thought of it and bring it back to the table.

>> Maarten: I think Cristine wanted to add something as well earlier.

>> Cristine: You touch on a point that I touched on the other day in another panel, a panel about CSIRTs. Google is already doing something like that with its open source software testing effort: they now have continuous testing of the major open source software, and they even presented that, had that system been in place at the time a famous bug was discovered, it would have taken ten minutes to find it. So there are already some initiatives; in the case of Google they are focusing on the open source software they use, of course. Then there is another incentive problem: we need not only a place for people to report, but people looking for bugs, because we have a market where zero days are expensive, and that is a market driven by government policy, another kind of perverse effect of policy. One of the things I wanted to comment on is the ecosystem, and I'll touch on it later in the session, but I think one of the key problems is not only that we don't have enough professionals. The universities are producing professionals that companies have to retrain, and all the major companies can do that. But on the other side we have businesses that are not software businesses, or were not until now, that are now essentially software shops. They have the early '90s behavior of software companies, not 21st century behavior. This is happening in IoT, and it is also happening in the OEM and carrier model for cellphones. Part of the problem we have today with the market for cellphones is that the OEMs, the people that make the mobile phones, still think in that very old telecom mindset: I make it, I customize it for the carrier, and nobody touches it anymore. They need to realize they are not just OEMs for cellphones; they need to make it possible for that software to be updated. So many stakeholders are making bad decisions, and it is all just piling up. I think it's a very complex problem to deal with in the future; this alone would be a policy challenge for years to come.
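
The continuous-testing idea Cristine refers to, which Google's effort for open source software does at scale with coverage-guided fuzzers, can be illustrated with a toy sketch: throw large volumes of random input at a parser and treat any crash as a finding. The parse_record function here is a hypothetical stand-in for the code under test, not part of any real project.

    import random
    import string

    def parse_record(data: str) -> dict:
        """Hypothetical parser with a lurking bug: it assumes a ':' is present."""
        key, value = data.split(":", 1)  # raises ValueError if ':' is missing
        return {key: value}

    def fuzz(iterations: int = 10_000) -> None:
        alphabet = string.printable
        for i in range(iterations):
            sample = "".join(random.choice(alphabet)
                             for _ in range(random.randint(0, 20)))
            try:
                parse_record(sample)
            except Exception as exc:     # a crash is a finding worth reporting
                print(f"iteration {i}: input {sample!r} triggered {exc!r}")
                return

    fuzz()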

>> Maarten:  Thank you very much.

Moving on to the next slide I think we're going to jump into a new topic.  Regarding the ‑‑ oh, I see one more question at the back.  I think we have time for one more.  Go ahead.

>> Unidentified Speaker: Thank you. I'm (?). It was said that IoT devices were already covered by consumer protection law. That is not the case in Switzerland, and I mean that consumer protection does not keep a device from being part of a botnet, and I'm not even sure that introducing such laws would change anything about the motivation of the device creators. So I don't think that's the road to follow.

>> Maarten:  Thank you for adding that.

I'm going to jump into the next section. I want to thank you all for adding to the discussion here on what we have in the document. What we'll do is take some of these comments and integrate them into the document moving forward. If there are issues that we're not able to fully close on here, we may pick them up on our mailing list; I highly recommend you join it if you have the ability to contribute a little bit of time to finding good solutions for each of these problems. Now, towards the end of the year we started thinking through what areas we could work on in the next few years, if the best practices forum is renewed. The way we did this was to ask people what our biggest challenges are in terms of cybersecurity, and in particular those challenges that we could meaningfully address in a multistakeholder way. If something is a purely technical issue that can only be addressed by one actor, the IGF probably isn't the best place to discuss it. We came up with a number of different areas. For the meeting I divided those into policy and governance issues, technical issues, and then one that particularly stood out: fostering a culture of cybersecurity and core values. That one was interesting because we spent a fair amount of time talking about education and about what cybersecurity truly means. One of the most interesting threads, in my view, to read up on over the last year was whether or not an Internet shutdown is a cybersecurity issue, and I actually found myself persuaded by some arguments brought up on the list, and I think others at least had some things to think about afterwards. But it all led to the fact that there is actually disagreement on what cybersecurity means; the definitions that do exist and have been built aren't universally accepted. I'll walk through the technical issues really quickly: Internet of Things came up in quite a few submissions; critical infrastructure; Internet resources; a number of very specific types of attacks that manifest themselves in a way that requires multiple stakeholders to address them; cybercrime and ransomware; AI, in particular the possibly discriminatory results of using those algorithms when it's not really understood how a decision is made; mobile network security; abuse; and a lack of education and end user awareness. And finally the policy and governance issues: the development of internationally agreed cyber norms; the lack of frameworks; international cooperation on legal principles; state stability and peace in cyberspace, which was definitely a big part of the main session yesterday and was quite interesting; increasing awareness of risk management processes; and one particular concern that I thought was very interesting, the fact that people look too much for solutions that solve it all, and so they don't pursue the ones that get us to 80 percent. That was raised as a very specific challenge which I think was quite interesting. And then awareness of criminal justice practices. Now, when we ended up looking at the work we may do in 2018, two specific things really stood out. The first one was a culture of values and norms. We talked about defining cybersecurity, making sure that stakeholder groups understand it the same way, and identifying what values are underneath; that can be something that then leads to assessing, debating, and improving on cybersecurity norms wherever they're developed. We have two experts in this area on our panel, and I'd like to get them to talk a little bit about what they see as the future.
And in particular I would like to challenge them with the question of how communities that may not be states, Civil Society and the technical community, can contribute to these initiatives. I'd like to give the word first to Alexander Klimburg.

>> Alexander: I'm the director of the Global Commission on the Stability of Cyberspace initiative; I'm the head of the secretariat. We are a multistakeholder endeavor that aims to develop norms and policy initiatives to help advance international peace and security in cyberspace. This BPF document is pretty exciting for us because it marks norms of responsible behavior as one of the key areas for future stakeholder conversation, and the quote you have in it, that it's necessary to establish a set of principles and values understood by each stakeholder group, is pretty much the language that has been used within the UN Group of Governmental Experts and other organizations dedicated to the so-called international peace and security discussion in cybersecurity. I think it's important to stress that norms are non-legally binding, voluntary agreements that can be made by any stakeholder group. They don't only apply to states, and not only states make them. ISOC has created a norm called MANRS for routing, and there are many other similar forms of explicit and implicit norms currently being exercised. So they're not laws, but they are agreements and principles; they're always somewhat up to interpretation, and they provide soft incentives and soft disincentives for adherence. This is a continuation of Cristine's question: how do we incentivize good behavior, like adopting best current practices? When these questions come up we sometimes think in terms of regulation or contract design; another solution is norms. Norms, as I said, are a highly welcome addition to the conversation here. Norms have been used in the UN GGE context in 2013 and 2015. In 2013 the UN GGE report effectively endorsed regional organizations to help develop their own norms and confidence building measures, and in 2015 it even put forward some concrete norms of its own. Those norms were, for instance, thou shalt not interfere with critical infrastructure, thou shalt not attack CERTs, and thou shalt assist another state in mitigating cyber interference. All these norms are only applicable in peacetime; in wartime, of course, different laws apply. So norms were developed for states in this context, and that was part of the problem. The UN GGE realized it needed to open up a little bit and be accessible to a wider group of stakeholders. That was also part of its 2015 report, which I will spare reading to you in full; it is a bit dry. But one of the two mandates of the Global Commission on the Stability of Cyberspace follows from the UN GGE recognition that stakeholder representation has to expand. In particular, it was necessary to be able to inform the norm development process that occurs within the UN First Committee international peace and security community. We believe one of these norms, for instance, is the one that we have just recently introduced, which is the call to protect the public core of the Internet. And this is what we find so particularly exciting: this call to protect the public core of the Internet is potentially connectable to something that you have been discussing here today, which is principles, a further form of incorporating common beliefs, particularly the do no harm principle. So, for instance, I would like to quickly read out what that norm is. It's extremely short, usually a good sign.
It reads: without prejudice to their rights and obligations, state and nonstate actors should not conduct or knowingly allow activity that intentionally and substantially damages the general availability or integrity of the public core of the Internet, and therefore the stability of cyberspace. Of course, the big question is what this public core is formed out of. In this context we had a nice and involved discussion, which is also in our BPF submission, about an inner and an outer core. The inner core is clearly identifiable as the naming and routing functions of the Internet, DNS, BGP, et cetera; it's kind of clear that that really is crucial for the proper functioning of the Internet as a whole. The outer core is a bit more fuzzy and can include a whole range of different assets, including for instance cables, Internet exchange points, or even certification systems. So it's not exactly clear what falls into that area. This is one of the things we're hoping for feedback on from both this community and other communities: to find out exactly what assets and what services are included here. But it's also a chance to think more about a principle-based approach to these issues. We think that the outer core can be used as a point of departure to talk about a general precautionary principle, both for state and nonstate actors. For state actors it's important to consider, when they're engaging in lawful activity in peacetime, under law, that they don't do something that damages the core, for instance by wide scale interruption of routing. If we want a recent example, just look at what happened three or four weeks ago with a certain AS in Russia. That's one example. It also applies to nonstate actors, and for nonstate actors it's quite simple: if you are offering a product or service that could be used to cause disruption, then you should commit to an enhanced level of due diligence on your systems, to make it at least somewhat plausible that your service or product will not be used for such a nefarious purpose. That basically means you're committing to a do no harm principle, effectively taking a high level of care into consideration on the basis that you just might be essential to the operation of the Internet, at least for a short period of time, and therefore have obligations beyond those to your stakeholders, such as your shareholders. So we are hoping that something like protecting the public core could become something like a do no harm principle for core service and product developers, one that could stand next to the end-to-end principle, the universal access principle, open standards principles, and the many other principles being developed, and effectively incentivize actors to take an extra level of care when their products and services might be of particular importance to the ecosystem as a whole. Sometimes, of course, it's quite helpful to spell these things out in writing. So thank you very much.

>> Maarten:  Thank you very much for that contribution and sharing some of the exciting work that your organization is doing.

Another organization that has spent quite a bit of time over the last few years actually talking about norms, and in fact proposing a few, is Microsoft. We have Kaja here who can tell us a little bit more about what they have done.

>> Kaia:  Sure.  First of all I would like to echo Alex and also praise the work of the commission.  I think the work you have been doing is great, and you are one of the few organizations in this space that actually does bring multiple stakeholders together.  I think the reason why Microsoft, a few years ago, I guess in 2012, started talking about international cybersecurity norms was largely because this was a debate that was held by governments for governments, a little bit like Alex referred to.  We felt there was a need to shine light on the process, a need for those decision makers to hear voices from industry and others, and so we started making a fuss and proposing a few things.  It's great to see norms included here.  We see them as a critically important contribution to international peace and stability in cyberspace.  You have probably heard us talk about the digital convention this week, but I think that's more of a long-term process, and the need to come to a set of agreements in the next few years around norms of behavior in cyberspace for states and also non-state actors is critical.  One of the ideas we proposed as part of this was to take the UN GGE's 2015 report, which put forward the 11 norms Alex referenced, as a starting point, for example, and look at how a multistakeholder discussion could inform how these could actually be implemented.  I think states have looked at it, and I feel there is an obligation, as part of the resolution that was passed in the UN, for them to report back on implementation of those norms, but very few countries have actually done anything about it besides committing to it in theory.  Oftentimes the norms are written as fairly vague statements, and I don't think they are quite usable as they stand; there's a level of interpretation, and I think it would be important for us collectively to investigate those.  The other option, a little bit like Alex was saying, is to try and identify other areas where norms would be needed.  One of the norms that Microsoft put forward was a norm of noninterference in electoral processes; that was one that was highlighted in the last year, and there are definitely others.  The public core of the Internet is something we strongly support, and the ability to have a conversation with academics and civil society around the world to see what's important, because the debate so far has honestly been focused on a narrow geographic scope, is something we would like to see.

>> Maarten:  I see a question here at the front and then we'll go to the remote questions.

>> Sybil Subramanya:  Industry participates in developing norms and in the conversation, but the only predominant non-state actor in the multilateral process is industry.  It has always been the case that when governments talk about a government-only process or a multilateral process, in reality it is not just a government-only process but in many ways a government-industry partnership, and historically that has been so.  When it is expanded into a multistakeholder process, then civil society is brought in and there is a tremendous amount of balance.  I think we should shift from consulting just industry to consulting everyone, for balance.  Thank you.

>> Maarten:  Thank you very much.  We'll get to your question in a second.  Any remote ones at this point?  No, okay.  Then go ahead.

>> Unidentified Speaker:  I have a very specific question.  My name is (?), a science professor from Kosovo.  Looking specifically at the technical issues that you showed just a couple of moments ago, I'm quite surprised, I would say, not to see the issue of e-mail security.  Looking at how we conduct business nowadays, everything is done via e-mail, not fully encrypted end to end.  I think this is a particularly sensitive issue, and I hope other cybersecurity experts would agree, because this is what is causing the most trouble in cyberspace, and it's about the nature of the business we are doing nowadays, which goes completely via e-mail; it's completely wrong how we are using e-mail in general.  We have encryption between the e-mail servers but no end-to-end encryption from user to user.  So I think this particular issue should be addressed, and if not, I would like an explanation of why you don't include this specific cyber issue, e-mail security, as a technical issue.

>> Maarten:  Thank you for raising it.  The reason it wasn't included is that it didn't come up in any of the submissions.  I'd encourage you to still send in more detailed feedback if you have it.  Perhaps it's something we can tackle as the process continues.  Thank you.  It's definitely an interesting addition.
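For context on the distinction being drawn here: transport encryption such as STARTTLS only protects a message on the hop between the mail client and its server (and, where negotiated, between servers), while end-to-end encryption protects the message body itself, so relaying servers only ever see ciphertext.  The minimal Python sketch below illustrates the difference; the server name, login, and addresses are hypothetical placeholders, not real infrastructure, and encrypting the body to the recipient's key (for example with OpenPGP or S/MIME) is shown as a separate step because it has to happen before the message is handed to the mail system.

# Minimal sketch.  "mail.example.org", the login, and the addresses are
# hypothetical placeholders, not real infrastructure.
import smtplib
import ssl
from email.message import EmailMessage


def send_with_transport_encryption(body: str) -> None:
    """Submit a message over STARTTLS.

    This encrypts only the client-to-server hop (and, if relaying servers
    also negotiate TLS, the server-to-server hops).  Every mail server on
    the path can still read the plaintext body.
    """
    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.net"
    msg["Subject"] = "Quarterly figures"
    msg.set_content(body)

    context = ssl.create_default_context()
    with smtplib.SMTP("mail.example.org", 587) as smtp:
        smtp.starttls(context=context)   # transport encryption only
        smtp.login("alice@example.org", "app-password")
        smtp.send_message(msg)


def send_end_to_end(ciphertext: str) -> None:
    """For end-to-end protection the body itself must already be encrypted
    to the recipient's key (e.g. with OpenPGP or S/MIME) before submission;
    the servers then relay only ciphertext."""
    send_with_transport_encryption(ciphertext)

In other words, STARTTLS alone leaves the message readable at every relay, which is the gap the questioner is pointing to.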

I think Matt here wanted to add something to the norms discussion as well.

>> Unidentified Speaker:  Thanks, Maarten.  The challenge with norms is one that Siva just noted: many times they are norms, but not necessarily geared towards, or developed by, a multistakeholder process, particularly in cybersecurity.  I wanted to draw your attention to one set of norms that were developed through a multistakeholder process and that deal with human security and rights: the norms that came out of the Freedom Online Coalition Working Group 1 on an Internet free and secure, which developed 13 recommendations, many of them norms.  Just to give you a flavor, one is about how cybersecurity policy should not be developed without taking human rights into account.  A second is that the development of cybersecurity-related laws, policies, and practices should, from their inception, be human rights respecting by design.  There's a real focus on that.  The interesting thing about these recommendations is that the work put into them has been supported by the 30 governments of the Freedom Online Coalition.  This is part of a body of work that we can leverage to bring about an opening of the cybersecurity space and to bring more multistakeholder engagement into it.  I recommend you look up An Internet Free and Secure.  Thanks.

>> Maarten:  Thank you for adding that, Matthew.  A second area of potential work for next year, brought up in our meeting at the GCCS, is that we could work on the digital security divide.  This is a subdivision of the digital divide.  Historically the digital divide has been about users not being able to access the Internet due to various limitations; as that is addressed more and more, there is a concern that users who either do not have the funding or do not have another way of accessing specific security measures may actually be using an Internet that is less trustworthy than the one others use.  I'd like to give the word to Matthew to tell us a little bit about the ideas he has around this area that could potentially be useful for the BPF.  Given that we're running out of time a little bit, I'm going to ask everyone to really limit it to two minutes for the next few speakers.

>> Matthew:  This is a really interesting challenge.  People are referring to it increasingly, but it came out of some work I was doing with the Internet Society on the Internet futures report that was released in September.  The fear is that while we are working very hard to address the access divide, when you start to consider the challenges globally, the different levels of cyber awareness, the differing levels of development, and the differing levels of financing for putting in place cybersecurity systems, processes, and frameworks, it raises the specter of a security divide.  Not just a digital divide, not just an access divide, but a security divide that in turn can grow in parallel with the progress we're making on the access divide.  I think this is not only a security divide issue but also an SDG issue.  For example, if you take SDG 2, which is about zero hunger, they're calling there for doubling agricultural production.  Much of that is going to have to be done through embedding technologies, many of them control systems, into those production processes, and you can imagine that those systems will be incredibly vulnerable without the appropriate level of security, which could in turn impair attaining the SDG.  This issue of the security divide has implications across the board.  I'll leave it there.

>> Maarten:  Thank you very much, Matthew.  Somebody I'd like to ask about this is Christine.

>> Christine:  Thank you, Maarten.  I would like to make just two brief points.  One point concerns a lot of the problems we see in Brazil.  Brazil is a developing country; so far only about half of the population uses the Internet, and of those, more than half use only a cell phone, which is the current trend.  Our host organization has also conducted, since 2005, the national surveys about ICT use in the country, and if you cross-reference the household questions about security, how aware people are of security, it really correlates with literacy.  Not only digital literacy; it is really about how educated they are, whether or not they have formal education.  And coming from our perspective, we produce a lot of end-user awareness material, which is still important, and you can see that even for companies today the major thing is phishing, targeted phishing, going for the human element.  That material is still needed, but it is really hard to do user awareness when the user does not understand the technology or the risk or threat.  So this then comes back to having better technology, to what we discussed before: we need better software, a better ecosystem for smartphones, smartphones that can be updated.  In Brazil most of the smartphones being sold still have an Android version from at least three years ago, so they come to users without the ability to be updated.  So I think it is really more of an ecosystem issue, as I said before: OEMs need to realize they are selling software and to think about creating a better ecosystem.  I think literacy is really a challenge for the digital security divide and I think we should think about that.  Thank you.

>> Deborah:  Thank you.  I wanted to highlight two types of communities that might face more risks in digital security.  The first relates to the point I made earlier: when data breaches of sensitive personal data occur, certain communities are more at risk, such as women and people who face discrimination based on their sexual orientation and gender identity.  To give an example from Brazil, from São Paulo, there was a database containing the records of 650,000 patients that was made public, including patients who underwent abortion procedures, and in Brazil abortion is illegal even for people at risk of carrying Zika.  If those identities are made public and these women are revealed to have had an abortion, there is a much more severe consequence.  The second is that women and people who face discrimination based on their sexual orientation are often proactively attacked or harassed online; examples include threats of rape and death, cyberstalking, and hacking of e-mail accounts and mobile phones, and these have consequences offline as well, especially when someone's address is made public and threats are made against them online.  While there are lots of measures that could be put in place to prevent such threats, and I'll wrap up soon, last year's BPF on gender and access actually showed that one of the barriers to Internet access is these online threats.  Sometimes being online and being exposed to these threats isn't worth it.  I'll stop there.

>> Maarten:  Thank you very much.  These are two areas we plan to consider working on for next year, if the BPF is renewed.  I see one comment from Walt.

>> Walt:  Thank you; I have to run off because I have to speak in a session that ends in 15 minutes, so I'm going to jump ahead to the rest of the program and make three very short comments.  As you can see, we're already running out of time.  One, what I would advise is to ask the MAG for more flexibility for Best Practice Forums in the whole program, because topics came up this year that need a whole session to discuss, not just a mention.  Two or three slots could be kept open to be filled in during the year while the MAG process goes on as it does: more flexibility in the program for Best Practice Forum and session work in general.  Two, if we have a success, let's learn to celebrate it.  Let's put it out there, because that makes us more attractive for others to participate in the future.  How do we identify success?  How do we disseminate it and then reach out?  Three is more commitment from the MAG: once they have decided this is our topic of the year for the Best Practice Forum, then we need a commitment on outreach, so it is not that we are on our own afterwards; that needs some help.  Fourth, let's identify a case that we can work on together and work towards a common goal identified up front, IoT or Artificial Intelligence, or, like the gentleman's e-mail security point, a very good example.  If we identify one very early on in the process we may be able to get other people on board.  Thank you very much, good luck with the report, and we'll definitely be there when Wim and you finalize it.

>> Maarten:  Thank you.  There were some comments on the side.  The gentleman next ‑‑

>> (?):  I'm a member of the FIRST board, where I'm in charge of training and education, and when it comes to cultural values and the digital divide, I feel there is a lot that these two have in common.  I deliver training in so-called difficult locations.  What I see is that people have little knowledge or bad technology, but what I often hear is that when we call the big companies or reach out to so-called developed nations, we don't get an answer.  And when I ask in the so-called developed nations or the big tech companies why they don't answer, I get the answer back: well, they always ask such strange questions.  And that, for me, is a sign that we have a cultural gap.  We don't seem to understand each other.  I think this is something we really need to address; we still need to learn each other's language.  I would really like to see this aspect flowing in.  It is not about the West dictating to those people what they have to do and how they have to run the Internet.  It is also not those people telling us you have to do it this way.  We have to find a common way to work on this common infrastructure.

>> Maarten:  Thank you.  And I was just informed we're running out of time.  If you quickly go to the next slide: there is an additional session tomorrow at 1:30 where we'll have time to discuss next year's work.  If you have suggestions or points for discussion, please come there and we'll be happy to address them.  Otherwise please sign up on the mailing list so you can bring up things we didn't have time for; we'll be happy to engage in discussion there.  I would like to hand it back to Markus for a second to give a few closing words.

>> Markus:  Just to echo what Maarten said, we will have this session tomorrow and maybe also unpack the issue a bit.  It is not a binary question of whether we are going to continue as a Best Practice Forum on cybersecurity; it is about making suggestions to the MAG for issues that may also be taken up somewhere else in the context of the IGF, maybe a main session or whatever.  It is tremendous work these experts have done over the year, and it would be good if we can come up with some reasoned suggestions for future work, not just say what we want to do as the Best Practice Forum on cybersecurity because these are interesting issues, but present the issues in a reasoned manner for consideration by the new MAG in the new year.  And we can continue the discussion also on substance if there are questions.  It is good to have a vibrant discussion, and it was an excellent discussion we had.  Unfortunately we have to vacate the room; there will be other people coming in.  Sorry to cut down on the questions, but it's always better to leave while you still have an appetite and are not completely bored stiff, with everything having been said but not yet by everyone.  So thank you all for participating.