IGF 2020 - Day 4 - OF24 Online Safety Technology: Towards a Global Market?

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***


>> KEVIN CUNNINGTON: Hello, everyone, and welcome to the Online Safety Panel.  I'm your chair for this panel.  If I may, I'll spend a couple minutes describing why I'm here today and why I'm passionate about this subject.

I spent many years running the UK's Government Digital Service, and acting as the head of profession for the 17,000 digital, data and technology professionals we employ here in the UK Civil Service.  I've recently taken a new role, in which I meet on a quarterly basis with other European digital envoys to talk about key areas of collaboration.  To give you a sense of those areas: interoperable digital identity, an ethical framework for AI, and a number of initiatives around better data sharing and protection of data.

We all worry about, and work on, how to develop more digital capability within our own civil services and, saving the best for last, we work on making the internet safer.  That's my context for today.  You'll hear from our other panelists that we have all made commitments to move this work forward.  My job is to make sure it's a top priority for the European digital envoys.

I'll give you some background on what we might want to talk about.  Most of you will know that people are generally very positive about the internet: 80% say it has made life better for them.  But there is a dark side to the internet that we've all experienced, whether that's terrorism, cyberbullying, trolling, or the impact on children, and that is what we're here to talk about today.  Oddly enough, technology can help in this space.  A good example from here in the UK: the Home Office has developed a machine learning algorithm that detects terrorist videos on the internet so they can be taken down.  A good example of technology helping to solve the problems technology created.

Last two points.  Safety tech is very much on the rise, growing 35% year on year in the UK.  The sector now employs thousands of people here in the UK and contributes billions of pounds to the economy.  And lastly, safety tech is global.  Perhaps unusually, it seems to be a shared ambition within governments, with regulation emerging across the world.  Actually, around half of UK safety tech companies export their products and services.

And the three objectives for this session today are: one, we want to share the work that we've done to define and measure the safety tech sector in the UK and abroad.  Two, we want to explore the extent to which the challenges and opportunities in the UK reflect those across the world.  And, finally, the third objective: to discuss how we might take this forward internationally by collaborating together.

Our agenda is 25 minutes of panelists speaking, followed by a Q&A for about another 30 minutes.  We'd ask people to put their questions into the Q&A panel on Zoom, if that makes sense.  And then, finally, we'll have a wrap-up session, where we'll get each of the panelists to give us their personal commitments and their thoughts on how we turn discussion into action going forward.  With that all said, it's a pleasure for me to introduce our first speaker, Professor Mary Aiken, who'll be talking to us about online safety.  Mary?

>> MARY AIKEN: Thank you, Kevin.  Today we're going to talk about safety tech.  We're all familiar with health tech and fintech; now we have safety tech.  We know that digital advances drive our economies, and we know they'll enrich our societies.  But to harness the internet's advantages, we must also confront the online threats and harms that can propagate there.

We need to ensure the safety and security of those online, and we need to factor in those who are vulnerable, in order to maintain a thriving democracy and society where freedom of expression is protected.  So online safety technology, or safety tech, is defined as any organization involved in developing technologies or solutions to facilitate safer online experiences and to protect users from harmful content, contact, or conduct.

The online safety tech sector consists of companies whose products or services deliver safer outcomes for users on online platforms, for example through moderation or filtering.  So you may ask: what's the difference between cyber safety and cybersecurity?  Fundamentally, cybersecurity focuses on protecting your information, your data.  Cyber safety, or safety tech, focuses on protecting people.  The point is: your data is never going to suffer from low self-esteem.  Your data is never going to feel the need for revenge.

This is about factoring the human into the cybersecurity and cyber safety equation.  We know it is critical that our networks and systems are robust, resilient and secure.  However, it is equally important that people are psychologically robust, resilient, secure and safe in cyber contexts.  At no time is this more important than now, in the middle of a global pandemic.  So cybersecurity focuses on protecting data and information from cyber attacks; safety tech focuses on protecting people from psychological harms online, everything from abuse to harassment.

In 2020 we conducted a review of online safety technologies.  And the good news is that we found a thriving safety tech sector, which complements the existing cybersecurity industry, is valued in the UK at more than $500 million, and is growing year on year.  Research is currently being funded to investigate the U.S. and the rest of the world in terms of their online safety technologies, and our results will be published later this year.

The good news is that so far there's evidence of an emerging and thriving sector in the U.S. as well.  To wrap up: online safety is a multifaceted problem with no single solution.  It's a global issue, with many countries concerned about its potential for harmful impact on security, health and, importantly, on social cohesion.  Tech solutions are also global in nature, and we want to work together to create a safer and more secure cyberspace for all.

Safety tech, importantly, supports five U.N. Sustainable Development Goals: Good Health and Wellbeing; Decent Work and Economic Growth; Industry, Innovation and Infrastructure; Responsible Consumption and Production; and Peace, Justice and Strong Institutions.  Safety tech can deliver an internet for human resilience and for solidarity.

>> KEVIN CUNNINGTON: Thank you, Mary, now we'll hear from our second speaker. I'd like to introduce you to Professor Simon Saunders, the director for emerging and online technology at the UK regulator Ofcom.  Over to you, Simon.

>> SIMON SAUNDERS: Thanks very much indeed, Kevin.  Hello, everybody.  As Kevin says, I look after emerging and online technology at Ofcom.  If you don't know, Ofcom is the UK's independent regulator of the communications sectors.  As it says up there somewhere, we're making communications work for everyone.  Which probably means that if there are any problems with this online platform, it may be our fault.  I hope you won't hold it against us.

More seriously, the role we have at Ofcom, and my team's role in particular, is to keep colleagues informed of technology developments across all of our sectors.  Traditionally that's meant the broadcast and postal sectors.  In the context of today's discussion, Ofcom has an increasing role in keeping users of internet platforms safe online, while ensuring that those services can continue to deliver the usefulness that Kevin described at the beginning.

That role comes from recently acquired duties, which have appointed us as the regulator of video-sharing platforms based in the UK, but also from the fact that government has announced it is minded to appoint Ofcom as the regulator for online harms for all relevant services consumed by people in the UK.  So we're figuring out what that means for us in terms of the relevant technologies: what we need to know, and what we need to enable, regarding those technologies.

In that context, safety technology, safety tech, is especially relevant.  We need to understand how platforms accept, moderate and distribute content; how they identify which content carries a significant risk of being harmful to users; and how they deal with that content once it has been identified, through one means or another.  So technologies for automated content classification, and for matching against known harmful content, and how those are distributed, standardized and kept appropriate and current, are critical to us.
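
To make the matching step concrete, here is a minimal sketch in Python of the flow described above: new uploads are hashed and checked against a shared list of known harmful content before anything else happens.  The digest list and the `screen_upload` flow are hypothetical illustrations; production systems typically use perceptual hashes (PhotoDNA-style) that survive resizing and re-encoding, rather than the exact-match SHA-256 shown here.

```python
import hashlib
from pathlib import Path

# Hypothetical set of digests of known harmful files, e.g. distributed by an
# industry hash-sharing scheme. A plain SHA-256 only catches byte-identical
# copies; it is used here purely to sketch the flow.
KNOWN_HARMFUL_HASHES = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def sha256_of(path: Path) -> str:
    """Hash an uploaded file in chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def screen_upload(path: Path) -> str:
    """Return a moderation decision for a newly uploaded file."""
    if sha256_of(path) in KNOWN_HARMFUL_HASHES:
        return "block"  # matched known harmful content
    return "classify"   # unknown content goes on to automated classification
```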

Similarly, there's an opportunity for systems that ensure appropriate content only reaches the appropriate users.  That includes things like age verification and age assurance systems, and parental controls, and potentially wider issues of digital identity and how we express our wishes, desires and openness to view various sorts of content.  And, in hopefully only extreme cases, it's relevant for us to understand how platforms distributing content of various forms interact with the internet infrastructure, so that content which hasn't been responsibly handled by platforms, and any harms that may arise from it, can be addressed.

Understanding that tech and how it may evolve in the future is important for us, for example in setting an expectation on platforms of how they're going to exercise a duty of care to keep their users safe.  As the regulator, we'll need to find ways to measure that effectiveness and to give the platforms a clear and informed guide to what level of performance is appropriate at a given time.  That's going to be a tricky thing to do, given the wide range of platforms, the wide range of content and the wide range of user expectations.

As well as the actual capabilities of the technology, however, it's also important for us to understand how easy it is for platforms to access and implement those technologies.  It's no good for us to know that some especially capable platform has an amazingly effective content classification technique if that technique isn't available to any other platform, or if implementing and running it would require resources it's not reasonable to expect, particularly of smaller platforms.  As well as understanding the technical capabilities, it's great for us to see the emergence of a market of independent safety technology providers, and equally of platforms that are prepared to make their technologies available to other platforms, to help raise the potential of the industry as a whole.  So there's a kind of "safety tech as a service" potential that it would be great to see emerging over time.

But, you know, safety tech as a whole is a very new category, and we are certainly one of the first regulators in the world to be creating a specific regime around regulating the online safety of platforms.  So we may not be able, as a regulator, just to watch how this evolves and interpret it; we need to play our part as well.  We're still considering what that actually means.  It could involve actions like regulatory sandboxes to try things out, setting research challenges, and engaging in, and even building, trials, test beds and proof-of-concept projects to really get under the skin of this technology.  We'll probably need to find ways of identifying metrics of success and how to benchmark systems against those metrics.  We may need to play a role in making sample data sets available so that platforms can improve over time, and perhaps in defining APIs for consistent interoperability of those different systems.
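
As one illustration of the measurement point, a benchmark can be as simple as running a classifier over a shared labelled sample set and reporting precision, recall and false-positive rate.  The sketch below is a hypothetical illustration, not an Ofcom specification; the `classify` callable and the data-set format are placeholders.

```python
from typing import Callable, Iterable, Tuple

def benchmark(classify: Callable[[str], bool],
              labelled: Iterable[Tuple[str, bool]]) -> dict:
    """Score a harmful-content classifier against (item, is_harmful) pairs."""
    tp = fp = fn = tn = 0
    for item, is_harmful in labelled:
        flagged = classify(item)
        if flagged and is_harmful:
            tp += 1
        elif flagged:
            fp += 1
        elif is_harmful:
            fn += 1
        else:
            tn += 1
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # how often a flag was right
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # how much harm was caught
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

In practice a benchmark like this would also need to track latency, cost, language coverage and per-harm-type error rates, since a single headline number hides exactly the trade-offs just described.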

A whole host of activities, then, which are not necessarily typical for a regulator to engage in, but which we think might be necessary in this world.  We also need to give some thought to technical standards.  There's a balance to be struck there: using technical standards to provide clarity and a wide market for the providers of safety tech, while ensuring wide applicability of best-of-breed approaches and that the setting of standards doesn't in some way stifle the pace of innovation that benefits users.

Last but not least, what we really need is an international dialogue on these issues.  There's a healthily thriving sector in the UK, which is helpful for us to have locally, but overall we want the best outcomes for users wherever the technology comes from.  So this dialogue, and what we might do as a follow-up to it, is a great opportunity to get the ball rolling for ongoing international discussions.  I really look forward to those.  Thank you, Kevin.

>> KEVIN CUNNINGTON: Thank you, Simon.  I see we've had the first question in the Q&A.  A gentle reminder to start filling out the Q&A, otherwise I'll end up asking all the questions.  Before we do that, I'd like to introduce our three final panelists and ask them to introduce themselves and speak in this order: Ian Stevenson, chair of the UK's Online Safety Tech Industry Association; Roni Gur, vice president of marketing for L1ght; and Deepak Tewari of Private.ly.

>> IAN STEVENSON: I'm chair of the Online Safety Tech Industry Association, which I helped to form after my own company started moving into the online safety world.  I felt there was a gap: a need to bring the online safety industry together, not just to talk amongst themselves, but also to communicate with policy makers and customers, and to contribute to the public debate.

We really want to offer a voice of hope, because too often the conversations about online safety are characterized by those who want change, those who want the world to be better, and those who are telling us why it's too difficult or too expensive to actually achieve that.  I think technology can bridge that gap.  Members of the safety tech association, and other companies, some of whom are here today, already produce products and technologies that can solve individual parts of the problem in really compelling ways and help move things forward.

One of my observations coming into this sector is that everybody in this industry is motivated by creating safer experiences online: by keeping people safe, by creating psychological safety.  Our businesses are driven by safety as well as by profit.  The online safety sector is developing really rapidly, and often businesses, especially small companies, startups and spin-outs, are the sources of innovation that can drive that sort of growth.

For innovation to be effective and well-directed, we need to be part of a much wider community that determines what we as a society globally want technology to do: both what we want it to do, and what regulators are going to ask of it and legislation is going to force to happen.  So we need to work together to set standards, develop metrics for success, and create environments for training and testing technologies with real-world data.  I'm delighted that, on a number of my points, Simon Saunders from Ofcom has stolen a march on me, which shows the alignment we're developing in the UK.

I think it's important as well that, as a community, we're collaborating with those who safeguard other rights.  Whilst I think freedom of speech and privacy are often used as excuses for inaction, there are genuine conflicts between online safety, freedom of speech and privacy, and we need to have that debate.  Businesses and investors can only invest in creating solutions when it's clear there's a market for them.

Clarity on what regulators will be looking for, and support from governments and users of safety tech, will all be vital to getting the investment into the sector that we need for it to grow.  And policy and regulation need to be innovation-friendly.  I'm going to repeat some of what Simon said here.  That can be accomplished in part by focusing on the outcomes of the application of safety tech, rather than on what the technology is, because that leaves scope for new inventions to be created; and by recognizing that what a regulator should be asking platforms to do today may well be very different from what it should be asking in two years' time, because the world will move on and new technology will become available.

So we see this as a very dynamic world, where regulators set today's baseline and stretch goals with an expectation that next year the stretch goals will have become the new baseline for what people should be achieving.  That also means balancing standardized APIs against the need to enable and support innovation.

We exist to facilitate conversation and collaboration.  We focused our initial activities on the UK simply because you have to start somewhere, but this industry is international by its very nature.  We want to be part of the global conversation and community, and I think events like today are a fantastic opportunity to get these conversations and collaborations started.  I forget who I'm handing over to.

>> RONI GUR: I think I'll grab it; you won't need to hand it over.  Hi, everyone, my name is Roni Gur, and I'm VP of marketing for L1ght.  Our AI technology is able to detect and eradicate online toxicity such as hate speech, bullying, shaming and child sexual abuse material.  We're U.S.-based with a center in Tel Aviv.  I'm happy to be part of this panel, and I'd like to reflect on a few challenges and opportunities.  I think there's a similarity between all of the panelists, a common voice being heard, and I'll get to that in a second.

Let's be very honest: the safety tech industry is very, very young, and big tech has progressed very rapidly, leaving us maybe just a little bit behind.  This is no surprise, as social networks and big tech embrace narratives like "move fast and break things".  If that's the case, then as a safety tech industry we need to acknowledge how rapidly online harms spread, and we need to adopt a motto of our own.  If I were to suggest one, and you've heard it from the other panelists here as well, it's that we have to do better, and we can only do that together.  That sounds very, very cheesy, so I'd like to try to explain why I think it's also very true.  Safety tech is new, it's young, and it's up against vast amounts of online danger.  But we also need to acknowledge that government and the private sector can work hand in hand and complement each other: what governments can't do, private industry can, and vice versa.  And I think that together we can get creative about how this ecosystem should grow and what governments can do to make it grow.

So I thought about three main points.  The first is to understand the ecosystem; you've heard this from other panelists.  The safety tech ecosystem has various players, right?  It has users: adults, children, me, you, anyone who is online.  It has regulators.  It has safety tech providers like myself and others on this panel.  And it has the platforms themselves, where content is being shared and people are being abused.

And I think it's interesting that on a panel discussing online harms there are people from academia, policy and safety tech providers, but not from the platforms.  While they should be part of the conversation, it's also important to distinguish between them.  When we say platforms, we tend to think about Facebook and Twitter, and maybe even point fingers.  What I suggest is that we recognize there are the bigger platforms, but there are also small and medium platforms.  They both host online harms, they both want to eradicate them, and they both need a cost-efficient way to do it.  If you are a smaller platform, you need a cost-efficient way to plug into a safety tech provider; you need to be aware that it exists, and you need it to be helpful for you.  The bigger platforms like Facebook or Twitter can develop whatever they want, but then only they have it, or only they have access to it, and usually platforms are simply overwhelmed by what they find, meaning they don't really want to find it or deal with it, because it's not their core product.
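
As a sketch of what a cost-efficient plug-in might look like from the small platform's side, the snippet below posts user content to a third-party moderation service.  The endpoint, API key, request fields and response schema are all hypothetical; real safety tech providers each define their own contracts.

```python
import requests  # pip install requests

MODERATION_URL = "https://api.safety-provider.example.com/v1/moderate"  # hypothetical

def is_harmful(message: str, api_key: str) -> bool:
    """Ask a (hypothetical) safety tech provider whether a message is harmful."""
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"content": message, "categories": ["bullying", "hate_speech"]},
        timeout=5,
    )
    resp.raise_for_status()
    return bool(resp.json().get("harmful", False))
```

For a small platform, a call like this replaces building and training in-house models, which is the economic point being made here.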

So I encourage governments to understand the need to move together: by mapping the safety tech ecosystem, getting familiar with the technologies out there aimed at creating safer online experiences, and understanding that there are different players with different needs.  I'd also encourage two other things.  The first is to incentivize; excuse the very American phrase, throw some money at it.  Safety tech needs to be incentivized, whether that's funding academic research, subsidizing safety tech development, government or industry grants, or sandboxes: anything that makes it worthwhile to be in an industry that is up against the major forces creating online harms.

Which leads me to my last point: it can't be so easy to create online harms but so difficult to eradicate them.  I'll explain.  Regulation is really important; we all know this.  But we also know it takes a long time.  So I suggest certifying, if not legislating: for example, codes of conduct or permits for certain types of work.  We've been in business for two and a half years, we've raised $50 million, we have partners who are law enforcement agencies, and still we have trouble with certain content that can't be uploaded to certain servers: Amazon, Google, things like that.  That makes it very difficult to develop.  And there's still the fact that child sexual abuse material is hard to tackle and eradicate because of its nature and because of varying rules in various countries.  So if regulation takes a long time, and we do need to start walking that path, we also need short- and medium-term goals like certification.  To conclude, there's a lot to do, and I'll hand it over.

>> DEEPAK TEWARI: I run a company called Private.ly in Switzerland.  We aim to keep kids safe online, giving them the means to be protected, and to be part of their own safety, through AI that works on their own phones or within the apps they use.

The angle I'm going to take in this conversation is more about our experience of internationalization.  Even though we were a Swiss company, the UK was the first market where we got attention and traction, because there was already a conversation happening in that market which we found nowhere else.  I can testify that it came from the UK: having developed our first customers there, we've been able to show what the technology is capable of to people in the rest of the world, and to grow from there.

For us, the UK, especially in the context of this conversation, provides a sort of thought leadership: a place where different safety tech companies can share what they've been able to do, and where regulation is headed.  Regulation is another big part.  Talking to regulators in Italy or Switzerland or other places, I think there's a very big difference in perception of what constitutes online harm.  Where does the liability lie?  Is it with the telecom operator?  Is it with the platform operator?  The answer is very different depending on whom you ask.

So there is no clarity.  If you're developing, for example, a consumer product, parents tend to think the platform is taking care of it somehow: it's Facebook's or Instagram's problem.  So very clearly, for this sector to grow, there has to be an established duty of care, including for the telecom provider.  What is their responsibility in the online harms discussion?  Not only for young people, but also for older people.

The other thing, one perspective: our company is called Private.ly, and something peculiar, or typical, about us is that we've focused on trying to bridge the gap between privacy and safety.  Platforms might say they provide safety solutions but cannot guarantee your privacy.  There is a tension between privacy and safety, because to provide safety measures you might need to violate someone's privacy.  That's the dilemma.  I'm here to tell the panel and the people listening that we've spent the last few years developing tech that sits on the phone and is super private, and we're not the only ones.

What regulators and others must know is that the ability to identify harms exists, and it can be exercised pretty privately, without violating anyone's privacy.  So if the platforms are saying this is not possible, it's because it's not core to their business case: they do not want it to get in the way of the user experience.
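
A minimal sketch of the on-device pattern described here: the model runs locally, the raw text never leaves the phone, and only a coarse verdict is acted on.  The `model` object and its `predict` method are hypothetical stand-ins for an embedded classifier, not Private.ly's actual product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocalVerdict:
    harmful: bool
    category: Optional[str]  # e.g. "bullying"; the message text is never included

def classify_on_device(text: str, model, threshold: float = 0.9) -> LocalVerdict:
    """Run the classifier locally; raw content is never transmitted off the device."""
    scores = model.predict(text)       # hypothetical on-device inference call
    top = max(scores, key=scores.get)  # highest-scoring harm category
    if scores[top] >= threshold:
        return LocalVerdict(harmful=True, category=top)
    return LocalVerdict(harmful=False, category=None)
```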

Where we go from here is very important because, as we've discussed in this panel, for innovation you have to look at the smaller tech companies.  These are the people whose core, bread-and-butter business is building these technologies.  How do we bridge the gap between what we're producing and making sure it is implemented, taken up and promoted onto the bigger platforms?  Or at least make sure that what exists becomes available and known to everyone.  For example, the ability to identify kids within a game in real time exists.  Is it implemented?  No.  Where is the gap?  It's somewhere in the business model.  It's somewhere in the regulation.

>> KEVIN CUNNINGTON: The perfect segue to get into the questions.

>> DEEPAK TEWARI: Thank you.

>> KEVIN CUNNINGTON: We've got three questions, but I thought I'd start with one of my own, if I may.  Hopefully the panelists can all see the questions; can you?  That gives you a chance to swot up.  There's one specifically in there for you, Simon.

If I may, I'll start by asking Roni and Deepak: do the challenges we've identified in the UK reflect those of other countries?  I know you both touched on that, but could you expand on it just a little?

>> RONI GUR: Sure.  All right.  Thank you, Deepak.  I think it's really interesting to look at the UK as kind of the prototype where safety tech really emerged.  I think it has to do with two major opportunities that the UK saw and really grasped.  The first one, sorry to be a lawyer, is regulation.  The online harms white paper made so much room and space for dialogue around online safety that even though the regulation has not passed yet, it has still created a bed for these ecosystems to thrive.

I think we're also seeing buds in Australia, and I think it's a really good place to start and to see where the opportunity lies.  The second opportunity is the mapping of the ecosystem, the networking, because, as Deepak says, a lot of it has to do with the fact that sometimes the regulator doesn't know what technology can be used, and therefore does not enforce its use where it needs to be enforced, or does not incentivize its use.

And I think the UK lit the match on both the regulation and the mapping.  While it has made a lot of headway, there is maybe still a little to be done around incentivizing and data access, but it is definitely down the road, and very inspiring for the international community to take a look, learn, and achieve those things as well.

>> DEEPAK TEWARI: Did you want to say something?

>> KEVIN CUNNINGTON: I was trying to introduce you.

>> MARY AIKEN: I wanted to make a point around online harm.  I work worldwide, and the UK is absolutely leading the world in terms of conceptualizing online harm as a spectrum: a spectrum of harm.  Everything is connected.  We can't continue to look at problems in silos: cyberbullying over here, harassment over there, mis- and disinformation somewhere else.  Everything is connected.

When we join that up and start conceptualizing online harm as a spectrum, then we can start conceptualizing frameworks for how we're going to tackle it.  The brilliant news is that, while we talk about online harms in the UK, in the U.S. a bill has just been proposed, in May, called the online harms bill.  I am just delighted to see this, and I want to see more countries jumping in.  Maybe that's something we can work towards as a group: establishing this premise of online harm worldwide, with a view to addressing it.

>> DEEPAK TEWARI: Just to add to that, Mary: there's also the question of leadership in regulation, and I think that part is missing elsewhere.  The whole notion of duty of care, and of where liability lies in the wild west, so to say: the first attempt at that was made in the regulation which came out of the UK.  In the countries I'm working in now, there's talk about that, and there is progress to be made there.  Otherwise, it's difficult to say who should be responsible for addressing online harm.  It should not be just a corporate social responsibility (CSR) item; unfortunately, that is what it ends up being.

>> MARY AIKEN: We have to think about the Global South, we have to think about developing markets, and we have to think about how we transpose learnings from different countries and different sectors to those developing markets.  That's our collective responsibility.  That's our duty of care.

>> KEVIN CUNNINGTON: Very good.  There are a number of questions in the Q&A.  To give us a bit of a breather: Simon, there's one specifically in there for you.  I don't know whether you've had a chance to read it.  Would you mind recapping the question and providing an answer?

>> SIMON SAUNDERS: With pleasure.  The question actually covered two related but distinct points.  Given the very wide multiplicity of social media and gaming platforms, the large amount of traffic and the range of devices, the question says it's almost impossible to have a consistent and protective regime for all.  It's definitely challenging.  I'd refer back to what Mary was saying about how you think about what online harms might embody: rather than looking for something very tailored to every individual platform and every individual type of content, and defining in microscopic terms what harmful or, indeed, what good looks like.

I think a more principles-based approach to the kinds of harms we're seeking to mitigate, with a range of how egregious those harms might be, from unquestionably harmful and often illegal content through to harms that are socially concerning, and adapting according to that.  So a more principles-based and outcomes-based approach helps with that.  It doesn't make it easy, but it helps.

The second part of the question said that safety tech providers are keen to explore protections via proofs of concept with internet service providers, who act as the funnel for these things, and asked what appetite there is for those ISPs to work with safety tech providers.

I'm not so sure about the premise.  Clearly ISPs have an important role in delivering content, and can be a point of mitigation, probably in the more extreme cases where, as I mentioned in my introduction, the platforms haven't exercised a duty of care appropriately.  But ideally this protection would happen at source, and the content wouldn't find itself on an ISP network at all.

In terms of their appetite to work with safety tech providers, I'm sure Ian and other members would argue they are already providing their technologies to ISPs, and indeed to the platforms themselves.  It tends to be behind the scenes, for understandable reasons.  But certainly, making sure there's a good understanding of the technologies that are available, and of the opportunities for ISPs and others to make good use of them, is part of the good work that's been going on in the UK.  Forming a safety tech industry association, and even highlighting the existence of safety tech as a sector in the first place, really helps to draw attention to what's possible there.

>> RONI GUR: Can I add something to what Simon just said?  I think the appetite of ISPs, internet service providers, has become bigger for various reasons.  It could be profit: an abused user doesn't return to a platform.  It could be something more cynical, like not wanting PR scandals, or whatever it is.  And to be honest, the biggest change we've seen: we work a lot with web hosting providers, right?  At their core, they really don't want to be hosting child sexual abuse material.  It's not good for them: not for their brand, not for their board, not for anyone.

I think once they open up to technologies that can very simply help them identify content and notify the proper authorities, and they see how easy and seamless it is, they become more open to it.  And with COVID, sadly, we saw an uptick in online harms: online abuse and child sexual abuse material being spread around.  I think ISPs are beginning to understand that they are kind of the gateway, and that they have the opportunity and the power to make sure it doesn't spread more than it has.  There definitely is a shift, and I hope it continues.

>> KEVIN CUNNINGTON: I was going to invite you for the next question but you're welcome to go for this one.

>> IAN STEVENSON: The traffic that goes over ISPs is very often encrypted, which limits their ability to play that role.  I agree that aggregators such as hosts, cloud platforms and CDNs also have a role to play here.

>> KEVIN CUNNINGTON: And then I see, Deepak, you responded to Neil in the Q&A.  Do you want to give us a brief recap of Neil's question and your answer, if that's okay?

>> DEEPAK TEWARI: Definitely.  Neil was posing a question about the problems posed by encryption technologies.  As we know, a lot of online harms, image-based online harms in particular, are on account of self-generated imagery, "stemming from bathrooms", so to say; that's the way he put it.  His point was that with end-to-end encryption becoming the rule, that whole detection effort gets weakened.  What is the way around it?

I mentioned to him that there are technologies, including ones we have developed, and of course SafeToNet, the company he represents, has developed technologies like this, which can detect immediately whether an image coming out of the camera contains nudity, and can interject, warn the child, or do whatever is appropriate with that information in real time.  Of course, that doesn't mitigate the entire problem.  It will only be fully mitigated when platforms, and by platforms I mean device manufacturers, are willing to integrate this technology on the phone, on the camera itself.  But until then, yes, there will definitely be a gap in the market.
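
The pattern described here can be sketched as a local check in the camera pipeline: each captured frame is scored on the device before it is saved or shared, which is why it still works when the transport is end-to-end encrypted.  The `nudity_model` passed in is a hypothetical stand-in for an embedded detector; this illustrates the approach, not any vendor's implementation.

```python
import cv2  # OpenCV: pip install opencv-python

def capture_with_safeguard(nudity_model, threshold: float = 0.9) -> bool:
    """Grab one frame from the device camera; return True only if it is safe to keep."""
    cam = cv2.VideoCapture(0)  # default device camera
    try:
        ok, frame = cam.read()
        if not ok:
            return False
        # Hypothetical local inference call; the frame never leaves the device.
        score = nudity_model.score(frame)
        if score > threshold:
            print("Interject: warn the user before the image is saved or shared.")
            return False
        return True
    finally:
        cam.release()
```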

>> KEVIN CUNNINGTON: Very good.  The final question is from Andrew.  I might ask you to have a crack at this, Ian; you've had less airtime than some of the others.  What do panel members think is the best way to ensure we have a joined-up international approach to online safety tech that delivers for users?

>> IAN STEVENSON: I think it's really important that the progress being made is debated internationally.  I agree with Mary: the UK has done fundamental work in understanding and defining harms, the spectrum of online harms, the different types of online harms.  If we could gain international agreement behind that, or international input to create an even better version of it, that would be a really strong starting point for making the conversation about online safety global.  It's really hard to talk about online safety until you have agreement on what the harms are that you're trying to mitigate.

>> KEVIN CUNNINGTON: Perfect, thank you.  We're running a little ahead of time, so if there are any final questions, now is your chance.  I think what I might do is take a little longer and ask the panelists to talk about their personal commitment to this subject going forward, and, just as importantly, how we turn this conversation into actions that help us achieve the goals we've talked through in today's session.  Who would like to go first?  Sorry, I'm just assuming someone would.

>> IAN STEVENSON: Look, through the Online Safety Tech Industry Association, we are committed to being part of this conversation.  I'd encourage everyone, whether you're in the UK or not, and whether you're in the public or private sector, to look us up, to engage with us, and to look at the events we're participating in.

You know, our commitment is to try to make industry collectively a constructive partner in these conversations globally, so that if we were to reconvene at a meeting like this in five years' time, we would be looking back and talking about successful deployments of practical, innovative technology addressing some of the key harms.  I'm sure we'll still be talking about the missing pieces of the jigsaw puzzle as well.  That's the point I'd like to get us to, and we really want to be part of that conversation; we're open to engaging with anyone and everybody who wants to join it.

>> KEVIN CUNNINGTON: Mary, would you mind?

>> MARY AIKEN: For me, as a cyberpsychologist: we've been sleepwalking our way into an age of technology, adopting each technology like lemmings following one another off a cliff.  For me, it will be progress when we can mitigate its harmful effects.  Ian's point about working towards an international standard for online harm is a really good one.  We're doing some work at the moment with Ofcom, with Simon's group, on taxonomies of harm, and we're also considering how those taxonomies apply in a developmental context.

We're all familiar with the stages of child development: at six months an infant should sit up; at five they should be capable of certain operations.  But there are no stages of cyber-cognitive development.  At what age do you give a child a smartphone?  How do kids get the best out of technology?  I think that area, in a global context, is incredibly important.  The internet was founded on the construct that all users are equal.  This is not the case.  Not all users are equal: some are more vulnerable than others, and children are particularly vulnerable.  I'm committed to creating a more secure cyberspace.

We've touched on this point: in a civil cyber society we should have three aims.  One is privacy.  Another is the vitality of the tech industry.  And the third is collective safety and security.  The point is about balance: none of these aims should have primacy over the others.  That's the road map; that's the way we'll achieve best practice going forward.

>> KEVIN CUNNINGTON: Deepak?

>> DEEPAK TEWARI: I'm approaching this from an engineer's and technologist's point of view.  Whatever we develop, we want to be able to measure its impact.  Of the tech we're developing today, we can tell you that about 60,000 kids are using it and that it has produced a certain measurable reduction in online harms.

So for all the efforts we're making now, my goal would be to see what impact they create.  Are they able to reduce online harms as we know them?  Are we able to identify situations, mitigate them, and make kids more resilient?  Technology for technology's sake is not great; it matters when it results in measurable impact.  So my personal commitment is that for everything we do, we will come back and be able to demonstrate the impact each of these technologies has on the wellbeing of kids.  By definition, that would mean you could replicate that impact elsewhere, so it becomes a case in point.

>> RONI GUR: I would love to piggyback off that and share that every board of directors meeting we hold starts with our number one goal as a company: how many lives did we save?  Only then do we talk about profit and customers and all the rest.

I think that's unique for a private company.  We want to create an open, collaborative community, even if it's in the private sector; we want to collaborate in that way.  And our commitment is this: we will show up, we will open our books, and we will commit to working together and contributing what we can, which is mainly technological but could also be resources or ideas.  Because, again, we have to do it together.  When one thrives, everyone will.  So that's our commitment.

>> KEVIN CUNNINGTON: Thank you, Roni. Simon, if you may?

>> SIMON SAUNDERS: It's always tough when everyone else has spoken.  I'm happy to take my turn on that one.

Listen, in terms of commitment, the key thing I can offer at this point is dialogue.  We're hungry to hear more from the providers of this technology.  Deepak, I'd love to see the technology you're working on, and Roni, examples of where you've made a difference to people's lives.  Tell us how we can think about measuring those outcomes, what an appropriate way of measuring would be, and your views on what the state of the art looks like today, what could change over time, and indeed what we could do to shift that gradient upwards in terms of where the bar could reasonably be.  That dialogue is important to us.

It's important to us that that dialogue is truly international.  The platforms and the sources of content we're dealing with are international, and I'm quite certain the innovators of this technology are all around the world as well.  So we'll continue to engage very deeply with our home-grown providers, facilitated by Ian and our colleagues in government, but we'd like to hear from the rest of the world as well, and I commit to listening to them.

>> KEVIN CUNNINGTON: I see we've had a couple of late-breaking questions.  If I may, I'll defer those and ask the moderators to find you answers based on the panelists' views.  To sum up: we set out with three objectives, and we wanted to understand whether there was agreement that these are global issues.  I think that came through perfectly clearly: these are global issues, and we need international collaboration to be most effective in dealing with them.  A very productive workshop.  It only remains for me to thank the panel and to relay their genuine sense of commitment; you can see from each of them how committed they are to making this a better place.  Thank you to the panel for their attendance today, and thank you all for attending.

>> MARY AIKEN: Kevin, can we take it we have a mandate to go forth and to do good?

>> RONI GUR: I'm on Mary's team.

>> DEEPAK TEWARI: Thank you all, bye bye.

>> RONI GUR: Thank you so much.

>> MARY AIKEN: Thanks to the participants and our attendees for listening in, thank you.