The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> KRISTIN LITTLE: Hello, everyone! I believe we will begin our session now; it's on the hour. So I wanted to say hello from wherever you might be tuning in. We are here for this IGF 2021 20-minute lightning talk entitled "Rebuilding Trust: A Path Toward More Holistic Cybersecurity". My name is Kristin Little, and I'm here with Nishan Chelvachandran, founder and CEO of a company called Iron Lakes, which empowers businesses and civil society to address and solve challenges through innovations in AI, spatial computing and, of course, cybersecurity.
He also happens to be the Chair of one and Co-Chair of another of two IEEE Standards Association Industry Connections programmes, which are programmes that bring together people to discuss various issues prior to possible standardization. One of those is called Trustworthy Technical Implementations of Children's Online and Offline Experiences; a really short name, that one.
And the other one is AI-Driven Innovations for Cities and People. Through this lightning talk we welcome your questions and comments; just put them in the chat or raise your hand. We will be keeping an eye on that.
And now, Nishan, hello. As a former U.K. police officer and a high-level cybersecurity advisor to the U.K. Government, you have years of experience with cybersecurity operational activity in the U.K. public sector. I wonder if maybe you could give us a brief description of what is meant by cybersecurity and what that encompasses, and maybe give us a glimpse, a postcard, of where this issue is going.
>> NISHAN CHELVACHANDRAN: Good morning, afternoon, evening to everyone here. I liked the intro video; it put what I wanted to say very aptly, so perhaps I'm redundant in what I'm going to be talking about now, thinking about trust and so on. As you say, I have got interesting experience, and I liked how you said it just so happens that I'm doing this work with IEEE. I think it was a bit more by design than coincidence, of course. But as you said, cybersecurity is a term that is thrown around more and more, and I think a lot of people perhaps know of the term now compared with five, even ten years ago.
If you had said cybersecurity then, people may not have known what it was, and I think that element of unknowing is still there. So traditionally people refer to or think of cybersecurity as what I would define as IT security: the security of any technology or IT deployment. Usually you think of IT security and then you think of antivirus software and hacking and all of these things, which, of course, are an important part of cybersecurity.
But actually, at least in my definition and the broad definition of cybersecurity, what we are really talking about with "cyber" is the intersection between technology and humanity. So whenever there is any kind of interaction between those two spheres, if you want to define them that way, the cybersecurity element is securing those interactions.
So whether that's through a technical security implementation, with the various encryption protocols or whatever it is, in securing data, but also looking at legal and governance frameworks and accountability and all of these things to define how these processes are used. So not just securing the technology, but how it is used, why it is used, what the technology is actually doing itself, and securing the individual's use of it.
So if you are going to be using a particular solution or platform or service, then same thing: how do you access information? How is your information stored? Who uses your information, and so on?
As you mentioned, with two of the particular initiatives I'm involved with at IEEE, one being AI-Driven Innovations for Cities and People, again, that very much touches on that. How is AI being used in the cities and communities context in terms of delivering public services and that kind of thing? Not just how cities are deploying it, but looking at best uses and also failures, and how do we learn from those to create a unified and standardized approach for delivering those services. And then there is the trustworthy technology implementations programme, which we should come up with a shorter name for; perhaps the audience can suggest something a bit catchier. The idea of that is that it's a very broad programme, really looking at child-centric or young-people-facing technologies.
As you can imagine, that already ranges from education to gamification to the use of a device itself. We are talking about online and offline, so it could be that there is an interaction with a technology that doesn't necessarily require an Internet connection, but being in a physical space where there is use of a technology or something on the back end.
And we are going to be doing some collaboration with some research institutions, and we are looking to get more people on board with that. I could spend the whole time talking about that project because it's very detailed, so I will put a pin in it for now.
I hope that answered your question. I feel like I went into cybersecurity generally and may not have answered it fully. Oh, and looking back at the postcard, the glimpse of where we are at:
So as I said, cybersecurity being quite overarching, it has really curved upwards even in the last ten years, when we look at the use of devices, how people use them and the number of devices people use, but also looking beyond that at how technology is being used by Governments or even private entities.
The algorithmic use and processing of data for decision making, how certain services are being delivered to individuals, how data is being collected on individuals for whatever reason. That's not just from a public service point of view, but thinking of your consumer interactions, for example. And I think the pandemic in particular has really expedited that. I don't want to say brute-forced, but a lot of people and a lot of systems that perhaps weren't fully integrated or fully adopting the use of technologies now are, or have, and a lot of things that were in their infancy, whether it was remote learning or remote working, are now very much mainstream.
But, of course, that brings a lot of other challenges. So it's a bit of cat and mouse, really. Usually the governance and security side is chasing the cat of technology. Or the mouse of technology, because the cat chases the mouse; I'm getting my metaphors mixed up. But yes, we are at the stage where, now that we have this real high level of impact from technology on society, we really need to bridge that gap and bring the governance, the standardization and the trustworthiness at least in line with the design and deployment of the technology, rather than deploying the technology, seeing where it's failing, and trying to fix it with a Band-Aid solution afterward.
>> KRISTIN LITTLE: I see. Okay. Great! You definitely did answer the question, thank you. And that's interesting, how you need to start with the design of things rather than putting on Band-Aids. I'm wondering what really keeps you up at night about this. What worries you about where we are going with cybersecurity, maybe especially in relation to children, given that you are heading up the group; maybe we can call it "trustworthy tech for kids", I think it was something like that.
What are you worried about happening potentially?
>> NISHAN CHELVACHANDRAN: I don't want to be the naysayer or the harbinger of doom, because, of course, I do believe that technology can be a force for good. But as with most things there is a dichotomy, and for all the good that technology can bring, there is perhaps an equal measure of not-so-good that it could facilitate. So what keeps me up? I think it's really running before we can walk.
I am all for lightning levels of progress, especially when it comes to advancing progress in the Global South and for underrepresented groups, and the UN SDGs and the really important things we need to be focusing on in the world, in my opinion. But if we run before we can walk... As I said, we tend to have this technological solutionism approach: we find or create a new technology and we try to stick it in somewhere.
Like I mentioned before, we deploy it, it runs really well, and usually only when we start seeing failings do we try to figure out what the problem is. What scares me with that is that traditionally it may not have been as bad a problem; if we are talking about certain data sets and databases, information might be leaked, but in the grand scheme it doesn't impact you that much.
Now, what we are talking about, on the precipice of full virtual presence and the conversations about the metaverse, is Governments and agencies having full data sets of personal biological data and all of that sort of data, not to mention things like remote learning and the use of algorithms to determine what information is being delivered to individuals.
So if we are relying on remote learning, and certain things are being delivered to our children or young people, then who is governing the delivery of that information, and what sort of information is being delivered? And, of course, what of the accuracy of the information? Everyone is familiar already with disinformation and that concept.
That really worries me. The train could run away from the station, and at the moment we don't necessarily have adequate brakes.
So for me it's about taking a moment and not just thinking about how we can do things better, but actually doing it. Of course, it's easy for me to preach and say we need to think about what we are doing, and a lot of these challenges aren't easy or simple, but that's also why I think we really need to approach this in a transdisciplinary way.
And linking to my point earlier about cybersecurity being holistic: cybersecurity has stemmed from technology, and a lot of people like myself who are engineers and technologists know the technology side of things. But if we are thinking about the intersection with humanity, we need to be including anthropologists, psychologists, teachers, policy makers; everyone, really, from across the spectrum, to coalesce, figure out what the problem is and look at things from a different perspective, especially because the technologies we are deploying are being used, or at least will be used, by everyone, not just a small subset of society.
>> KRISTIN LITTLE: Got it. Okay. So you are worried about running before we can walk, essentially: going through things too quickly, making things that then need to be fixed later, and not actually including the many people in the discussion who need to be there to discuss these issues. So if you could have, like, a perfect world, what could make the situation better right now? What would you propose? Maybe an example of something that's happening that you know of? I know you have so many examples of things you have worked on and experienced.
>> NISHAN CHELVACHANDRAN: As I said, a perfect world, utopia, I think that in itself is a dangerous concept. I think the pursuit of it is perhaps where we should be, as opposed to actually being at that end point. That said, I have touched upon it before when it comes to collaboration, and especially thinking about it from a children's perspective: a lot of the products, services, platforms, games and education platforms, whatever it is, that are being used by or designed for kids are just that; they are being designed for kids. It's usually an adult, usually someone like me, who has an idea of what a child might want or use, and it's rare that the child or young person is actually involved in that process and that evolution.
So redesigning the design process is one good example of that. Really rethinking how we create something that is not just fit-for-purpose but a bit more future-proof. How do we engage with the different stakeholders, as I mentioned before?
How do we bake in security principles and processes at the design stage, rather than applying a Band-Aid solution after we have already built whatever it is we are building? So, as you mentioned, from an IEEE perspective the examples are programmes like the Industry Connections programme, because that brings together these stakeholders to ask these tough questions and to work on what this could look like, because a lot of this stuff hasn't been designed yet.
It's very conceptual, very abstract. How do we take the abstractions, whether from research or case studies, and create a framework that is buildable? This work can lead to standardization work, which governs and steers industry.
What is important as well is to ensure that when I say collaborative, it truly is collaborative. It's not just a matter of IEEE and heretics like myself rattling the cage and demanding change. This also comes from Government, from stakeholders, from NGOs who are advocating for the various different topics, but also from the market and the industry: the companies that are working in this space.
Without being too controversial, I think the industry generally has a tendency to turn a blind eye, or at least to conveniently take the path of least resistance and find the loopholes or the gray areas in certifications or compliance or regulations in order to deliver or create a particular product.
Terms of service, or terms and conditions, are a perfect example of that. I can count on the fingers of one hand the number of people who actually read through all 45 pages of the terms and conditions for a music streaming platform. And that links to the element of trust, right? Because a lot of the time you think, well, it's from this brand or this manufacturer, or it's doing this thing, so what's the worst that can happen? Of course I'm going to agree to that.
And that, I think, is exploitative, because why should you need to hire an attorney to go through terms and conditions to help you figure out whether or not you should be using a service? It should be written in a plain, understandable way. So, really, if there were a gift, or, as I say, in this pursuit of utopia, it would be that: just resetting, recalibrating, going back to plain, straightforward, transparent ways of doing things, where people make things understandable. Not necessarily explainable, because a lot of the technologies are difficult to explain unless you have various degrees, but understandable in terms of how the data is being used, who is using it and why.
I think a lot of people are quite happy to share their data if they actually know who is using it and why. A lot of the time that doesn't happen.
>> KRISTIN LITTLE: Okay. That's fascinating. It reminds me of the food labels that someone mentioned before, which we could have instead of the 45 pages. I realize we are quickly running out of time, so I wonder if there is one very short, even Tweet-sized wrap-up that you might want to give to put this all in a nutshell. Oh, I'm sorry, can I just interrupt for one second? I see that Nelli has a question: do you think that there is a shift in how we think about distinguishing and connecting security and safety concerns? Maybe we will end on that. I'm sorry I didn't see that earlier.
>> NISHAN CHELVACHANDRAN: That is a good question, and there is a shift. I think the biggest shift is actually in how we think about the security and use of our data. Before, we always used to talk, and we still do, about privacy, protecting our data and anonymity; not letting people see what it is that we are doing. It's safe to say now, with the current deployment of technology, that agencies, Governments, public entities, companies, whoever, have our data, so the shift is from privacy to agency.
So it's not about protecting or anonymizing the data, but about governing who uses it, and why and how, and thinking of consent. So, yes, if a Government or a private entity has my data by whichever means, through my interactions with that service or solution, that's fine. But then it's about me giving my consent to use it, and that consent still being available throughout the process. It might be that later on I decide that actually I don't like this company anymore, or I don't like this service or this particular element, so I withdraw my consent for the use of my data in that type of deployment.
Right now it's binary: yes or no. Do you agree? Yes. And the floodgates are open; they take the data away and do whatever they want with it. So there needs to be nuance. That, I think, is where the biggest shift is. There are a lot of other things we could probably go into around security and safety, but once we fully accept that mindset, I think the tone of the conversation and also the focus changes, because, again, this links back to the wider cybersecurity issue: it's not just a technology deployment anymore; it's beyond that.
It's looking at governance, and the how and, more importantly than anything else, the why. So that isn't really a Tweet-sized answer, but if I were to give you 150 characters or less, I would say transparent trustworthiness is what I think we need to have.
Transparent collaborations between multiple stakeholders and parties, going beyond vested interests and including the uncommon and underrepresented views within the discussion. It's not a problem to have naysayers as well as yea-sayers in the room. There shouldn't be echo chambers of everyone agreeing to design something.
We need to have conversations not about whether we can design something, but whether we should, and if we should, which a lot of the time I believe we should, then how do we do it in a way that's fair, transparent, representative, secure and safe? That is really where we should be at, and I don't think enough of that is done.
We have a lot of platitudes and declarations of how great things are going to be, or how we are going to do this, that and the other. And then we get the white paper and nothing happens after that. So, yes, we really need to make things tangible.
>> KRISTIN LITTLE: Well, thank you so much. It was excellent listening to you and hearing about this. It seems that trust is incredibly important, and we can build it by designing together, bringing stakeholders together, having straightforward consent and having everything be transparent and clear, even if it's not everyone agreeing with everyone else.
So really, thank you so much, Nishan. This has been a very interesting conversation. Thank you to everyone who came, and especially thank you, Nelli, for your question. We look forward to seeing you all in various different sessions going forward.
Thanks so much! And thank you very much, Nishan.