IGF 2021 – Day 1 – WS #271 Youth Talk about IoT Security and AI misuse

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.  We all need it to be open and safe.  We all want to trust.

>> And to be trusted. 

>> We all despise control.

>> And desire freedom.

>> We are all united. 

>> JULIANA NOVAES: Okay, I think this is working now.  Hello, everybody.  Good afternoon and welcome to our session, a youth discussion on the misuse of AI and IoT technologies.  But I guess the title speaks for itself.  Our objective here is to have a discussion on how cybersecurity concerns figure in the debates regarding artificial intelligence and the Internet of Things.  Our objective is to understand what international standards could apply to IoT and AI, what good practices could be developed, and what current challenges we still face in this field. 

We have chosen to have a special youth perspective on the issue because we understand that young people are often an underrepresented group in the debates about artificial intelligence and the Internet of Things, even though we are active users of those technologies, perhaps the biggest group of users of technology.  So, our idea here is to have a discussion with a specific focus on youth, but that doesn't mean that you have to be young to be part of this panel.  This is an open discussion in which everybody's opinion is very welcome. 

So, we have four panelists today.  One of them is going to be online, but we have three of them here with us today.  So, I'll briefly introduce them.  We have Savyo Vinicius de Morais, directly from Brazil.  He's the Vice Chair of the DCI Working Group on IoT Security by Design and has a master's degree in computer science.  He's a Professor at the Federal Institute in Brazil, and his work focuses on supporting the deployment of existing cybersecurity standards and best practices for IoT, as well as identifying and bridging the gaps in them. 

We have Nicolas Fiumarelli, directly from Uruguay, a Computer Engineer who graduated from the University of the Republic of Uruguay.  Currently, he's studying computer science also at the same institution.  Nicolas is part of ISOC's Uruguay Chapter and he's a Co‑founder of the Youth IGF Uruguay Initiative and also is an activist on several topics, including emerging technologies, such as quantum communication and IoT. 

We also have Ihita Gangavarapu from India.  She's a scholar in India working on Internet of Things, security and smart cities.  She's also part of ICT Standardization bodies and also a member of the ITU Generation Connect Visionaries Board and a founding member of Youth IGF India. 

And we have Oarabile Mudongo here close to me.  He's an AI policy expert and a fellowship recipient at the Center for AI and Digital Policy.  He has previously worked as a policy researcher with Research ICT Africa and is currently pursuing a master's degree, with research on automotive facial recognition systems and algorithm governance. 

So, just to briefly explain the dynamics of our session today: we are first going to have a round of conversation among our panelists, so each one of them is going to speak for ten minutes on AI and IoT.  We're first starting with AI, then moving on to IoT.  And after that, we're going to have an open discussion with all of the participants using a platform called Mentimeter.  I don't know if you're familiar with that platform.  We're going to project some questions here on the screen, and you're very much invited to interact with us on that.  So, I'm now passing the word to Oarabile Mudongo, who will be speaking about AI first. 

>> OARABILE MUDONGO: Yeah.  Thank you so much for the opportunity and to be part of this dialogue.  I'm really here to just get to learn from everyone and get to understand how we can address these kinds of issues together. 

As my colleague has already mentioned, I've been working on issues related to artificial intelligence for quite some time, including a project on AI surveillance in Africa, as well as on the implications of AI for African societies, particularly at the intersection with digital inequality, and how we can address those implications. 

But coming back to the topic for today, you know, I want to believe that this session really opens up a dialogue where we are able to discuss, you know, emerging and cross‑cutting issues around trust, security, and the stability of AI technologies.  And I think it really opens the need for us to start really thinking about assessing and addressing the serious risks that this technology poses to Human Rights, privacy, and also the potential implications around how they perpetuate the digital divide, particularly from a Global South perspective. 

But I think about AI in the context of technology that promotes public good in the public service, where technology modernizes processes and where policies also speak to issues in a meaningful way to benefit how society operates.  In addition to that, I can really see AI evolving in a way that magnifies society's ability to use personal information in ways that can intrude on personal privacy interests, by raising the analysis of personal information to new levels of power and speed.  An example of this is how facial recognition systems offer a preview of the privacy issues that we are trying to grapple with today.  For example, we've been seeing different states across Africa adopting AI‑driven surveillance technologies, and most of the benefits and purposes of those technologies are really skewed.  You can't really differentiate what's legal and what's illegal there. 

You know, I really wanted to think of this topic as one where AI technology is really transforming areas of our society and opening new avenues.  I think it has shown us fresh possibilities for simplifying our lives, but more importantly, this technology can also deepen existing inequalities, particularly between those that have access to technologies and those that don't.  But more importantly, speaking in the context of how this technology affects society, we need to start thinking about addressing these issues within the community, where young people are involved in the policy dialogue and are given a space at the table to start contributing towards the policy development processes that speak to artificial intelligence. 

But then, to give a little bit of background on how we've come to talk about the degree to which AI perpetuates digital inequality, or possibly perpetuates it, and some of the possible implications for society: I think, so far, the developments in AI have been driven predominantly by the private sector.  And of course, there has been some growing interest from different governments, which is opening up new conversations about how we can develop AI strategies that speak to the needs we see in our societies.  And I think these strategies will subsequently help improve and grow governance processes in our governments as well. 

But while such developments can be seen as positive steps towards addressing these issues, I think we also need to address the gaps we see in our societies between the power of AI to augment skills and the resource deficit we have been witnessing, particularly in developing countries, where these technologies are being developed and are changing how society operates, but where at the same time there is an imbalance for those who don't have the proper skills to meet the standards of the industry. 

But in the context of the COVID‑19 pandemic, also, we have seen various measures established by a number of governments to contain the spread of the virus, including the launch of digital contact tracing technologies.  Around those technologies, we also note the underlying aspect of big data, where platforms are basically gathering data from users, with possible implications in terms of datafication and the growing digitalization we are seeing in society.  And what's really scary about these developments is the current data privacy legislation landscape in developing countries, where there isn't really much work being done to develop active data privacy frameworks to protect user privacy and address issues like that. 

But in terms of the key challenges we are seeing within the AI realm, there are, perhaps, issues around monopoly and the centralization of big powers within the private sector and big technology companies, where only a handful of tech giants have access to resources and are able to build AI technologies that are data‑intensive, and they are not really accountable to anyone. 

And the second challenge I think we've been seeing in the industry is around issues of data privacy, which I've already alluded to, and AI governance.  We lack proper frameworks to regulate the use of AI technologies, which might have long‑term implications around bias and discrimination, and of course, we have already witnessed such incidents. 

The other thing I think we need to look at here is addressing issues of industry norms.  With the big companies that are developing AI technologies and those that are adopting them in society, we need these systems to be embedded in societal values and norms, where they respect privacy, and perhaps standards around accountability and transparency as well.  But in terms of interesting developments we have been seeing as these challenges emerge, in Africa particularly there have been some emerging regional initiatives supporting strong data governance policies that enable data innovation.  For example, the African Union has recommended a digital transformation strategy to support the region's growth in the digital era.  And what's really interesting about such strategies is that some of these advancements are likely to shift the dynamics of Internet power and how industry operates and how governments operate or adopt those technologies. 

So, in terms of the approach, I think I'll touch on that later.  I can give it to the next speaker. 

>> JULIANA NOVAES: Thank you very much.  From your speech, what you mentioned about the digital skills gap is a really important aspect that is sometimes neglected in the discussions about AI and IoT, because when we're talking about cybersecurity, we're usually focusing on the corporate side and the technical side, and we forget about the user.  And when we're talking about the youth perspective on IoT and AI, it's really important to discuss that as well, because in a lot of cases, the young person is the end user, and that person does not necessarily have the skills to protect themselves online.  So that's also a very important point that you touched upon. 

I'm passing the word now to Nicolas Fiumarelli, who is also talking about AI. 

>> NICOLAS FIUMARELLI: Yes.  Thank you so much, Juliana.  Well, I will go a little deeper so you understand what the youth concerns about artificial intelligence are.  Because as you know, youth are very active internet users; we know that around 70% of the population using the internet is youth.  And, well, they have worries about their data, because sometimes they are activists.  So, what is it about AI, this bias in AI, that youth are wary of, right?  So, I will start by defining what artificial intelligence is, so you can have a sense of the different approaches and different things. 

For example, AI can be described as the technique that enables a computer or a computational system to mimic some type of intelligence, right?  But what is this intelligence?  It is that the machine is capable of solving a specific problem, like humans do.  But, as you know very well, today only well‑defined problems can be solved by these artificial intelligence systems.  Most people talk about artificial intelligence, but there is another concept, which is machine learning, right?  Machine learning is a particular type of artificial intelligence that refers to algorithms and techniques that learn by themselves when confronted with data.  So, as Oarabile mentioned with this big data concept, all these observations and interactions that the machine has with the surrounding world let it construct specific representations of reality and respond to the environment, right?  These specific characteristics enable us to use computers for new tasks that would be impossible to code manually. 
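
[Editor's note: to make the learn-from-data idea concrete, here is a minimal sketch in which a tiny classifier infers a rule from labelled examples instead of having the rule written by hand.  The library (scikit-learn) and the toy data are the editor's assumptions; nothing in the session names any tooling.]

```python
# Minimal sketch: a model "learns" a pattern from labelled examples
# rather than being coded by hand. Data and library are illustrative only.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [hours of daylight, temperature C] -> lights on/off
X = [[0, 15], [1, 16], [10, 25], [12, 30], [2, 14], [11, 28]]
y = [1, 1, 0, 0, 1, 0]  # 1 = lights on, 0 = lights off

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[3, 17]]))  # applies the learned rule, never written by hand
```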

So, what happens?  There are different application scenarios for artificial intelligence.  We have heard about speech recognition, for example, or the personalization of websites based on user interests, filtering in applicant screening, or clinical diagnosis.  So, a lot of applications that carry risks and, at the same time, some bias issues.  We will talk a little bit more about this, because there is a process that artificial intelligence uses, which is the collection of data from people ‑‑ surveys, pictures, maybe comments in chats. 

So, these chatbots learn a lot from the users and try to preprocess the data, to find some patterns, so they can respond, or at least have some representation of reality and try to mimic the thinking of a human in responding, right?  So, they are processing data; the training of the machine learning is based on a lot of data, the big data.  But the model that machine learning or artificial intelligence uses inside is like a black box, right?  Sometimes it is very difficult to explain what the algorithm has decided or how the algorithm decides something, and this is terrible, because, for example, there is a whole conversation now about autonomous systems at some frontiers that use facial recognition.  And what happens when this machine makes decisions about people?  There are several ethical questions, right, that we really need to start talking about.  It's like nuclear weapons: if we don't talk about this, or know who has the power over these technologies, we will be left behind.  So, it's not only a matter of controlling these technologies, but of knowing what this power is.  So, we need to control this. 

And there are some challenges, right, because this data collection could be based on incorrect or biased data.  It is known that, for example, chatbots began to be like (?) because it's all based on the data that they collect.  Another example is facial recognition: it is known that skin color is very important, because it is an intricate part of the mathematics inside facial recognition.  One consequence is that around 30% of black people are mismatched with other people.  So, maybe you enter a soccer game and the system identifies you as a bad person, but you are not that person; it is someone else, but the system gets confused.  That 30% is very bad.  So, there is that gap there.  The same happens with the chatbots. 
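
[Editor's note: the kind of disparity Nicolas cites is usually surfaced by a per-group audit of a matcher's error rates.  Below is a hedged sketch of such an audit; the groups and the results are invented for illustration, not drawn from any real evaluation.]

```python
# Sketch: measuring per-group error rates, the kind of audit that can
# surface the skin-color bias described above. All data here is invented.
from collections import defaultdict

# (group, correctly_matched) pairs from a hypothetical evaluation run
results = [("A", True), ("A", True), ("B", False), ("B", True),
           ("A", True), ("B", False), ("B", True), ("A", False)]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in sorted(totals):
    print(group, f"error rate = {errors[group] / totals[group]:.0%}")
```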

Sometimes, for example, there can be different interpretations.  For example, it is known that for women and for men there are different phrases used in comments on Facebook, and when the algorithm uses this big data, it is more common to see things like "she needs to take care of the kid" or "he is the strongest one."  So, these comments, these things that the algorithm processes all the time, can carry another representation and a bias.  So, machine learning models can become very complex, and artificial intelligence decisions are hard to understand.  So, this lack of transparency, as Oarabile was mentioning, may lead to a lack of accountability. 

So, the other piece is explainability, right?  Suppose an autonomous vehicle makes a decision: for example, given the calculated speed, there is no chance to avoid an accident, so the car recognizes this and needs to make a decision, maybe turning to the left or turning to the right.  Imagine the situation where it's not possible to avoid the accident.  Will it turn to the left or to the right?  Maybe on the left there are three people, and on the right there is one person.  And what happens if, I don't know, it goes to the left, so it will, sorry for the word, kill three people?  These things need to be decided, and someone needs to be able to explain what the algorithm's decision was.  So, AI explainability is something that we need to keep in mind, because it will be the future of these things. 
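
[Editor's note: one common family of explainability techniques asks how much each input actually drove a model's output.  As a hedged illustration, not a method anyone on the panel prescribed, the sketch below uses permutation importance: shuffle one feature and measure how much the model degrades.  Model, data, and library choice are the editor's assumptions.]

```python
# Sketch of a basic explainability technique (permutation importance):
# shuffling an input feature and observing the drop in model quality.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # three anonymous input features
y = (X[:, 0] > 0).astype(int)          # only feature 0 truly matters

model = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(imp.importances_mean)            # feature 0 should dominate
```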

So, I think that is enough to introduce some examples and concepts around AI misuse, or at least the bias issues that can arise with artificial intelligence. 

>> JULIANA NOVAES: Thank you very much.  I think you tackled important issues regarding user privacy and the bias that can come from the misuse of machine learning techniques.  Well, we are passing the word now to Ihita.  Yeah, I see her on the screen.  She's going to be talking about IoT. 

>> IHITA GANGAVARAPU: Thanks a lot, Juliana.  So, before I talk to you about IoT, I actually want to link up IoT and artificial intelligence, and then we will move to understanding why the security of the Internet of Things is important and why it is so difficult to secure these devices, right? 

So, before we start, let's think of a scenario.  Let us think of the human body, right?  We have senses: we sense sound, light, touch, smell.  All of this sensed data is sent throughout the body through a network of nerves, and the nerves in the brain process this data.  Based on that, an action is taken ‑‑ for example, the movement of a muscle; that is your actuation.  So, this combination ‑‑ from sensed data, to processing the sensed data for insights, to taking an action, the actuation ‑‑ is the combination of IoT and AI, which becomes the complete system.  So, that is how I would explain how IoT and AI are linked, and that's the reason why we have both topics in today's session. 

Now let's think of a situation where you have a manual thermostat.  Basically, if it gets hot in the room, you have to manually adjust the temperature based on your comfort.  Now, if you were to introduce sensors, right, if you were to convert it into an Internet of Things device, an IoT device, you would introduce a bunch of sensors: a temperature sensor, a humidity sensor, a carbon dioxide sensor.  All these sensors will collect the sensed data and accordingly make changes ‑‑ for example, reduce the temperature if it's getting too hot in the room.  Further, machine learning can be used on this data.  For example, with a threshold on the carbon dioxide level, the level of CO2 in a room can determine the occupancy of the room, and that is something it can do.  These insights and predictions are where machine learning and artificial intelligence come into play. 
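
[Editor's note: a hedged sketch of that thermostat example follows.  Sensed values drive actuation directly, and a simple rule over CO2 readings stands in for the occupancy inference that machine learning could refine.  The setpoint and threshold values are invented.]

```python
# Sketch of the smart-thermostat example: sensing -> decision -> actuation.
TEMP_SETPOINT_C = 24.0
CO2_OCCUPIED_PPM = 800   # hypothetical threshold for "room is occupied"

def step(temperature_c: float, co2_ppm: float) -> dict:
    return {
        "cooling_on": temperature_c > TEMP_SETPOINT_C,  # direct actuation
        "occupied": co2_ppm > CO2_OCCUPIED_PPM,         # insight ML could refine
    }

print(step(temperature_c=26.5, co2_ppm=950))
# {'cooling_on': True, 'occupied': True}
```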

So, we are seeing, of course, an exponential increase in the manufacturing, the deployment, and the usage of IoT systems.  And with IoT, we are entering the realm of cyber physical systems.  Cyber physical systems are, basically, the integration of computation, networking, and physical processes, in which the changes that are made happen in the physical world. 

If I were to give you an example of this: imagine a swimming pool, and in the swimming pool there is an actuator, and the actuation is the opening and closing of a valve.  Based on the chlorine level, the actuation happens.  Let us say that this entire system gets compromised, through malware or a cyberattack.  Then, even if the chlorine level in the water is fine, it may not act properly and may release a lot of chlorine, which can affect people's health and can even lead to injury or loss of life. 

Another example, like my colleague just mentioned, is an autonomous car, a driverless car.  The sensors, you know, are used to detect obstacles ‑‑ in our case, it could be a human being.  And if the system is compromised, the car will hit the person, right?  So, when you're looking at IoT systems, it is really important to secure them, especially because IoT systems are now very intertwined with our lives, at the rate at which they are increasing and being deployed. 

So, when you're looking at the threats to IoT in general, you're looking at two different dimensions.  One is that the device itself gets compromised ‑‑ because of, let's say, having a (?), for example, or getting malware through a software update.  The other vector we have is when the IoT device itself becomes a threat vector, so it is used to launch other ‑‑ (audio fading in and out) ‑‑ and most were attacked because of ‑‑ and they were used as attack vectors to attack the DNS server through Denial of Service attacks.  So, multiple requests were sent until its resources were depleted and it could not respond.  That is the Denial of Service attack that was performed. 
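
[Editor's note: from the defender's side, the flood pattern described here is often caught by watching per-source request rates.  This is a toy sketch; the addresses, counts, and limit are invented, and real mitigations operate at the network layer.]

```python
# Sketch: flagging sources whose request rate suggests a denial-of-service
# flood of the kind compromised IoT devices are used to generate.
from collections import Counter

requests = ["10.0.0.5"] * 950 + ["10.0.0.7"] * 12 + ["10.0.0.9"] * 38
RATE_LIMIT = 100   # max requests per source per window (assumed value)

counts = Counter(requests)
for source, n in counts.items():
    if n > RATE_LIMIT:
        print(f"possible DoS participant: {source} ({n} requests)")
```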

So, these are just some examples that can have very detrimental effects on our lives and our societies, and that's why it's really important to secure IoT devices. 

Now, one might ask: why don't we just secure them?  If security can save lives and make everything easier, why don't we just secure these devices?  IoT devices are what we call resource‑constrained devices: they are constrained in storage and in processing, and this makes it difficult to do any kind of encryption.  Let's say, for example, your device is sensing temperature data and you want to send it securely over the wireless interface; then you will have to encrypt it, and the encryption has to happen on board the IoT device.  That takes up a chunk of memory and processing, and that is a challenge with IoT devices. 
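
[Editor's note: for concreteness, here is a hedged sketch of that on-board step ‑‑ sealing one sensor reading with authenticated encryption before it crosses the wireless link.  It uses the third-party Python "cryptography" package, an editor's choice; the session names no library, and a real constrained device would more likely use a lightweight C implementation.]

```python
# Sketch: encrypting a sensor reading on-device before transmission.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # provisioned onto the device earlier
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must never repeat for the same key

reading = b"temp=23.4C"
ciphertext = aesgcm.encrypt(nonce, reading, b"sensor-17")  # data + associated data
# (nonce, ciphertext) travel over the air; the RAM/flash/CPU cost of this
# step is exactly the resource constraint described above.
print(aesgcm.decrypt(nonce, ciphertext, b"sensor-17"))
```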

Then there is the other challenge of cost ‑‑ functionality versus cost.  You would want to buy a low‑cost IoT device with the same functionality, so lots of manufacturers use low‑cost sensors, and that is another issue or challenge.  Then there is the deployment scenario.  If you're talking about smart cities, or IoT devices deployed in remote areas ‑‑ for example, to sense the moisture levels in an agricultural field ‑‑ these are places where physical tampering with the device is also possible, and you might not go out every time to manage the device.  The other issue is that when devices deployed in such remote places talk to other devices, or to your router, or to your cloud remotely, there are chances of a man‑in‑the‑middle attack or somebody eavesdropping on your personal information or your data. 

Then the other challenge we have is the threat landscape.  When you're talking about an IoT device, you're talking in terms of the device itself, then the communication network, and then the data goes to the cloud, so your landscape is very distributed.  Securing every interface becomes a challenge. 

And the last one, from my understanding and my research, is the fragmentation in the standards.  We don't have consistency in the standards across the globe for IoT security ‑‑ in terms of what the requirements to secure IoT devices are, how it should be done, and what the certification and labelling mechanisms are.  So, there is fragmentation.  This is a topic I will be picking up in the second half of today's session.  This is where I believe young people become an important stakeholder, as rightly pointed out by my friends Nicolas and Juliana earlier.  So, it's important, when we're designing standards, to have an open, transparent, and multi‑stakeholder model so that everybody's perspective ‑‑ the manufacturers, the retailers, the consumers ‑‑ is on board.  And this is where young people should also have a say in how the standards for IoT security should be designed.  Thanks. 

>> JULIANA NOVAES: Great.  Thank you so much.  Now we have our last speaker, Savyo, who's also going to be talking about IoT. 

>> SAVYO VINICIUS de MORAIS: Thanks, Juliana.  So, good afternoon, good evening, good morning, wherever you are.  I'm going to talk a bit about one specific use case of IoT, the home IoT scenario.  This has been my research during the last three years.  One concern with the current model of operation of home IoT devices is that most of the devices available in the market have cloud‑based operation, which means that even if I'm in my house and I want to switch off my (?), I have to connect to the device manufacturer's cloud, and it sends the request back to the device.  When I say this in technical terms, it doesn't sound that bad, but in different words, the manufacturer of your (?) knows when you are sleeping, when you wake up, when you are watching movies, because you shut down your light.  So, this is really complicated from the point of view of privacy and other things. 
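
[Editor's note: the two operation models Savyo contrasts can be pictured as follows.  This is a hedged sketch only; the hostname, URL paths, and device API are invented, since commercial devices each expose their own interfaces.]

```python
# Sketch: cloud-routed versus local-only operation of a home IoT device.
import urllib.request

def switch_off_via_cloud(device_id: str) -> None:
    # Cloud-based operation: the command round-trips through the vendor,
    # so the vendor learns when (and how often) you use the device.
    urllib.request.urlopen(
        f"https://api.example-vendor.com/devices/{device_id}/off")

def switch_off_locally(device_ip: str) -> None:
    # Local operation: the command stays inside the home network, so no
    # third party observes the event and no internet uplink is needed.
    urllib.request.urlopen(f"http://{device_ip}/off")
```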

Think, for example, of scenarios where we have many sensors from the same manufacturer in one home with this cloud‑based operation architecture.  If you have, for example, motion sensors, light switches, your TV, your music player, and so on, the manufacturer itself can draw deep inferences about your routine and what you like to do. 

More than this, there is also a problem of reliability.  Imagine that your door lock is connected to the internet and uses this type of communication.  Your internet goes down, and you cannot leave or get into your home.  So, this is also a problem: you just can't rely on this type of device.  And there are more things, not only in the individual scenario but in the collective scenario.  For example, also through misuses of artificial intelligence algorithms, someone can derive information about the community and other things, even just by capturing this data from the network flow.  Someone at the ISP, for example, can run an algorithm that infers the types of services in use based on the requests those devices make to the manufacturer's website, the service provided by the manufacturer.  So, we have these collective risks as well. 

Some devices also allow you to operate them only in the home network, but these are the minority, actually.  And even considering the exclusive use of this type of communication in the local network, we still have problems in developing the standards for communication and, mostly, for configuration, because the end user has no expertise in security; they often barely know how to connect the things to the Wi‑Fi and then start using them.  The reason people prefer cloud‑based operation is that it makes the configuration of the device easier, for operating the device both from your home and from outside the home.  And this is mostly because of the use of network address translation by the user's home router, which hides the devices that are behind the home router, the home gateway.  So, it's hard for the end user, or even for me, to reach the device from outside, especially if the ISP uses CGNAT.  These kinds of things also reinforce the use of cloud‑based operation. 

But coming back to the point of configuration, we do have some protocols that support this type of thing, but the thing is that they are insecure.  We have, for example, Universal Plug and Play, which supports this type of device configuration, but we have known for five or ten years that it is insecure.  So, we still don't have secure protocols of this kind for the devices, or even for the users that are connecting and configuring them through a user interface, like a smartphone app.  That is also a problem. 

And moreover, we also have problems, for example, if you have a security surveillance camera in your house and you have to access it through your browser: you still can't trust the https certificate, because in these kinds of operations the (?) browsers associate the certificate that encrypts the whole communication with a host name, a DNS name, and this DNS name has to be in the public domain.  You can't take, for example, a mycamera.local domain name for your camera and associate it with a certificate for creating an https connection, to encrypt the communication with your camera or with any other sensors.  So, these are the main problems. 
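
[Editor's note: one workaround sometimes used for exactly this gap is certificate pinning ‑‑ since no public CA will issue for a .local name, the client records the camera's self-signed certificate once and verifies it on every later connection.  The sketch below is the editor's illustration of that idea, not something proposed in the session; the pinned value is a placeholder.]

```python
# Sketch: pinning a local camera's self-signed certificate by fingerprint.
import hashlib, socket, ssl

PINNED_SHA256 = "..."  # recorded once, at first pairing with the camera

def connect_to_camera(host: str, port: int = 443) -> ssl.SSLSocket:
    # Disable CA and host-name checks (no public CA vouches for .local)...
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    # ...and enforce our own pin on the device's certificate instead.
    der_cert = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate does not match pinned fingerprint")
    return sock
```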

We have some other things that we can talk more later, but this is a starting point.  So, thank you. 

>> JULIANA NOVAES: Thank you.  Just commenting briefly on what you said, and also on what Ihita said earlier: I thought it was super interesting that you mentioned that one of the problems with IoT security is manufacturers using cloud services to make those devices available.  But sometimes the companies that are deploying these kinds of devices are not really concerned about what's behind the architecture of IoT; they're only concerned about reducing the costs of the devices, so they are not really worried about whether those devices come with proper security, or whether they are properly secured in terms of the architecture in which they are embedded. 

Well, we are now, technically, moving on to our Mentimeter session, which is a space for you to participate in our discussion.  However, I saw that there are some messages in the chat.  So, before we move on to the Mentimeter activity, I just wanted to give the floor to maybe two people, one on site and one online, if you have any questions.  Feel free to either unmute yourself, if you're online, or just come to the mic.  Two people, so we don't take too much time.  So, those of you who are typing in the chat, if you have any questions, feel free to ask.  I'll pass you the mic. 

>> So, basically, it's on?  Are you able to hear me?  Basically, from the discussion, there are two terms that are being used interchangeably: IoT and AI.  The question is, when does IoT become AI?  Because as you are aware, AI is the device being able to learn and to make decisions independently, based on the data that has been (?).  There comes a time when an IoT device learns and understands these operations, whereby it now operates autonomously; that then becomes AI.  So, my question is ‑‑ you partially answered it, I know, but I want an answer from the panelists ‑‑ when does IoT become AI?  Because when you discuss IoT, whether you like it or not, you're going to bring in AI, and when we discuss AI, whether you like it or not, you have to bring in IoT.  So, between these two terms, when does IoT graduate into AI?  Thank you. 

>> JULIANA NOVAES: I think I'll give the floor to whoever wants to take that. 

>> NICOLAS FIUMARELLI: Yes.  When we prepared this session, we didn't think of the idea of mixing AI and IoT, like having the artificial intelligence inside the sensor, right?  We set it up so that on one side we had IoT security, and on the other side the misuse of artificial intelligence.  Maybe we have received criticism around that.  But we know there are bias issues and safety issues in artificial intelligence that need to be solved, or at least addressed.  But yes, I think there can be a combination of IoT and artificial intelligence in some manner.  I don't really think the intelligence, the decisions, will live inside the sensor, because of the capacity constraints, but maybe at some point all these sensors' data comes to a gateway, and the processes that handle all this data will take decisions based on artificial intelligence.  So, just to clarify that.  I don't know if you want to answer. 

>> OARABILE MUDONGO: Not really an answer, but just to reply as well: I think they exist interchangeably.  That's why they are termed emerging technologies.  What distinguishes them is that IoT deals with how devices interact with the internet, whereas AI is about the data being fed into these devices so they can operate.  I'm not sure if that really explains it.  Who wants to take it? 

>> JULIANA NOVAES: I think Ihita also wanted to comment on that. 

>> IHITA GANGAVARAPU: Yes.  Thanks, Juliana.  So, I just wanted to say how you can link up IoT and AI, from my perspective.  For example, if you have sensors on your body measuring your blood pressure or your blood pumping rate, or various sensors looking at different functionalities of your body, then all this data goes to, let us say, a cloud, a platform, or a server, where it will be processed, in real time, let's say, and insights are derived from it.  That insights part, the prediction part, the analysis part, and eventually the actuation part, is where machine learning comes into play.  And if we want to build the link Nicolas was talking about, where artificial intelligence runs on the device itself, making it a smart device: in IoT there are a lot of resource constraints, like I mentioned, in terms of power consumption and storage overhead, so that makes it difficult.  But if you were looking at another smart device which allows for a lot of computation, then you can run machine learning and artificial intelligence algorithms on it. 

It does the processing, and the smart device itself does the actuation.  So, the instructions you give it, based on the data it senses, it performs right then and there.  That's where I think the relation between IoT and AI lies.  And similarly, in terms of security, if I may give an input: if the data being sensed by the sensors of the device gets corrupted, the dataset that you're using to train your machine learning models will get corrupted, eventually affecting the predictions or insights that you're getting out of the models. 

>> JULIANA NOVAES: Thank you.  Then Savyo. 

>> SAVYO VINICIUS de MORAIS: Thank you again for the question.  I completely agree with you, there is a big connection between the two scenarios.  By the way, machine learning reinforces IoT from the point of view of usability: if you have to go and manually operate an Internet of Things device, it loses a lot of its potential for making life easier. 

But at the current moment, as the other panelists said, we still have so many resource constraints, and we have just a few applications, like if‑this‑then‑that services or simple configurations linking sensors and actuators, for home IoT or even for wearable devices like smart watches and so on.  These need to be a concern.  And this also goes to my point about local operation, because it gives more reliability, and it takes the inferences away from the manufacturers; it keeps the inferences about your life in your local network, with you.  You can decide to put them in the cloud, but you opt in to that.  It should be your option, not the manufacturer's, as in the current IoT model.  But yeah, I think that in a few years, when we have more computing power in home IoT devices, we need to be prepared for this concern and also make the devices secure enough that anyone who tries to get unauthorized access to them simply can't.  Thank you. 

>> JULIANA NOVAES: All right.  Thank you.  I'm just going to allow one more question from ‑‑ let me see here on the list.  Fred?  You can unmute yourself and ask your question.  And I would just ask please to make it brief so we can move on to the Mentimeter.  Fred, are you there?  All right, so I guess Fred ‑‑ oh, yeah.  I see.

>> FRED: Can you hear me? 

>> JULIANA NOVAES: Yes, please go ahead.

>> FRED: Okay.  Thank you very much for the opportunity.  So, mine wouldn't be a question but a little follow‑up to the answer that Ihita gave.  I think Ihita explained it very clearly. 

So, when you take AI and IoT in themselves, they can actually stand separately, in silos, and there are certain instances where you will be able to bring them together.  When you take IoT in itself, we are talking about embedded devices, and those embedded devices can be programmed to perform just one task.  And AI in itself is also like a software, an algorithm that you program, and that algorithm analyzes some data, the test data that you are giving to it.  Once it passes all the tests that you've assigned it and you think your model is ready, then you can deploy that model onto a machine to do the analysis for you any time there is similar data coming into it. 

When you take the two instances, and you are looking at IoT being scaled to be able to perform that same action on a very large scale, then you can apply AI to the IoT, and at that point you get the IoT device to perform a lot of analytics on the data that you are receiving from the embedded system, the IoT system, and that is where the AI comes into play. 

Then, when you are looking at security, I think the last speaker mentioned something about Edge computing, where your data is resident on the local device that you are using, but you can choose to sync it to the cloud.  So, it is also very important that that choice is given to the users, to decide whether they want to sync their data to the cloud or not, and that speaks to data privacy and security.  Thank you. 

>> JULIANA NOVAES: Thank you very much for your comment, Fred.  Since it was a comment, I'll just move on to the Mentimeter in the interests of time.  But thank you very much for your considerations.  I think they're very relevant to our discussion and have a lot to do with what we're talking about on the relationship between AI and privacy and security. 

So, I've sent the link for the first Mentimeter activity on Zoom.  For those who are on site, there is a code on the screen; you can see it there.  You can go to www.menti.com and access the same screen you see here. 

So, the first question we have is: what are the best approaches to establish processes for meaningful public participation in the development of national AI policies in Africa?  If you want to give us a little bit more context on what the question means and what participation you're expecting from the audience. 

>> OARABILE MUDONGO: Yeah, when you look at the processes of AI development across the globe, they're mainly driven by multinational corporations and researchers and activists from mainly Global North countries.  And we are saying that has to change; we need to see a global balance where stakeholders are involved in these global development processes.  But more especially, in relation to this topic, we have been seeing a real lack of national AI policies in Africa, and we don't really know what the gaps there could be, and we're trying to understand meaningful ways that we could promote public participation, particularly for all stakeholders in this process. 

>> JULIANA NOVAES: Thank you very much for the context.  We now have a vote in which we have seven points for "Develop a common ethical and human‑centered basis for AI," and in second place is "Prioritize and support research in AI."  In order for us to continue the debate on this, I'm going to invite anyone who has voted in this Mentimeter and wants to share their perspective to take the floor, either by raising your hand in the chat or by just stepping up to the microphone down here.  So, if you want to participate in this discussion on the approaches to establish processes for meaningful public participation on AI in Africa, feel free to raise your hand or come here to the floor.  I'm going to allow one person, or maybe two, depending on the time. 

I see that Nancy has her hand raised on Zoom.  So, Nancy, if you want to unmute yourself and take the floor.

>> NANCY MARANGU: Thank you very much for the opportunity and greetings from Kenya.  My question is, how are we factoring in the aspect of inclusion?  What are we doing for youth with disabilities when we are talking about these emerging technologies?  Are we including them in our workings?  And how can we make these technologies accessible to them?  Thank you. 

>> JULIANA NOVAES: Yeah, good question.  I think your question has a lot to do with the development of human‑centered AI, which is also one of the topics that we are discussing here in the establishment of processes for meaningful public participation.  So, if anyone wants to comment on that, feel free to do so.  We are good?  Okay.  All right. 

>> OARABILE MUDONGO: Yeah, Nancy's question is really timely and important, and I think this is one of the key issues that we often don't really think about in terms of policy development, particularly when we talk about how emerging technologies affect society.  I think it's really high time that policymakers start thinking about how we can develop inclusive policies that also cater for people with disabilities and address issues around accessibility.  In terms of what is really happening, I'm not in touch with any policies that speak to issues around disability where I come from, but I'm really happy to hear what other participants here think. 

>> JULIANA NOVAES: All right.  Here is the link for the second Mentimeter.  We have the same thing: there is the link in the chat, and there is a code here, so you can access it through the platform by just typing in the code.  So, Nicolas, if you want to introduce the question. 

>> NICOLAS FIUMARELLI: Yes.  Because maybe you cannot see the screen, as it is really small: which of the following scenarios do you think poses a higher risk of AI bias for the future, and why?  Here we have four different options, like multiple choice.  The first one, A, is skin color bias in facial recognition.  That is one of the things I mentioned in my talk at the beginning.  So, if you think that poses the higher risk, please mark it on the Mentimeter. 

The second one, B, is that AI is likely to show language bias and reinforce existing prejudices.  That is what I talked about: some chatbots treating women and men differently because of different contexts in the data, where some groups are underrepresented.  So, maybe that is a risk. 

So, option C is an overrepresentation of certain factors in the training data sets.  This means there are factors that are overrepresented, so maybe our minorities are not taken into account. 

And D, the last option, is extrapolating what is true of individuals to an entire group.  For example, people with disabilities are a small group, maybe from one region, so the algorithm identifies some patterns from this group, assumes that all individuals in the group have the same characteristics, and then makes errors, right?  So, we'll see.  For now, we have D and C, so the highest‑rated risks are the overrepresentation and the extrapolation from individuals to the entire group.  Two Cs.  And someone put a comment there ‑‑ "All of them and their ramifications are extremely dangerous.  It's like comparing four poisons and asking which is a killer."  And we have C and D, so overrepresentation or underrepresentation of the data is the riskiest thing for the audience.  "The source of other problems."  Okay.  Maybe we could take comments or continue to the next one. 

>> JULIANA NOVAES: Just a disclaimer: there's no right or wrong answer, right?  The idea is for us to discuss these things.  So, whoever made the comment, "All of them and their ramifications are extremely dangerous.  It's like comparing four poisons" ‑‑ if that person wants to speak up, I would love to hear what you have to say about the topic.  I guess it's somebody very shy. 

>> SAMIK: Actually, I wrote that. 

>> JULIANA NOVAES: Yeah, go ahead.

>> SAMIK: I think it's like, yeah, as I wrote it, as simple as a poem, you know?  Because, like, where you come from and the reason, or like, what you represent, the minority, the majority, you know.  So, like, it's like that.  Everything related is dangerous, I think.  You cannot just focus on one, A or B or C or D.  So, yeah, that's the point. 

>> JULIANA NOVAES: Okay, if ‑‑

>> If we don't have a question, we can continue with the next one. 

>> JULIANA NOVAES: You have a question, yeah.  If you have a question, feel free to go to the mic and ask. 

>> Okay.  So, thank you for your presentation.  In relation to the comments about how privacy and discrimination in artificial intelligence can sometimes come with aspects such as facial recognition: how can we ensure that the technology and tools used to collect data are adequate to legally identify the user, in order to promote, respect, and protect human rights? 

>> NICOLAS FIUMARELLI: Well, there are some practices to ensure this, which aim to include all of the voices and representations of the populations.  These techniques are like adding more data for the underrepresented minority groups.  That could be a solution. 

So, if you have, for example, someone with violet hair, which maybe is not so common globally, then you need to add more data ‑‑ if you are talking about a face recognition system, or if you are talking about, I don't know, the density of the hair on your head.  So, you need to multiply the quantity of data from the minority groups, I think. 

And with the overrepresented data, it's the same: you need to remove some of this data from the majority groups, to leave more space for these smaller groups.  Those are some techniques used nowadays, and there are practices that aim to do this in a general way.  Because if not, we will end up with millions of different algorithms that are really biased. 
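
[Editor's note: a hedged sketch of the two rebalancing moves Nicolas names ‑‑ oversampling the minority group and undersampling the majority ‑‑ follows.  The group labels and sizes are invented; in practice this is done per protected attribute on a real training set.]

```python
# Sketch: rebalancing a skewed dataset by undersampling the majority
# group and oversampling (with replacement) the minority group.
import random

random.seed(0)
majority = [("group_a", i) for i in range(1000)]
minority = [("group_b", i) for i in range(50)]

target = 500
balanced = (random.sample(majority, target) +       # undersample majority
            random.choices(minority, k=target))     # oversample minority
print(len(balanced), sum(1 for g, _ in balanced if g == "group_b"))
```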

>> JULIANA NOVAES: Thank you.  You've got a question.  Yeah, feel free to come.

>> DAVID: Hi, all.  It's great to be here.  I'm David from United Nations Climate Change.  I'm actually listening to this with great interest, because a lot of the solutions that can be applied to our AI algorithms can be part of the solution to climate change, or to mitigating climate change.  The question is: we are at the Internet Governance Forum, and from the governance perspective, I see a probable future where, if we just let corporations walk away with it, youths don't have any privacy.  But what would be the desirable future, and how do we get there?  What governance and policy measures could be put into place today, on an international but also on a national level, to bring us to a desirable future? 

>> JULIANA NOVAES: Yeah, great question. 

>> OARABILE MUDONGO: A possible scenario in this question that comes to my mind, from the positive side of it, would be a situation where we are able to use AI to assess and predict the risks and damages related to climate change ‑‑ in that case, by applying algorithms, which would then allow us to assess the impact on the environment.  That's just the closest I can really think of.  I'm not really very knowledgeable in that area, but I'd like to pass it on to other colleagues here. 

>> JULIANA NOVAES: Just stepping out of my role as moderator here, I really liked your question, because I think when we look at technologies such as AI and IoT, we have to keep in mind that they are human‑shaped, right?  Technology does not develop on its own.  It doesn't go down a path with which we cannot interfere.  The same way we created this, we can also use it for our own good.  So, we cannot look at AI and IoT from a deterministic perspective and just say, okay, we cannot control that, so let's just deal with the fact that this technology is taking our data and maybe using it for purposes that are not to our benefit.  I think the fact that we are here in this forum means that we acknowledge that, that we don't think this is the way forward.  And I think an important thing we can do in order to really use these technologies for good purposes, especially when talking about the environmental cause, is to be aware that there might be risks, try to think about good practices to mitigate them, and engage in these policy discussions and in the good corporate practices that exist and are being developed year by year, because there are discussions happening.  And yeah, we can shape it the way we want, if we want to.  So, if you want to add to that. 

All right, so, moving on to the third Mentimeter.  Thank you, Nico.  I already sent the link.  So, Nico, if you want to introduce the question. 

>> NICOLAS FIUMARELLI: Well, again on artificial intelligence: what do you think is the best way to reduce bias in machine learning algorithms, and why?  The options are: A, design AI models with inclusion in mind; B, perform targeted testing ‑‑ this means testing exactly on those minority groups that you want to see included; and C, improve AI explainability ‑‑ this is knowing what the algorithm has decided and why, right?  So, if you improve explainability, it supports the other two points, and you can get a sense of where to reduce the bias. 

>> JULIANA NOVAES: Thank you, Nico.  We already have some comments here.  Whoever wrote a comment, if you want to share.  I don't know who's snoring, but I hope it's not my mic.  (Snoring).

>> A multifaceted strategy needs to be implemented that would address all three, namely, A, B, and C. 

>> JULIANA NOVAES: Aaron has his hand raised.  So, Aaron, I'll give you the floor.

>> AARON BUTLER: I hope you can hear me.  I was the one who wrote about a multifaceted strategy.  The reason I thought of that is that, as I was reading the options, I was reminded of something I had read from the philosopher of the ethics of artificial intelligence, Nick Bostrom.  One of the things he mentions is that if we're to have AI used, for example, in the public space, in roles that in the past ‑‑ let's say 50 years ago or so ‑‑ would have been filled by humans, then one of the acceptability requirements is that the system would have to satisfy the social requirements of that usage in a public space.  But a lot of these social situations and problems are very complex.  And out of the three options that were there, given all of that, I think we would need to use all three of them, at the very least, in order to get at some of the problems we've been having that lead to, you know, violations of human rights, discrimination, and so forth.  And that's why I wrote "a multifaceted strategy." 

>> JULIANA NOVAES: Thanks, Aaron.  Couldn't agree more.  I don't know if I have anyone else here.  Yeah, I'll just send the link for the fourth one.  Oh, okay, yeah. 

>> Hi.  Hi, everyone.  My name is Anna, from Brazil.  I'm a law student, and I was the one who picked A, B, C as well.  I was wondering specifically about AI in the decision‑making process.  Where I come from, we do not use it for decision‑making, but I've been reading about some countries that do use it, such as the U.S., and they have really big problems with point C, because sometimes states have the right to hire private companies that don't disclose how this analysis of the data is being made.  And I was wondering if you have any input on how we can use AI in the decision‑making process: whether it is possible to do it in a non‑biased way, or at the very least, whether it is possible to use it just for assisting a judge in a way that is not a threat to the victims or the people being accused of crimes.  So, yeah, that's my question, and thank you.  Excuse me. 

>> JULIANA NOVAES: Thank you for the question.  Nico? 

>> NICOLAS FIUMARELLI: Yes, I could answer.  Well, I think there are two separate things.  Decision‑making processes, where the state wants to take some decision for the future of the country or needs to work out how to proceed on something, are one thing.  But, for example, for a judge, when you have a judge there who needs to make a decision with all the information ‑‑ maybe you have a crime or something, and they have the photos, the charts, and with all that information they need to decide ‑‑ this bias issue will appear in all of the things I mentioned.  So, in the end, it's a complex thing. 

I haven't really heard of any judges or legal processes using these algorithms now, but for sure, it is something that we need to keep in mind and explore, to see how, with these multidimensional things, we can deal with the bias issue. 

>> OARABILE MUDONGO: Yeah, in addition to what you're saying as well: isn't it interesting that oftentimes, when we talk about biased and discriminatory AI technologies, we react only after the incident has happened, yeah?  How about we start addressing the issues at the product development level, where technology companies train their product and software developers on the importance of embedding ethical policies and practices in their product development, so that it saves us the time of having to fight these litigations in court?  To me, it really doesn't make sense at all.  And I think it speaks to the ignorance of big technology companies, which often lack oversight in their product development.  And I think it's our role as Civil Society to start speaking out about these issues.  That's just what I think. 

>> So, hello, I'm from Poland, and I'm a physics student.  As I understand it, the bias in artificial intelligence always depends on the data, on the data selection.  But actually, isn't it that we always need some bias, let's say, in the sense that we need to select the right data?  For example, data that represents some human values, so we can speak about human‑centered artificial intelligence, because it requires that the program work in a way that is consistent with our values.  So, actually, we always need some kind of bias, but then who gets to select this data, right? 

>> OARABILE MUDONGO: That's an interesting question, actually.  The same question you're asking is, more or less, like you as a white person developing a facial recognition technology that is going to identify me as a black person, and that very same technology fails to identify my black skin.  So, what I'm trying to say here is that product developers and the people that feed the systems with data need to be cognizant of the inclusion aspect, of how they train the algorithm, yeah? 

>> JULIANA NOVAES: Yeah.  Can I just intrude?  Sorry.  I think the point that you brought up is really interesting, because, of course, if you put rubbish in, there's going to be rubbish out.  That's the principle, right?  But the people who are assessing what's rubbish or not should also be aware of their own biases.  That's why one of the most interesting points in this discussion is that we should not only look at the data but also at the people who are assessing that data.  We need to include people who are part of minority groups in the development process and in the data analysis process, so that these people, who are conscious of the biases that affect them, are able to intervene and, well, basically get rid of them.  But yeah, very good point.  Thank you for bringing that up, and I'm passing this on. 

>> SAVYO VINICIUS de MORAIS: Thank you.  Thank you for the question, actually.  I'm not exactly the AI guy, but okay, come on.  There is also a fundamental point, from the point of view of AI, that lies not only in the training of the AI but in what features, what aspects of a person, of a piece of data, you are taking into account.  So, for example, when you are trying to recognize a person, what should the system look for?  Is it the shape of your face, the color of your skin, your hair, or something else?  We still don't completely know how to analyze many of these things for use as features when classifying data and drawing inferences with AI algorithms.  So we still have biases in our choice of those points, because we still don't have a complete understanding of what we should take into account.  (Audio and video frozen)
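Savyo's point about feature choice can also be shown in a few lines: which attributes a developer decides to extract is itself a design decision made before any training happens, and bias can enter right there.  A rough sketch, with hypothetical record fields:

    # Sketch: the feature set is itself a human decision where bias can enter.
    # The record and feature names below are hypothetical.

    def extract_features(record, feature_names):
        """Project a raw record onto the features the developer chose."""
        return tuple(record[name] for name in feature_names)

    person = {"face_shape": 0.7, "skin_tone": 0.2, "hair": 0.9, "gait": 0.4}

    # Two developers, two feature choices, two different representations of
    # the same person -- and therefore potentially different classifications.
    print(extract_features(person, ["face_shape", "hair"]))  # (0.7, 0.9)
    print(extract_features(person, ["skin_tone", "gait"]))   # (0.2, 0.4)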

>> JULIANA NOVAES: Let's move to the next Mentimeter activity.  Nico, if you want to. 

>> NICOLAS FIUMARELLI: This one is also from Savyo. 

>> JULIANA NOVAES: No, the screen.  Thank you. 

>> NICOLAS FIUMARELLI: Okay, so, thank you again.  This question ‑‑

>> SAVYO VINICIUS de MORAIS: Thank you again.  This one is also from me, wearing my hat as Vice Chair of the IoT Security by Design Working Group in the Dynamic Coalition on Internet Standards, Security and Safety.  How can we address the problems around the operation of home IoT devices, or other types of devices ‑‑ wearables, smart cities and so on ‑‑ but mostly home IoT?  So, how can we handle the problems with these kinds of things?  And what is your main concern regarding this?  Go ahead. 

>> So, hello.  I am a security specialist in IoT.  My main concern is about whether we can use the same system across multiple services.  Because usually, when you are dealing with this kind of technology, the devices are tied to a company, so you need to use that company's systems and infrastructure.  So, my main concern is about that: if I bought this device, can I change the system?  Can I use it for my own interests, on my own infrastructure, without depending on some company or some brand to be able to use this technology in my house?  Yeah.  Thanks. 

>> SAVYO VINICIUS de MORAIS: So, you mean something like the freedom to change the firmware of the device, or even to update my device, the hardware of my device? 

>> Yeah, actually, not only that.  Even if I want to keep the device exactly the same, instead of sending the data to the company's servers, I could send it to another company or even to myself.  So, that's what matters the most to me: the possibility of selecting which system will process this data. 

>> SAVYO VINICIUS de MORAIS: Ah, thank you.  That's the point.  You should have the opportunity to choose where the data is going.  And as I mentioned ‑‑ just one last comment on this point ‑‑ the DC‑ISSS, the Dynamic Coalition on Internet Standards, Security and Safety.  This is also an invitation for everyone who is interested in cybersecurity topics; IoT Security by Design is only one of them.  We also have working groups on procurement and on digital education initiatives for cybersecurity, not only for IoT. 
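The freedom the participant is asking for can be pictured as a device whose telemetry destination is an owner‑controlled setting rather than a vendor constant.  A minimal sketch, assuming a hypothetical local config file and endpoint, using only the Python standard library:

    # Sketch: the owner, not the vendor, chooses where telemetry goes.
    # "telemetry.json" and the endpoint URL are hypothetical placeholders.
    import json
    import urllib.request

    def load_endpoint(config_path="telemetry.json"):
        """Read the owner-controlled destination from a local config file."""
        with open(config_path) as f:
            # e.g. {"endpoint": "https://my-own-server.example/ingest"}
            return json.load(f)["endpoint"]

    def send_reading(reading, endpoint):
        """POST one JSON reading to whichever server the owner configured."""
        data = json.dumps(reading).encode()
        req = urllib.request.Request(
            endpoint, data=data, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # The same firmware works unchanged whether the endpoint is the
    # manufacturer's cloud, a third party, or the owner's own server.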

But for sure, I'm going to take those points ‑‑ maybe this comment was from the point of view of an attack coming from the device's manufacturer, and of the excess of power also sitting with the manufacturer.  I think that is it, and I hope we have made the point. 

>> JULIANA NOVAES: Thank you.  Does anyone want to add to that technical question?  Well, we have two comments in the Mentimeter, so I guess, if the people who added them want to speak up, please do so.  Otherwise, we move on to the next Mentimeter question, which is also from Savyo.  So, if you want to ‑‑

>> SAVYO VINICIUS de MORAIS: So, back to the point of changing the way we operate the IoT devices in our homes.  I have suggested some options on how to face this issue, besides the option of keeping things as they are.  The main point of this is configuration.  Should we improve the user interface and user experience of configuring a device; outsource the security deployment to your ISP, for example; educate end users to do a proper configuration; or use plug‑and‑play protocols and similar standards?  Or, if you have any other suggestion, please feel free. 
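As a hedged illustration of the plug‑and‑play option: one real approach in this spirit is the IETF's Manufacturer Usage Description (MUD, RFC 8520), where the manufacturer publishes a description of what the device should be allowed to do and the home gateway enforces it, deny‑by‑default.  The sketch below uses a deliberately simplified policy format, not the actual MUD data model, and the host names are invented:

    # Simplified, MUD-inspired sketch: translate a manufacturer-published
    # allowlist into deny-by-default home-gateway rules.  The policy format
    # and host names are hypothetical stand-ins for the real RFC 8520 model.

    SIMPLIFIED_POLICY = {
        "device": "smart-bulb",
        "allow": [
            {"host": "firmware.vendor.example", "port": 443},  # updates
            {"host": "time.vendor.example", "port": 123},      # NTP
        ],
    }

    def to_firewall_rules(policy):
        """Expand the allowlist, then block everything else."""
        rules = [f"allow {policy['device']} -> {r['host']}:{r['port']}"
                 for r in policy["allow"]]
        rules.append(f"deny {policy['device']} -> *:*")
        return rules

    for rule in to_firewall_rules(SIMPLIFIED_POLICY):
        print(rule)

The appeal for end users is that the secure configuration ships with the device instead of depending on their expertise.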

>> JULIANA NOVAES: We've got one comment on educating end users.  Okay, more are coming in now.  If you want to step up and talk about it, feel free to do so.  This is a forum, so you're very much invited to speak.  We don't want this to be a lecture, so, yeah. 

All right.  So, I guess if nobody wants to speak, we can move on to the next one.  Now to the sixth one, which is from Ihita, so, if you want to introduce the question, please. 

>> IHITA GANGAVARAPU: Yes, yeah.  So, my question is about, as I mentioned previously, the challenges of securing IoT devices given the fragmented standards that we currently have.  So, do you think it is necessary to have standards for the security of IoT?  I'll just take up two quick questions, after which we can have a discussion on the next two or three we have.  Great.  Thank you.  Would somebody like to talk about why standards would be required, just to put it in perspective? 

>> JULIANA NOVAES: Yeah, go ahead.

>> Okay.  Hello, again.  I would like to argue that we need standards, also in connection with the last question, because when these standards are well known in the industry, the user will be able to ‑‑ if they understand the functionality of one system, they will be able to use others, too.  If you have these fragmented types of connections and systems, operating systems, UIs, the user will need to learn everything from the basics again.  So, if you have standards also for the configuration process, a person who learns how to configure one system will be able to configure others, too.  So, yeah, that's it.  Thank you. 

>> JULIANA NOVAES: Great.  Any more comments? 

>> IHITA GANGAVARAPU: Thank you. 

>> JULIANA NOVAES: Yeah, Ihita, if you want to have follow‑up comment on that. 

>> IHITA GANGAVARAPU: Yeah, I would just like to add to that.  Those points are exactly what we're looking at, you know, standards for the security of IoT.  And I think previously on the panel we mentioned digital skill sets and literacy.  Bringing in standards, these norms, will make sure that even if you're not very well skilled with the devices, the devices still meet baseline minimum security requirements, and do so without compromising or affecting the functionality of the device.  It's something that people might not be aware of.  So, I think standards are the one main thing, just like my friend mentioned.  So, yes. 

If there are no more comments, can we move to the second question, since we are running out of time? 

>> JULIANA NOVAES: Yeah.  We actually have one more question by Sarah.  I'll give you the floor, but please, if you can be very quick because we need to wrap up.

>> Sarah: So, I just wanted to add that we need to have standards, because if we don't, then we leave the security aspect to the developer or whoever is building a product, and that's not safe.  It means we will find ourselves again in a place where we are complaining that some of these things have not been taken care of.  Thank you. 
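The "baseline minimum security requirements" idea raised in this exchange can be made concrete with a toy conformance check.  The profile fields below are hypothetical; the provisions loosely echo published consumer‑IoT baselines (for example, no universal default passwords and updatable firmware), but the check itself is only a sketch:

    # Toy baseline-conformance check for a consumer IoT device profile.
    # Fields are hypothetical; provisions echo common consumer-IoT baselines.

    BASELINE = {
        "unique_password_per_device": True,  # no universal default passwords
        "firmware_updatable": True,          # security patches possible
        "telnet_disabled": True,             # no legacy unauthenticated access
    }

    def check_baseline(device_profile):
        """Return the baseline requirements the device fails to meet."""
        return [req for req, required in BASELINE.items()
                if required and not device_profile.get(req, False)]

    profile = {"unique_password_per_device": False,
               "firmware_updatable": True,
               "telnet_disabled": True}

    print(check_baseline(profile))  # ['unique_password_per_device']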

>> JULIANA NOVAES: Thank you for your comment.  I completely agree, stepping out of my position as moderator again.  Well, we need to wrap up, so I'll give 40 seconds to each one of you for final remarks.  And thank you already, in advance, for attending this session.  Nico, please stop sharing your screen. 

>> IHITA GANGAVARAPU: If I could just ‑‑

>> NICOLAS FIUMARELLI: Please. 

>> JULIANA NOVAES: Yeah.  Go ahead, Ihita. 

>> IHITA GANGAVARAPU: This was in regard to the second question.  I think we have already answered it, since we were talking about pros and cons.  So, yeah.  Nicolas, please, go ahead. 

>> JULIANA NOVAES: Yeah. 

>> NICOLAS FIUMARELLI: Okay.  I just want to say that this was very interesting, because our session dealt with artificial intelligence and IoT at the same time, and I really hope you were not overwhelmed by that.  Our main idea was to discuss these emerging technologies from the youth perspective, right? 

And yes, I think that, for AI, we need to go toward global standards, and people are the ones who create these standards.  So, that's for AI.  And for IoT, I think we have several examples of different protocols that have been developed by different standardization bodies, like the IETF, for example.  For the problem that you mention, there are some protocols, like Software Updates for the Internet of Things (SUIT) and trusted‑execution protocols, that sit closer to the manufacturer: how to maintain the device and keep the environment secure.  So, for example, when the sensor starts, if it is not updated with the latest security patches from the manufacturer, then the only thing the device can do is update itself.  So, just responding that there do exist protocols being developed for these concerns. 
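The update‑before‑operate behavior Nicolas describes can be sketched as a boot‑time gate: a device that is behind on security updates is quarantined into doing nothing except updating.  The version strings and update function here are hypothetical placeholders, not any specific protocol:

    # Sketch of update-before-operate: an out-of-date device may only update.

    def boot(installed_version, latest_version, update_fn):
        if installed_version != latest_version:
            # Quarantine mode: the only permitted action is updating.
            print("firmware out of date; entering update-only mode")
            update_fn()
            return "updated-and-rebooting"
        return "normal-operation"

    def fake_update():
        print("downloading and verifying a signed image...")

    print(boot("1.0.3", "1.0.4", fake_update))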

>> OARABILE MUDONGO: And in closing, I will just say: we can't talk about the stability and security of emerging technologies without talking about policymakers and the inclusion of other, different stakeholders, because that is what brings the element of inclusivity into policy‑making processes.  I'll just emphasize that. 

>> JULIANA NOVAES: Thank you.  Yeah, Ihita, you can go first. 

>> IHITA GANGAVARAPU: So, as my closing remarks, I would like to say that, since we are all sitting at the Internet Governance Forum, which is a multi‑stakeholder platform, all of us in this session specifically are encouraging young people to contribute to the domains of artificial intelligence and IoT.  So, when we're talking about standards, I would personally ‑‑ and I'm sure the panelists would agree ‑‑ want to encourage young people, as one of the key stakeholders, to contribute to standardization mechanisms around the security of IoT, and also of artificial intelligence.  And it's important, you know ‑‑ there are lots of these organizations: for example, ones within the European Union, the one, you know, in India ‑‑ we also have a few ‑‑ and then we have (?) in the United States.  All of these organizations open up for inputs from various stakeholders.  Please make sure that you contribute to this process, to eventually create robust, inclusive, and great standards for the emerging technologies. 

>> JULIANA NOVAES: Thank you. 

>> SAVYO VINICIUS de MORAIS: Thank you.  And I will be really fast, because we are out of time.  On this topic of standardization, please join a session tomorrow where I am a speaker: I'm going to talk about a security protocol I am proposing at the IGF for the security management of IoT in home networks.  And regarding AI in IoT, I think we really, really need to keep talking about the junction of these two topics as well.  So, that's it. 

>> JULIANA NOVAES: Thank you very much for the closing remarks and also thank you for attending and actively participating in our session.  Have a great evening. 

 

(Session concluded at 1830 CET)