IGF 2019 – Day 3 – Estrel Saal C – BPF Internet of Things (IoT), Big Data and Artificial Intelligence (AI)

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> THE MODERATOR:  Ladies and gentlemen, good morning and welcome to this session of the best practice forum on IoT, big data, and artificial intelligence.

I am a BPF co-facilitator, together with Alex and Michael, and we also have our BPF consultant with us.

I want to just share some information about the structure of the session.

We have a welcome section, and then we will have three more sections: one where we will discuss some opportunities where data and artificial intelligence can address challenges, and then another section on policy challenges that will be moderated by Alex.

I now give the floor to our BPF consultant.  He will give us more information about the best practice forum.

>> ALEX COMNINOS:  Good morning.  Thank you for the introduction, and hi all.

I am the consultant for this best practice forum.  I will give a very brief introduction, and I would like to focus on the simple question of what makes this session different from the other sessions on the agenda this week, and why it is called a best practice forum.

It is one of four best practice forums which are part of a kind of intersessional program by the IGF, organized by the MAG.  What does this mean?  It means that the MAG, earlier this year, probably February/March, picked out three or four themes and said that it seems there's really a lot going on, a lot of discussion, and a lot of best practices happening in different fields with different stakeholders, so it would be good to focus on what's going on, what the challenges are, and what is happening in different fields around best practices.

So that's also why there is a MAG member who is leading the team, supported by co-coordinators.  One is Alex and the other one is Mike Nelson.  He could not be here today, but he is following remotely, and I'm sure that later on you will hear his voice.

Specific to the best practice forums is also that they start working during the year, trying to define the issues that should be looked at, and then produce a rough document.  You might or might not have seen it, but there is a draft report online that focused on challenges, opportunities, and best practices.

And this discussion today should be seen as part of the process of bringing people together to focus on best practices and on exchanging experiences, rather than as just another panel discussion; it is about informing and getting more feedback.

So it's very important: examples and remarks that are given today will go into the report, and we will also do our best to let them feed into a final output document that will be published a couple of weeks after the IGF.

So that's also, I think, why you will see this today, and we see this very much as a round table discussion, and not really as a workshop.

So that also means I'm looking forward to having all the panelists active and involved in the discussion, but, and I think I can speak for the coordinators, we are looking equally or even more forward to the input and the ideas from the audience.

That was the brief introduction, and I give it back.

>> THE MODERATOR:  Thanks.  In just a moment I will moderate the section related to the opportunities.

Okay.  So this session will try to debate how the technologies of IoT, data, and AI can address challenges that otherwise would be more difficult to address.

So we want you to share your views, your best practices, your use cases, on which applications can combine data and artificial intelligence to help solve a problem.  This is the question that came up, and we want you to share and debate your use cases, best practices, and your view on how you think this technology can help address societal challenges.

And we have our speakers here.  Maybe we can start with David to share his views about this opportunity.  You can just introduce yourself and then share your views with the community.  Thanks a lot.

>> DAVID SALOMAO:  Thank you.  Good morning.  I am Salomao, David.  I work for the Communications Regulatory Authority.  We regulate the postal and telecommunications sectors.  It's not common for a regulator to hire researchers from the university, but this is the case now.  Things are changing at a very rapid pace, so we have to not only adapt but also start doing research to be able to regulate the telecommunications and postal sectors, although the postal sector doesn't move at a very rapid pace compared to the telecommunications sector.

We have been doing research in IoT, big data, and especially in AI and machine learning.  One of the biggest problems that we have is natural disasters.  They are very common.  Every single year, at least in the months of February to March, we have one or two cyclones.  Last year, we lost two big cities, including the second largest city, to two cyclones.  And we know that problem is cyclical.  It will always happen.  And we are trying to get ready using different ways and different technologies.

One of them is IoT and big data.  We are trying to combine both of them to be able, first, to get data about everything which is happening around the country, not only through satellites but through IoT, putting drifters on the sea to understand how the currents move and how the weather is behaving.

We're also putting stations around the country to be able to understand the exact temperature in one specific place in real time, because it's really difficult to acquire all of this information, store it in a database, and be able to analyze the different patterns and changes that are currently happening.

So with all of this data, at some point you think that you can analyze it quickly and it will give you actual information about what's going on, but then we realized that we would get lots and lots of data, so we started looking at big data and analyzing this big data.  That's where the challenge comes.

Data that you harvest from IoT, when you store it in a system, is a lot of information, a lot of patterns.  I mean, if you're measuring temperature, pressure, air quality, all of these things at the same time, you get tons and tons of data that you don't know what to do with.  So we are trying to find ways to analyze data from IoT, which has not been easy, because there is no straight pattern for how to do it.
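The analysis problem described here, picking meaningful changes out of a stream of sensor readings, can be illustrated with a toy rolling z-score filter.  This is only a sketch over assumed data (a plain list of temperature values), not the regulator's actual pipeline; window size and threshold are illustrative defaults.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling window.

    `readings` is a list of numeric sensor values (e.g. temperatures);
    returns the indices considered anomalous.  The window size and
    z-score threshold are illustrative defaults, not tuned values.
    """
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # A reading far outside the recent spread is flagged.
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

temps = [24.1, 24.3, 24.0, 24.2, 24.4, 24.1, 31.9, 24.2]
print(flag_anomalies(temps))  # → [6]
```

A real deployment would of course work over many correlated variables at once; this only shows the single-stream case.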

After we have learned from the data, we try to understand what we are going to do with it.  If a disaster happens, what do we do with all the information we have?  It's not only the data from the ground; working in telecommunications, we also have lots and lots of data that comes from people.  I mean, from customers, from subscribers.

Ethics, I will say, which is not good, ethics were put aside during these two cyclones, because when you have tons and tons of data from cell site archives, from people, and you need to understand where they were located before and after the disaster to be able to send disaster relief, we had to leave the ethics aside and just try to pinpoint where each and every device was before the disaster.  So that's how we use big data now.

Although we thought we were only trying to locate somebody because of a disaster, you then realize that the solution you have developed is actually very, very dangerous, because you can understand where somebody was, or where somebody is, at any given time without needing to speak with the mobile operator.  So today we can do that during disasters, but we understood that in the future we need to work on a series of regulations to prevent people from misusing such a tool.
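The capability being described, reconstructing where devices last were from operator records, can be sketched roughly as below.  The record format, cell names, and cutoff logic are all hypothetical illustrations, not the actual system.

```python
from datetime import datetime

def devices_in_area(records, affected_cells, cutoff):
    """Return device IDs whose last sighting before `cutoff` was at a
    cell site in the affected area.

    `records` is a list of (device_id, cell_id, timestamp) tuples, the
    kind of coarse location data a regulator might obtain from operators.
    The field names and structure are illustrative assumptions.
    """
    last_seen = {}
    for device_id, cell_id, ts in records:
        if ts <= cutoff:  # only consider sightings before the disaster
            prev = last_seen.get(device_id)
            if prev is None or ts > prev[1]:
                last_seen[device_id] = (cell_id, ts)
    return sorted(d for d, (cell, _) in last_seen.items()
                  if cell in affected_cells)

records = [
    ("A", "cell-1", datetime(2019, 3, 13, 22, 0)),
    ("A", "cell-9", datetime(2019, 3, 14, 1, 0)),   # after landfall, ignored
    ("B", "cell-1", datetime(2019, 3, 13, 21, 0)),
    ("C", "cell-4", datetime(2019, 3, 13, 20, 0)),
]
cutoff = datetime(2019, 3, 13, 23, 59)
print(devices_in_area(records, {"cell-1"}, cutoff))  # → ['A', 'B']
```

The same few lines make the danger David raises concrete: nothing in the code restricts it to disaster response.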

Thank you.

>> THE MODERATOR:  Before we go to the next speaker, I want to introduce you to our online moderators.

Then I can give the floor to Christine.

So introduce yourself just a little.


>> CHRISTINE:  Hi, everyone.  Good morning.  I'm very pleased to be sharing this floor with all the distinguished experts and also friends and fellow stakeholders who are concerned about this topic.

Firstly, I'm from FIOT Open Lab, and I take care of business development, specifically strategy and also building up an international ecosystem.

So what does this mean for us at FIOT?  We are a nonprofit platform invested in by a local research institute as well as the government.  We are a neutral party, and we try to do our best to help IoT/AI technology development, to promote these technologies in our community, and to set some industry standards to help.

Because what we see is that IoT applications are very diversified and even fragmented today.

So regarding this topic, there are many important things that I think David also mentioned, such as helping with the management of natural disasters, et cetera.

So let me bring some examples that we at FIOT Lab have been involved in, to help rejuvenate local rural villages in Fujian province, which is in the southern part of China.

So I think what we want to achieve with this rejuvenation of rural villages actually addresses probably the first two SDGs: no poverty, no hunger.

This is very difficult, clearly, because it's a global issue.  We need many efforts from multiple stakeholders to make this happen, but at FIOT we believe in taking one step at a time.  Whether it's just promoting a small piece of technological advancement or applying these IoT/AI and big data technologies, we help turn it into a standardized model at a demo site, which we can then implement and roll out across other provinces and cities in China.  So that's what we are doing.

So, something related to what David said just now: Fujian province also experiences hurricane season, with lots of flooding.  And in this one village, which I think is also mentioned in the BPF white paper, we have managed to use IoT sensors to collect data.

So we want to collect the data so that when the water levels are increasing, we can trigger an early alarm to say that perhaps this region is going to experience flooding.

So then, you know, we can trigger the emergency teams to start the evacuation earlier, because a lot of these areas are very rural and very far away.  By the time they get the news, it may be too late.  So we want to have this news early, and we have to use a lot of AI and algorithms to do this early detection.
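A minimal illustration of such an early-warning trigger, with made-up thresholds rather than the lab's real calibration, might look like this: an alert fires on either an absolute water level or a fast rise between readings.

```python
def flood_alert(levels, limit=3.5, rise_per_step=0.4):
    """Return the first index at which an alert should fire, or None.

    `levels` is a sequence of water-level readings in metres from an IoT
    gauge.  An alert fires when the absolute level exceeds `limit` or the
    level rises faster than `rise_per_step` between consecutive readings.
    The thresholds are illustrative, not calibrated values.
    """
    for i, level in enumerate(levels):
        if level > limit:
            return i  # absolute flood level reached
        if i > 0 and level - levels[i - 1] > rise_per_step:
            return i  # water rising dangerously fast
    return None

# A steady river that suddenly starts rising quickly:
print(flood_alert([2.0, 2.1, 2.1, 2.7, 3.2]))  # → 3 (fast rise)
```

The rate-of-change rule is what buys the extra evacuation time the speaker mentions: it fires before the absolute flood level is reached.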

The second model that I want to share with all of us today is a site in the mountains.  It's a very scenic area.  I highly encourage you to visit if you can.

We have a lot of land there that is actually abandoned.  It used to be very popular for planting southern crops, but lately it's abandoned because the younger generation go out from the village into the cities for better employment opportunities.

So how do we help turn this land to future sustainable use, or even help the existing farmers in the place to continue, whether it's to create jobs for them or to help them create revenue and a livelihood in this area?

So we are using very simple IoT sensors to help check and make sure that we can turn this wasteland into fisheries.  Again, this is very special, because in the local Chinese culture people are no longer just hungry; they want to eat fresh fish.  So there's a huge demand for fresh fish, and clearly the population is very big.

So we try to use very simple IoT technology and related IT techniques to help them adopt better farming practices for fisheries.  And we also help them with remote monitoring, so that the fish experts in the cities can inform these farmers what to do, what the best practices for fish farming are, and, if there's any disease outbreak, how to manage it.

So I just share these very simple examples, and I think I leave the floor to other experts.  Thank you.

>> THE MODERATOR:  Thank you.

We give the floor to Raymond Onuoha.

Can you just introduce yourself.

>> RAYMOND ONUOHA:  Good morning, everybody, and thank you for having me.

My name is Raymond Onuoha.  I am a research consultant with Research ICT Africa, a think tank for the global south.  And in the past year, within the regional academic network on IT policy, I have been focusing on developing digital policies with regard to artificial intelligence and emerging technologies across the region, as they gradually begin to take hold across the continent.

And in the last year, I have particularly focused on the issues surrounding data protection and privacy with regard to harmonizing these policies, taking a more critical stance, and living with some of the risks that these technologies pose.

Why the convergence of AI and IoT?  There have been critical intervention points that they have provided in the region.  As we know, the digital revolution these technologies bring has been promoted as arguably the greatest enabler of sustainable development: pulling in data, making real-time information available at critical times.  The development of these technologies can assist in critical areas such as agriculture and the environment, and can also help policy makers to better understand issues, design policies, and find new ways to make progress on the various dimensions.

Not just making progress, but having the data and being able to analyze it, to understand how much progress has been made, in what area, and who has been covered and who has been left out of or included in that development.

Just talking about SDG 3, on good health and wellbeing, I would like to tell a small story of the critical intervention point that the convergence of these technologies displayed within the developing economy space, and that is the Ebola outbreak in West Africa in late 2013.

So I will just tell a short story about that, to depict the picture.

So when this outbreak hit West Africa in late 2013, the world was literally caught unprepared.  The consequence was over 30,000 cases that led to approximately 11,000 deaths and billions of dollars lost across the global ecosystem.

And so information was very critical to the fight, both for the responders, who needed timely data about the disease's spread, and for the communities, who needed help protecting themselves and their loved ones.

But the technical, institutional, and human systems required to regularly gather, transmit, analyze, use, and share data were not sophisticated or robust enough to support the response in a timely manner.

Real-time data, as I said, is very critical in trying to fight some of these challenges as they are happening in society.  There were peaks and valleys in the reported Ebola data that were dramatically different from the actual disease spread, and this raised important questions about why it was so difficult to track the disease.

Digitized data and information did not constitute the norm, especially within the region, and this contributed to the difficulty of the Ebola response.  There were quantitative and qualitative differences in the data and information flows in the response.

A critical intervention that was developed at that point in Liberia was the deployment of a mobile phone based communication system called mHero.  This was launched by the national health ministry there.  It enabled a real-time connection between the central ministry staff and the front-line health workers, with two-way real-time information exchange using basically mobile phones.  As we know, across developing countries the key connectivity platform is the mobile phone.  This helped the workers and interventionists coordinate and get information in real time; the platform allowed them to receive critical information in real time, and it helped to a very great extent in fighting the Ebola outbreak.

So this is a great example of how, even though the convergence of these technologies has not gained so much of a foothold in the developing world, they are already solving challenges, especially with regard to health and wellbeing.  Thank you.

>> THE MODERATOR:  Thank you.  Then I will give the floor to Olivier Bringer.

>> OLIVIER BRINGER:  Thank you.  I'm working with the digital department of the European Commission, called DG Connect, and I'm in charge of internet governance policy and also investment in internet technologies.

So first, before replying to your question, I would like to say that we really support the process of organizing these best practice forums.  The idea of having intersessional work, discussing with the community, and coming to some sort of concrete output is very good for the way internet governance works.

Having said that, on the point of how IoT, data, and AI address challenges, I would like to make a first remark: these three components are really the key pillars of the digital transformation.

If you think about the digital transformation of healthcare, or the digital transformation of the mobility sector, what are the key elements?  It is IoT, sensors, connected objects; it is the data which is shared between those sensors, people, and machines, which are able to process and make sense out of the data.  And increasingly, it will be artificial intelligence.

So we invest of course a lot in these domains, both on what I would call the policy and regulatory side, and we'll certainly come back to, for example, the framework we put in place around data protection, or what we have done around the free flow of data and several other regulatory initiatives.

But we also invest a lot in those technologies under our current research program, and even more under the next budget of the European Union.

So there would be hundreds of examples.  I will take only one, which is called the IoT large-scale pilots, which are pilots, as the name says, where we test these technologies in specific use cases.

So there are a number of use cases we have chosen.  One is, for example, agriculture.  Another one is active and healthy aging, so how to use connectivity to support elderly people staying at home.  Another is smart cities, where of course there are a lot of connected objects which can help make the city more livable for the citizens.

eHealth is also one area, and transport.

And what's interesting with these large-scale pilots is that they allow us to test technologies at different levels of maturity and to further develop the technologies.

But also to link it to the use cases: to how the technology can be used and can provide benefits to people in the different sustainability areas.

And it also allows us to discuss and raise questions about what the framework should be for these technologies to be properly implemented.

So it raises issues about how we manage the data.  I found David's first example very interesting: if there is a disaster, how do we manage personal data?  How do we use location data in exceptional circumstances, and in the future, how do we use it in normal circumstances?  So when you implement the technologies, you ask yourself these questions, and that will feed into the policy making process.

So I think that's why these types of large-scale pilots are interesting.

And then another point which I also find interesting is a new area of reflection, which is called collective intelligence.  With social networks, with the mobile phones that we have in our pockets, we have huge connectivity among people, and people are intelligent.  They have views.  They have access to knowledge, and they are willing to share it.

We have sensors with us in our mobile phones, in our houses, which provide useful information.

So how can we use all this useful information to improve the way, for example, cities are working, to improve public transportation in a city, to improve healthcare?  This is an area we are starting to think about investing in, and we will certainly invest, in our next research and innovation programs, in seeing how we can exploit this collective intelligence to serve sustainability challenges.


>> THE MODERATOR:  Then I will give the floor to Evelyne.

>> EVELYNE TAUCHNITZ:  Okay.  Thank you very much.  And hello to everybody.  Thank you for having me here.

I am a senior researcher at the Institute of Social Ethics in Switzerland, where I am writing about the risks and opportunities of technology for peace and conflict.

So my background is both in peace and conflict research and in ethics and human rights, and I'm also affiliated as a research associate at the Centre for Technology and Global Affairs at the University of Oxford, where I'm coordinating a new program called Global Peace Tech, which is very similar to my own research at the University of Lucerne.

Basically, in a way similar to what all of you said, we are trying to launch different pilot projects examining not only the risks that new technologies pose to peace and war, but also trying to leverage the opportunities.  And these can be very diverse.

So this is a kind of network which assembles different researchers from different fields.  Some of them are addressing more direct causes of war, like direct violence.  Others are more in the socio-economic area, or in the cultural area as well.

Because when we think about peace or violence, peace is not only security; it's a bit broader, also involving socio-economic and development issues and so on.

So, of the examples I was thinking of, one was actually in the field of humanitarian aid and conflict crisis, but you already heard quite a lot in that domain, so I'm going to give two other examples.

When we think about best practices, maybe just as a side comment, I think it's important first to think about what technology does.

And I think there are two big opportunities that technology presents.  One is that it simply makes already existing solutions more efficient and more effective; it reduces friction and transaction costs, so we can do the things we've been doing so far, like crisis response, more efficiently.

On the other hand, however, I think it's also interesting to see that technology really changes the allocation of effective power.  By that, I mean social power, economic power, political power.

So on the one hand this might be a risk, because it concentrates power even more in the hands of a few, but it is also a huge opportunity to use this disruptive force of technology to empower people that have not had much power in the past.

And then I can give the example of an initiative at Oxford which is called the Global Women's Narratives Project.  What is being done there is that the narratives of women living in conflict zones are collected, and that is a lot of data.

I mean, different countries, different narratives.  So what we are thinking right now is to connect that data to the Global Peace Tech project, to make it searchable with AI and actually be able to search for certain patterns in that database.

And I think that's interesting because it gives a voice to women in conflict regions who are often just portrayed as the victims of wars; it makes them dedicated advocates for peace, because they're getting their voices heard.

And it would also be interesting to think about what kind of patterns you could actually search for.

Something I always find sad is that we tend to look at the bad examples, especially when it's about peace and war: what has gone wrong, all the differences of ethnic and religious origins, and why people fight with each other.  But I think it's also important to look for good examples, and then we could search for patterns of what we actually share.  What do these women share across the religious or ethnic or rural or urban or other cleavages that exist in society?

And if you could get the message across that, no matter in which part of the world you live, your life world is quite similar, I think that would be a pretty strong message to build on: creating the right mindset for peace, in a way.
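A very rough sketch of that kind of "what do these narratives share" search, using simple word overlap instead of a real NLP model, and with invented sample data, could look like this:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "and", "of", "to", "in", "we", "our", "is", "for"}

def shared_terms(groups, top=5):
    """Find words that appear in every group of narratives.

    `groups` maps a community label to a list of narrative strings; the
    result is the most frequent words common to all groups.  A real
    system would use embeddings or topic models; this only illustrates
    the idea of searching for what different communities share.
    """
    per_group = []
    freq = Counter()
    for texts in groups.values():
        tokens = [w for t in texts for w in re.findall(r"[a-z']+", t.lower())
                  if w not in STOPWORDS]
        per_group.append(set(tokens))
        freq.update(tokens)
    common = set.intersection(*per_group)
    # Most frequent shared words first; ties broken alphabetically.
    return sorted(common, key=lambda w: (-freq[w], w))[:top]

narratives = {
    "village_a": ["We want safe schools for the children",
                  "The market gives us our livelihood"],
    "village_b": ["Our children walk far to reach the schools",
                  "Peace means a safe market day"],
}
print(shared_terms(narratives))  # → ['children', 'market', 'safe', 'schools']
```

Even this toy version surfaces the point made above: the overlap, not the differences, is what the search returns.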

And then, yeah, that is the basic idea, and I can give one more short idea, which is not done yet but which I think would be a really nice and huge opportunity: research on the Nile and water politics.  I remember this is very contentious between Egypt and Sudan and Ethiopia: who gets how much water, for what purpose.

I think it would be great to look into what AI can do for that, because you have different uses, and it's very contentious who gets how much water, and at the same time it's very dependent on weather conditions and seasons.  If agriculture needs the water, it needs it.  And of course everybody always claims to have the right solution and the priority, so if you had a more neutral system, and if the trust were there that the AI system was working well, I think it would be really a good example of where these new digital technologies can also help to prevent future conflict.

>> THE MODERATOR:  Thank you.

Do you want to introduce yourself first?

>> EMANUELA GIRARDI:  Yes, sure.  Hi, everybody.  Thank you for inviting me.

I am part of the AI expert team of the Ministry of Economic Development in Italy, and we designed the national AI strategy, which will hopefully be released soon.

It's there, but they haven't published it yet.

And I founded an association to bring artificial intelligence to people, because artificial intelligence is really a game changer, like everybody says.  It's very disruptive.  But people really don't know exactly what AI is, even if they are using it every day, from Google Maps to Netflix to Spotify and all the different applications.

We want people to be able to explore the huge benefits of AI and to understand the risks that are embedded inside it.

On best practices, you already mentioned lots of things, so only a couple of interesting things remain for me.  The main opportunities are probably in the health system, as all of you already said, including Raymond.  One thing that is very interesting is the drug discovery process, where AI could bring huge benefits.

Not only because it reduces a lot of the time to market, by analyzing the huge amount of data with AI algorithms and machine learning; it can reduce the time to market of new therapies.

And I think it can reduce costs as well.  And this will probably make new therapies and new medicines available to more people around the world, also in Africa and other countries where it is more difficult and very expensive to access new therapies, as with Ebola: it was very expensive to bring the vaccination to everybody.

So this is one opportunity.  The other one, I think, is that it can help us a lot in increasing accessibility and inclusiveness for people with disabilities.

And this is, I think, very, very important, because I recently read research saying that about 70 percent of people suffer from some sort of disability, which can be temporary: if I break a leg, I have a temporary mobility disability.

In this sense, everybody can experience it, from very severe to temporary or less severe ones.  So I think there are some areas of artificial intelligence that can be very, very useful.  Think about self-driving cars, which will probably come in a while, and which will help people with physical disabilities.

Also, if we think about people with visual or hearing or cognitive impairments or disabilities, and we think about AI technology like voice recognition, for instance, we can help these people.  I think this is what's really important about these kinds of technologies: they can help amplify human capabilities.

And this, to me, is really one of the best things they can do.  If we think about something that can transform the environment for a person who cannot see into a kind of auditory experience, this is really something: you can make somebody listen who cannot hear, or see who cannot see.

And this is something really important.  So I think that the way AI can amplify human capabilities is one of the best practices we can use.

>> THE MODERATOR:  Thank you.

>> BRUNA MARTINS dos SANTOS:  Thank you very much.  I am the advocacy strategist for Coding Rights.

We are a mostly Brazilian organization that has been working on feminism, data protection, and the intersection of those subjects.


Back in Brazil, I'm just going to start by setting the scene, because I think it's important to mention that we don't have a national AI strategy so far, and we just approved our data protection bill.

So we're still in a very exploratory situation.

But not to take the pessimistic approach to those things, I'm going to mention one very good example that we have to share.  It's a project called Serenata de Amor, "love serenade," which was a way for us to have more oversight.  After the approval of our access to information bill, it was a way of showcasing and disclosing some of the suspicious purchases or suspicious things our representatives would do.

In every single case, the AI, called Rosie, checks the expenses of the representatives, and whenever Rosie sees a suspicious expense, she tweets.  So far she has managed to get reimbursements for expenses on alcohol and parties, and there were even cases in which Brazilian representatives were drinking in Las Vegas.
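A toy version of this kind of expense auditing, flagging claims far above the typical claim in the same category, can be sketched as below.  The rule, field names, and data are illustrative stand-ins, not the project's actual models.

```python
from statistics import median

def suspicious_expenses(expenses, factor=3.0):
    """Flag reimbursement claims far above the typical claim in the same
    category.

    `expenses` is a list of (representative, category, amount) tuples;
    the factor-over-median rule is an illustrative stand-in for a real
    anomaly-detection model.
    """
    by_category = {}
    for _, category, amount in expenses:
        by_category.setdefault(category, []).append(amount)
    medians = {c: median(v) for c, v in by_category.items()}
    # Keep only the claims that dwarf the typical claim in their category.
    return [(rep, cat, amt) for rep, cat, amt in expenses
            if amt > factor * medians[cat]]

claims = [
    ("Rep. A", "meal", 30.0),
    ("Rep. B", "meal", 35.0),
    ("Rep. C", "meal", 40.0),
    ("Rep. D", "meal", 450.0),   # hard to justify as a single meal
]
print(suspicious_expenses(claims))  # → [('Rep. D', 'meal', 450.0)]
```

The real project publishes its flags for public scrutiny; the detection step itself can start this simply.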

So this is one very interesting approach so far, and this is one project that I'm really proud of.

I would mention that Rosie was inspired by a case in which a Swedish politician was forced to resign after paying personal expenses with public money.  So that was kind of the inspiration for Rosie.

And then, thinking about a second project, a slightly sadder but also really important initiative was a project called Berning D.  It's a Brazilian startup that, back at the beginning of the year, was trying to cross-reference the cell phone signals of victims to locate them when a disaster struck.

When the disaster happened, they went up to the region, trying to locate the victims and better assess the situation.

And I guess I can stop around here and we can move on.

>> THE MODERATOR:  Then we can go to the next section, which is on policy challenges.  It should be a more delicate discussion.

I will start with whomever you suggest, and then go from there.

We have three policy challenge areas that we have identified in the process.

We just started explaining the policy challenges section.

>> ALEX COMNINOS:  Hi, I'm Alex.  I'm a coordinator for the BPF.  There are a number of policy challenges that arise from AI, IoT, and the big data end of things, and I think the list is increasing as we speak in terms of policy research and policy making, but we've managed to split it up into themes in the best practice forum.  Our first set of policy challenges relates to use and uptake.

So in order for AI to be beneficial for the economy, society, and in our personal lives as computing users, we do have to promote use and uptake of AI, and that involves stimulating it economically and in society as well.  And then the big issue is trust.  Trust needs to be built by the applications and by developers, and society and users also need to trust AI.

So I think there are a few elements to that.  The first is making sure the AI is beneficial.  The second element is obviously an education element: if there are misconceptions about what AI is, or, for example with data protection, if you're not informed about what's happening with your data, you can trust AI, big data, and IoT for no good reason.  But if you don't know why you can trust it, then you can also distrust it for no good reason.

And then the last is data‑related challenges.  IoT creates data.  AI needs big data.  We need datasets to work with, datasets that are useful.  So these questions are about the generation of datasets, the sharing of datasets, and also the custodianship of data.

So, yeah, I could move to the panelists to identify policy challenges and examples from their regions, countries, or personal experience.

>> THE MODERATOR:  We can do the same round.

>>SPEAKER:  Thank you.  I will say that we started drafting the data protection law for the telecommunications sector due to what we have experienced in the past, but we are apart from the government's reach.  It's not the government, but it's a partner of the government.  It's a bit complicated nowadays to be a regulator because you're not only defending the interests of the government, which may change, but you also defend the interests of society at the same time.

So we take a more cooperative approach when we are doing regulations and laws, such that in most of the laws and regulations we are producing we have the participation of civil society, because that's the only way you can have a win‑win situation.

When you approve that document, everyone is well aligned with what you have approved.  Take the data protection law, for example.  I mean, there is a large concern from the public about literacy.  Even in countries with high levels of literacy, people just look at the small print and sign; imagine our context.  People are starting to ask: who will have access to my data?  How are they going to be handling my data?  Is this going to be used against me?  Can I opt out at any point?  I mean, how would this whole process be done?

So to create a law also means that there would be more questions about your data and where you obtained it.

For example, the banking sector wants to have access to the location of your mobile phone.  If you're doing a transaction online from Barbados, for example, they want to know if you are actually in Barbados.  They want to go to the telecommunications infrastructure, know the location or last location of your phone, and decide if the phone is in the country or not.

If the phone and the card are in the same region, they will just approve the transaction.  That's how they do it normally.  But they want to be sure so the customer doesn't complain.

So they want to have access to that information without having a specific regulation for it.  It's to protect the customer, but at the same time the customer doesn't want to give that information: I don't know how you will use it.

So it becomes a roller coaster of discussions about several general and specific aspects of the regulations.

And then we allowed AI to send messages to some specific phone numbers.  We have contacted the users to tell them that they will be addressed by an SMS, created by an AI, telling them the best packages and bonuses they could adopt from each and every operator.

You have to see that 40 percent of the people receiving the SMS, the moment they receive a special promotion, just go straight and buy.  They don't even analyze it.  And we realized that the AI is good at looking at how much you spend at the end of the month on your phone, trying to see what is the best package that you can actually use in the market, and it is very quick at deciding.  And of course we put somebody in to verify before it sends the message, but it brings several changes to the market that perhaps even we, the regulators, need to think about before doing the regulations.

Even if we do understand how the technology works, how the AI works, if we overregulate, the market won't be able to grow.  If we do not regulate, it will become a mess and we will have to chase after it just to pass bills and deal with problems.

So we took a step back and looked at where we want to be in the next ten years when it comes to big data, AI, and IoT.

IoT generates a lot of data, but what people don't understand is that radio communications decide how IoT is going to be in the next 10 or 15 years.

The world cup of radio communications is known as the WRC, the World Radiocommunication Conference.  It's the world cup of IoT.  That's where all the decisions are being taken.

I will give you an example.  There's a frequency band called 868 where LoRa happens to be used.  The regulation says that each device can only transmit one percent of the time, meaning you cannot be transmitting all day.  You have to transmit only one percent of the time.

But for me, I don't have a very big city or a dense city with so many devices, and I cannot track a bus with one percent.  That doesn't make any sense for me.  For my rural areas, where there is nothing else transmitting, why one percent?  I don't need that.  I need more than that.

So we're trying to change that piece of regulation.  We have to discuss all of this with Europe and other African countries.  We cannot just decide on a whim that we don't want one percent; our calculations told us that 17 to 20 percent is really good for our region.  But you cannot just on a whim decide to change these things.  Whatever you change not only changes your market, but also influences how every region operates.

Thank you.

>> THE MODERATOR:  Thank you very much.  I think that's an interesting perspective on the challenges of regulation.  That's a good introduction.

In terms of the three themes, maybe we can go to Christine next and we can discuss use and uptake of IoT, big data, and AI.  What policy challenges are there in terms of use and uptake?  What works?  What needs to be regulated?  What perhaps doesn't?

>>SPEAKER:  First I'd like to mention that I'm not a policy expert.  I'm from a technical background, previously working as an engineer, but now obviously gone over to the dark side of business development.

I think that maybe we should backtrack a little bit to the topic of distrust or fear.  Something which I've read is that usually we fear or distrust something we don't understand.

So with this I think there are many ways that we can try to help with policy challenges.  Firstly, what I've seen because of the nature of my work, working with governments, not only in China but across the world, is that sometimes during implementation they don't understand the end user needs very clearly.

Either that or they don't understand very specific problems related to the technology very deeply.

And so I note with interest that the company I'm with now, FIOT Open Lab, is, if you will, an experiment.  We're an experiment in a new type of business model, a new type of collaboration model.  Specifically, our investors, our shareholders, are the local research institute, the Chinese Academy of Sciences, and also the local government.

So with these two shareholders, we can be a very neutral party.  So when we try to promote neutral standards, when we try to do things with the IoT, AI, and big data community, we make contributions from a very technical and neutral perspective.

So in the past, when we first started our organization, there were many companies who said: oh, this is a good idea.  We like a platform that's promoting technology, specifically one very close to implementation and close to the end user application.  We like this.

But we actually rejected a lot of these companies because we wanted to very specifically maintain our neutral party position.  With this neutral party position, we have moved on to help promote ‑‑ or sometimes pioneer ‑‑ industry standards, more specifically for IoT, and now moving also to AIoT.

So we participate in a lot of industry alliances and also forums like this, and we very closely collect all these ideas.  We also bring them back to our shareholders, the local government and the research institute, to help address and remove the difficulties that the end users actually meet when trying to roll out technology.

And we tell the government: okay, these are the problems that the industry is facing.  What policies can you introduce to help either reduce the technical barrier or to have some, you know, tax policies to promote the growth of these start‑ups or companies?

So this is the work I've done from the FIOT Open Lab side.

And then something which is also a little more personal to my heart is I'm also the adjunct professor for science education at the Fujian Normal University.

I think science and technology does not need to be very complicated.  We should demystify it and explain it very simply to everyone, not just our K‑through‑12 STEAM kids but also, I think, the community at large.

I think when AI was first introduced, it very quickly became the panacea for all the problems of the world, but we know that is not true.

And with AI, a lot of training and machine learning ‑‑ which is really the hard work ‑‑ goes into it before you can get really robust algorithms and models.

But I think a lot of lay people, such as my grandparents, my parents, they don't know that.  And so a lot of people have certain illusions that, oh, AI is this solution for everything.  But it is not.  And I think science outreach to our community can also help to reduce any misconception and promote better understanding, which I think is the first step towards any good policy making, towards, you know, mutual trust.

And that's my sharing.  Thank you.

>> THE MODERATOR:  Thank you very much.  Next I think we'll have Raymond.  And Raymond, if you could also speak to uptake and usage.

>> RAYMOND ONUOHA:  Okay.  With regards to stimulating uptake and usage of IoT, big data, and AI applications, speaking from a developing economy perspective with a focus on Africa, before we begin to latch onto the euphoria of technological innovation and revolution, we need to look at the basic analogue foundations that are critical to serve as the platform that can help the region unleash the potential of these innovative technologies; otherwise we risk amplifying existing divides.

For us, especially in the African region, on the demand side, what's most important is to deal with the challenge of digital literacy, because people cannot use what they don't understand, no matter how beautiful the opportunities look.  If they don't understand it, then they can't use it.  And digital illiteracy rates persist across sub‑Saharan Africa, particularly in the rural areas.

While Africa may seem to have a very young population across the region, in the rural areas you have a larger proportion of older people who are not digital natives and who are less technologically adept than the younger ones.

So digital literacy will play a bigger role when provided in the right language.  We need digital literacy training, which will be critical for establishing an enabling digital environment where there will be inclusive use of these technologies.

On the supply side, what is critical, from my own perspective, is enhanced technology support and infrastructure, especially with regards to access.  Not just access but also affordability, which is a critical complement to enabling beneficial access.

Consider internet adoption across the region.  The region accounts for 40 percent of the global population not covered by a mobile network, according to connectivity reports in 2019.

In areas with connectivity, the cost of equipment and services remains a huge barrier to technology adoption, with prices close to 70 percent of income, in contrast to the mobile affordability threshold of two percent, which is the sustainable level.

But in contrast to that, the region is operating at close to seven percent, which is highly unsustainable.

So there's a need for connectivity infrastructure for the development of the convergence of these technologies, which will be a prerequisite for their development and uptake.

Our research shows African countries have made some improvement in the quantity and quality of telecommunications infrastructure, but Africa still has a lot to do as far as connectivity, to put this infrastructure in place.  Without these foundations, the potential benefits of these technologies will be limited and enjoyed by only a privileged few.

And on the governance side, the most important thing is not just looking at public/private partnerships with regard to investment; I think the most critical thing is to improve ecosystem trust within the technology environment.

Globally, regulators and policy makers are debating what data tech firms should be able to collect and store, the purposes for which they should be able to use that data, the degree of transparency they should provide about what they do with it, the information they should provide to customers, and the risk they run of alienating customers with this data.

So what is critical?  Robust regulatory environments with regard to privacy that individuals can trust and that empower us to use AI solutions that require data to work.  There are developments globally on data protection regulation as governments look at these areas to find ways to curb the misuse of data.

These are critical imperatives if we are to drive the usage, uptake, and convergence of these technologies, especially across the African region.

>> THE MODERATOR:  Thank you.  Thank you very much, Raymond.  We have Mike Nelson, another cofacilitator.  He wants to make a comment.  I think you'll hear him, but you might not see him.  He's looking pretty dapper as a thumbnail.

>>MICHAEL NELSON:  I'm calling in from Washington, DC where it's just before sunrise on Thanksgiving Day.  I couldn't join you because I'm starting a new job on Monday, and I have a lot to take care of before then.

Thank you very much to the other moderators for the work they've done to pull this together.  We've got a great panel, and I won't talk very long.  Just want to build on a few points.

I think both of our previous speakers said something very important which is people fear what they don't understand, and they don't use things that they fear and don't understand.

So I think this report is going to help in many ways to address those fears.

People have heard me talk about artificial intelligence and machine learning know that I like to talk about some of the myths of AI, and I think our report and this session are going to address some of these myths.

The most important myth is that AI is ‑‑ people think AI is magic.  It's not.  And as a matter of fact most of the things that people are calling AI today aren't even really AI.  We have to understand this technology is limited, and it's only as good as the data you feed into it.

You also have to understand it's an evolving technology and we can't ask it to do everything when it's still at an infant stage in some cases.

A second myth is that AI is all about personal data.  We've had a couple of examples today about using AI to forecast disasters or improve use of wireless spectrum, and I think those are good examples where we're not collecting personal data.  It's not just about Facebook and Twitter and the highest level of the stack.

And that's a third myth: that AI really only applies to content and to data collected from individuals at the application layer.

We need to go down the internet stack and think a lot more about what happens at the security layer, at the identity layer, and even at the infrastructure layer, where we're trying to make the network itself work better.

And I think the last myth that I just wanted to touch on is that this isn't just about privacy and ethics.  It's funny how most of the policy discussion is around those two themes, probably more than half of all the meetings I've been to in Washington around artificial intelligence start with that question, how do we protect private data.  It's a very important question.

But as the diagram that Alex showed earlier about the hierarchy of trust in our draft report shows, we need to think about all the different pieces.

If you look at the graph, the first thing we need to talk about is availability of these systems.  If society's going to really rely on them, they have to be available.  And then they have to be reliable.  They have to be consistent and based on good data so that the results don't radically change from day to day and end up misleading policy makers, police, and individual consumers.

There has to be accountability.  There has to be security.  On top of that, we have to deal with the legal issues around everything from liability to antidiscrimination.  On top of that, we have to make sure there's lots of choice because that really will drive the innovation and give consumers choices to make sure they get the service they really want.

And only at the top level of this hierarchy of needs do we end up addressing privacy and other human rights.

So I hope people will read the report and contribute.  We've gotten a lot of good case studies here today, but there's a lot more work to be done and we look forward to engaging with all of you in the room and anybody else online.

If you have comments for me, the best way to reach me is through Twitter.  Just Mike Nelson on Twitter.  And I've been following with interest what's going on in Berlin.  Wish I could be there, but I'm here enjoying turkey, and as I said, getting ready for a new job.

Thank you very much.  Glad to be able to join in.

>> THE MODERATOR:  Thank you very much, Mike.  You're more technical than me, so you can mute yourself.  Perfect.

Next, we're going to move on to trust, which is the point at which Mike ended, and we're going to move on to Olivier from the European Commission to speak to policy issues about trust, big data, and IoT.

>>OLIVIER BRINGER:  Maybe, if you allow me, just a few words on the first point, which was use and uptake.  And I very much agree with the previous speakers.  If we want to stimulate demand, we have to stimulate the supply side.  We need infrastructures in place.  We need to stimulate the development of technologies.  And we need to stimulate the data assets that can be used across sectors, or inside one sector, to address large sustainability issues ‑‑ if you think about the healthcare sector or issues related to climate change.

I very much agree also with what the previous speakers said about accompanying and explaining AI, IoT, and data.

I think there's a lot of work to be done in terms of raising the skills of those who develop the technologies and also raising the skills of those who will use them and be confronted with them.

So digital literacy is important.

And not everything needs to be done by the state.  Initiatives, like AI for people, are certainly useful.  It's quite complicated to explain what AI does.

Another area which is very important, I think, is SMEs.  I was at the Web Summit at the beginning of this month; half of the start‑ups are using or developing AI.  So that sector knows about AI.  There is no problem.  I think the large companies will take time but will use these technologies.

The blind spot, if you want, or the area where we need to put more effort, is SMEs, I think.  They are the ones who need to understand these technologies and see how they can incorporate them into their product development processes.  So there is work to be done.  And that is something we do in my office, called digital innovation hubs: we put together hubs where we bring together academia, large companies, and SMEs, so SMEs learn about the technologies and can implement them.

People will not engage in IoT if they know that they will be locked into a solution.  They need to be sure that once they use a solution, it's interchangeable with a competing product that will reach the market.  The same with data.

It's very important that data are interoperable.  Often data is produced for a specific sector or for human consumption, and it's not obvious that this data is easy to use by artificial intelligence, for example.  So interoperability is quite an interesting question.

Now, to come to your point on trust, I think first all of this needs to be secure, and I think the public at large is well aware that there are security issues linked to IoT, for example.  So there is a real effort to be made there to make sure those connected objects that will surround us and support us are secure.  Think about the connected car.  It has to be very secure, because you will be driving on the highway in a connected vehicle, so it had better be secure.

And what we have done in Europe: we now have a legislative framework on cybersecurity certification.  We believe very much that certification is a very important tool to increase the level of security and also to guarantee that products and services on the market are secure enough.

On AI, trust is very much linked to the fact that we are sure it will be ethically developed.  And that's very much the European line.  We want to develop AI.  We want to invest in AI and make sure it is in line with our values.  Of course it will be in line with our values, because it will have to comply with a number of our regulatory frameworks.

But now we are thinking, with a group of experts, about what are the key ethical aspects of AI that we want to be respected, and there are many of them.

For example, human agency.  It's very important that it's not the machine that decides, I don't know, that you are too old to have access to this insurance scheme.  It's a human being who decides these types of things and can look critically at the results of the machine.

It's very important to have fairness and nondiscrimination inside AI, so you don't arrive at results which you wouldn't like to arrive at.

It's very important that these technologies are secure, I just mentioned it, and that they preserve the privacy.

When it comes to privacy, I think something that's important is that, beyond the rules, it's important to have the technologies that allow people to control their data.  So it's very good to have a right, very good to have a law that spells out and protects your rights, but it's equally important to have the tools to exercise those rights.

And that's what we try to do, modestly, in my team.  We try to develop these privacy‑enhancing technologies.  We try to develop technologies that secure your data and allow you, for example, to manage your identity online.  So a key element of trust, I think, is that the user is certain that he or she is in control of this very pervasive environment we are in already now, and increasingly will be in the future.

>> THE MODERATOR:  Thank you very much.  Next, we have Emanuela, and I believe she has a brief presentation.

>> EMANUELA GIARDI:  Before the presentation, which is actually linked to this point, I just wanted to add something to what Olivier said.  Trustworthy AI is a European concept we are developing, and I think it's probably key to balancing the risks and really increasing the access to data and benefiting from all the positive things that these data technologies are going to bring.

And a way of developing this technology, I think, can be the CLAIRE project.  And it's a very good project.  At the moment, everybody's talking about AI.  With AI, you need massive data, and you need to come together to develop good AI.  You also need to have a common base.

And I think this European, human‑centric approach to AI is probably a good basis to help us develop it, for Europeans and everybody in the world.

So this is why we are part of this initiative.  This is why we want to build this initiative.

As I mentioned, the reason why we are building this network is basically to develop a European AI ecosystem, and I think this is really important because you don't need only the government or the researchers, like Olivier was mentioning.  You need a European ecosystem where everybody gathers together and we can discuss how to develop this new technology, gather the data, and use it under the privacy law, because it's very important to respect privacy.

On one side is increasing the access to data; on the other side is doing it while giving benefits to everybody.

And in this moment, we needed to create a project like CLAIRE.  It's a very interesting project.

One last thing I wanted to say that's not related to CLAIRE, but it's an example of something very interesting.

There's a pilot test ongoing on the concept of trustworthy AI.  Basically, the group of AI experts at the European Commission has defined an assessment list, divided into seven areas, to define what trustworthy AI means: it's legal, ethical, and robust.

And this is important because I think that we need a kind of certification for these intelligent systems, not only at the European level but globally, to make sure that the decisions that are taken, and the impacts on society, are something that is agreed and accepted by everybody, so everybody can understand the way it's developed: it's transparent, it's robust from a technical point of view, and it's safe.

>> THE MODERATOR:  Just an announcement.  We are going to run into lunch to make sure that the audience gets to participate a bit.  I know there's sometimes not much chance for the audience, but that is very important at the IGF.  So if you'd like to stay, we shall.  We're going to move on to the next two speakers, and we have Evelyne next.

>> EVELYNE TAUCHNITZ:  Thank you very much.  I'm going to continue in the same line of values which has been talked about already.

I think if we want trust in and uptake of these new digital technologies ‑‑ trust and uptake are obviously linked, because what we don't trust, we're not going to use.

But I think trust has a lot to do with ownership of the process.  So it means: how do we actually decide what topics are to be addressed?  I'm a political scientist by training, so, in a way, what gets discussed.  What I actually have been missing a bit at this conference are the critical points, and I think it's important to address them, because if we don't, we will have problems later.

And of course everybody's talking about inclusion, but I think it's maybe also not only who's included but also who shapes a debate and who's taking the decision.

I think it needs a bit of reflection, in the sense of: inclusion of the private sector, yes, but do we really want private corporations to decide for what purposes technologies are used?  Would we maybe not want elected representatives of the people, like national parliaments, to decide about these issues?

I think there are a lot of questions of legitimacy there, in a way.

I think it's important to also think about topics that were not discussed, like for what purposes we want to use technology and for what purposes we do not want to use it.

That's a thing that has hardly been discussed.  It could be: what types of uses of technologies would we like to ban?  I'm thinking of autonomous weapons systems, for example, or surveillance systems that allow governments and private corporations to collect our data.

Maybe we do not want that but do not have an opportunity to actively opt out, and that has to do with freedom, self‑determination, and autonomy.

And when it comes to justice, something I was really missing as well is that it's not only about equal access to AI and everything.  It's also about the sharing of the risks.

Everybody's convinced there are going to be great opportunities, and I'm thinking that also, but not everybody will benefit equally, and some may be much more at risk than others.

I'm thinking of developing countries and people in dictatorships who will not really have a choice about how these technologies will affect their lives.  So I think there's a huge need to also address issues that have not been addressed so far.

>> THE MODERATOR:  We'll go to our next speaker, Bruna.

>> BRUNA MARTINS dos SANTOS:  Thank you very much.

I'm going to go through some of our work a little bit.  At Coding Rights, we're starting to develop a feminist framework to assess and design AI initiatives.  So the idea is to acknowledge that such products and technologies should take into consideration all the critical voices and the discriminatory effects that can be caused by them.

To us, generally, a lot of those debates about ethics and AI have been around developing something that supports a single global approach.

So at this first moment, we do think this whole AI and ethics debate is lacking slightly more sensitive regional approaches, ones that take into account different perspectives and local developments as well.

Also, going through those systems ‑‑ and I'm just going to pick up a little on Michael's words before, because he spoke about people fearing what they don't understand and also mentioned the need for trust in those systems.

But being from Brazil and Latin America, my region being one of the main focus markets for the defense industry, I don't really think there is a need or a space for such choice or such a situation.  Brazil right now is starting to implement, or becoming more fond of, facial recognition systems and their application for public safety.

And we have been using drones during carnival, which is a public party, to identify criminals, in a situation that citizens are not informed of.  We don't have any consent ‑‑ there's no space for consent in such situations ‑‑ and there's also no debate whatsoever on the invasiveness of such initiatives, or how we are in fact shrinking the civic space at a public party, in situations in which people are not necessarily acting as suspects.

And I guess maybe to wrap up this short intervention, governments come up to us and say that the future is today.  They eagerly want to adopt technologies without any consideration of critical voices and potential discriminatory effects.

This is what we have been advocating for back in Brazil: the need for more information and a more respectful approach to all of it, one that encompasses the differences around us.

Thank you.