

IGF 2020 - Day 3 - DC Public Collaboration On Multistakeholder Health Data Values

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 




>> MODERATOR: Everybody, a very warm welcome to our session.  We're a new Dynamic Coalition on data-driven health technology, very new, just a few months old.  Our group is excited to host you today and thank you for joining us.

My name is Amali De Silva‑Mitchell and this session, just to give you a quick introduction, there will be two presentations and then there will be a bit of a time for impressions from some members of our Dynamic Coalition and we hope that some invited guests will be able to reach us.  At the moment, due to COVID, we have two of our members either very busy in hospital administration doing that or due to sudden lockdowns in the area there, they cannot reach us through technology to be with us.

We will have to play it by ear and deal with that at that point in time.

Our group focuses on the issues, risks, benefits and opportunities for end users across the full value chain of healthcare and associated activities.  That's the focus of our work.


That will come alive through our presentations and, right at the end, in the discussion with all of you.  Okay.  We have broken this down into a couple of segments of roughly 30 minutes each; that's how we have planned this out.  I just want to say that all of this information, all of these discussions, will feed into a paper, which is going to be our very first paper on technology values for healthcare.  We're looking at services within the healthcare industry, at technologies, the manufacturing of drugs, the whole sphere of healthcare services, products and so forth.

We'll start on this project and then keep updating it each year as we get more information from all of you; that's how we're proposing to move forward with our DC.  Our DC also has two educational components, on blockchain and machine learning; those are two of the four papers and presentations that we have said we will produce for this Dynamic Coalition.  It should be very, very interesting.

We also are going to be having a survey in 2021 and also a poster competition in 2021.  The announcement of those activities will be posted to the announcement areas of the IGF website.  Okay.  Please, I would like to also invite all of you to participate with us in this new Dynamic Coalition and to sign up to our list, the email address is also found under Dynamic Coalitions and we're currently number 7 on that list there, you can find us there, please.

I'm just going to give a little bit about who I am and then we'll start this session.  I am Sri Lankan.  I have worked in the technology industry, software and hardware; I was President of an internet service provider in Canada, a non-profit; a director of the Freedom of Information and Privacy Association of British Columbia; and I have worked in the government sector in hardware and software development.  I am an accountant and an economist, but I also have a background in computer science.  That's a little bit about myself.  I will ask each speaker, before they speak, to tell us where they're from and a little bit about themselves, and then to start the conversation with us, through presentations, impressions and so forth.

Our very first speaker is a founding member of our group here, Galia Kondova.

Galia, you have the floor, please.

>> GALIA KONDOVA: Thank you.  I'm very happy to take part in this exciting work.

I'm Bulgarian and I have an international background, exactly as you do.  I have a Bachelor's in economics, Master's in political science and a PhD in economics and I have had the privilege to live not only in Bulgaria but also in Germany, in the U.S., at the World Bank for eight years, and now I do research at the University here in Switzerland.  It is really exciting to talk about such an important topic and I'm really looking forward to the discussion today.

>> MODERATOR: Thank you.

You are our first speaker, Galia.


>> GALIA KONDOVA: I have prepared some slides.  I would like now to share them with you.

The topic I would like to talk to you about is the application of blockchain in eHealth records.  This is a very important topic because many countries are now looking for solutions that give patients control over their own data, and blockchain seems to offer some acceptable technological solutions for providing the patient with so-called sovereign control over their data, disclosing it only to the extent that is necessary.  I would like first of all to start with a very brief introduction to the most important characteristics of blockchain technology.  What is so different about blockchain compared to other technologies?  First, as you can see here, we talk about decentralization, which means that the patient or the institution does not need a central place to store the data; the storage of the data can take place in a decentralized manner, on a ledger that is managed by, and accessible to, different parties.  This is quite new as a concept compared to data models where the data is centralized in one specific database.

Then we have the key characteristics of blockchain technology: mainly persistency, anonymity and auditability.  Auditability has to do with the fact that all participants in the blockchain have access to the stored data, and the data stored there can be traced back.  Persistency means that the information available on the different ledgers is the same; it cannot be changed by any single actor.
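The persistency and auditability properties described above can be sketched in a few lines of code.  This is my own minimal illustration, not from the presentation: each record is linked to the previous one by a cryptographic hash, so any participant can re-verify the whole chain, and tampering with an earlier entry is immediately detectable.

```python
import hashlib
import json

def block_hash(contents):
    """Deterministic SHA-256 hash over a block's contents."""
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    """Link a new record to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev}
    block["hash"] = block_hash({"record": record, "prev_hash": prev})
    chain.append(block)
    return chain

def verify(chain):
    """Auditability: any participant can re-check every link."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"record": block["record"],
                                        "prev_hash": block["prev_hash"]}):
            return False
    return True

chain = []
append_block(chain, {"patient": "p-001", "entry": "vaccination"})
append_block(chain, {"patient": "p-001", "entry": "lab result"})
assert verify(chain)

# Persistency: tampering with an earlier record breaks verification.
chain[0]["record"]["entry"] = "altered"
assert not verify(chain)
```

A real blockchain adds consensus across many parties on top of this linking; the sketch only shows why stored data can be audited and cannot be silently changed.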

This has to do with the cryptography applied for storing the data on the blockchain.  Here there are big challenges related to the technology and to the regulations that affect the data, in particular data privacy.  In Europe we have the General Data Protection Regulation, and that is why this factor is very important to analyze when we talk about the application of blockchain in eHealth.

In this presentation I have spent some particular time looking into the security aspects of using blockchain for eHealth data, because cybersecurity issues are of particular importance, especially in the last few months when we have read articles on data leakage.  This is a very important aspect of blockchain technology.

Here is an overview of the different possible blockchain applications, and we now have many good use cases that provide examples.  It started with finance, of course, with the cryptocurrency use cases, and then it quickly moved to the Internet of Things, where we have peer-to-peer blockchain trading platforms.  In Europe there is a huge development in the European Blockchain Services Infrastructure, which addresses the demand for efficient cross-border exchange of information and data.  And then we have the healthcare industry, which is exactly what our coalition is looking into: healthcare and eHealth applications.

Let's move slowly to the health sector.  Here I would like to focus on electronic health records.  As I said, there is legislation; I can mention Switzerland, where I live, and its project to introduce an electronic patient record.  The concept of an electronic health record integrates an individual's medical records generated by health service providers, such as a physician, a medical assistant or a pharmacist, with private health records generated by the individual.  So we are talking not only about the private health records, but also about any additional information that could be generated by health service providers.

What is the requirement for electronic health records?  They should allow the sharing of data between authorized providers and the individual, but in a data-protected manner.

Here, because of the sharing of information, blockchain comes into the picture and provides some technological solutions, but as I said, it is a work in progress and many aspects still need to be looked at.  Here in detail are the data types in electronic health records.  When we talk about health data, we talk about demographics, such as the patient's age and where the patient lives; files, photos and other documentation; the progress of the patient's clinical care and history; general information and emergency contacts; home monitoring data; lab results; medications; prescriptions; prevention; and so on.  As you can see, an electronic health record is basically a comprehensive set of all the information associated with the patient and a track record of any treatments, et cetera.
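The record categories just listed can be pictured as one data structure.  The sketch below is purely illustrative; the field names are my own and do not follow any standard EHR schema such as the Swiss electronic patient record.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ElectronicHealthRecord:
    """Rough grouping of the data types mentioned in the talk."""
    demographics: Dict[str, str]             # age, place of residence, ...
    documents: List[str]                     # files, photos, other documentation
    care_history: List[str]                  # progress of clinical care
    emergency_contacts: List[str]
    home_monitoring: List[Dict[str, float]]  # readings from home devices
    lab_results: List[Dict[str, str]]
    medications: List[str]
    prescriptions: List[str]
    prevention: List[str] = field(default_factory=list)

ehr = ElectronicHealthRecord(
    demographics={"age": "54", "residence": "Bern"},
    documents=["photo.jpg"],
    care_history=["2020-11-01: follow-up visit"],
    emergency_contacts=["next of kin"],
    home_monitoring=[{"blood_pressure_systolic": 128.0}],
    lab_results=[{"test": "HbA1c", "value": "6.1%"}],
    medications=["metformin"],
    prescriptions=["metformin 500mg, 2x daily"],
)
assert ehr.demographics["age"] == "54"
```

The point of the sketch is only how comprehensive the record is: everything from demographics to home monitoring travels together, which is why sharing it in a data-protected manner is demanding.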

Now I would like to come to my last slide.  I have prepared a slide that gives an overview of how far blockchain covers the cybersecurity-relevant requirements for electronic health records, if we would like to apply blockchain.  A preliminary analysis has been conducted on what percentage of the cybersecurity-relevant requirements for electronic health records blockchain covers.  The dark blue bars represent the requirements or categories that blockchain accommodates best.  Here we see availability, that is, the accessibility of the data, and also integrity, the fact that the data cannot be amended or altered once on the blockchain.  As you see here, (poor audio quality).

Here you have the cybersecurity aspects, and I should also say that the GDPR requirement for the possibility to delete some of the information completely is currently under discussion.  So I wanted to give an overview: blockchain has potential to cover the cybersecurity requirements for eHealth records; however, some aspects are still open, and I assume the prevalent technologies will need to have a solution before one can think of a serious implementation of the technology.

So, at this stage, I would like to, of course, thank you for your attention and here you see also my contact data and most of all, I look forward to the discussion on these issues.

>> MODERATOR: Thank you very much.  That's very informative.  Interesting to see all of the data lines and the integrity and so forth and, yes, we'll keep the questions right to the very end after all of the speakers have spoken.

Thank you.

Please now we're going to move on to this section called Mosaic.  I'm not sure who is actually with us.  I'm just going to call out names and hope that we have people we were expecting to have.

Okay.  We don't have Yao.  Yao was going to talk about language.  We're particularly interested in the access to eHealth and Yao was going to talk about the importance of translation and so forth so that we can increase access globally.  That was what he was talking about, an important value for us.

You know, hopefully, I'm not sure if he'll join us later, we'll come back to him.

Do we have Lakmini Shah?  We don't.  She was a guest, a councillor from London, in the U.K., and she was going to talk a little bit about smart cities, but she did email me to say that she was not well, so that's probably the reason.  We also had a retired Secretary-General of an international family planning federation, who unfortunately, due to sudden lockdown restrictions, is unable to reach technology to be with us.

We also had Janna Belote, a senior hospital administrator in the U.S., who was going to be speaking, but she told me a few minutes ago that she has to rush off to deal with some COVID-related issues she is covering.

I'm beginning to think ‑‑ Herman Ramos, are you available, please?

>> HERMAN RAMOS: Hello. 

>> MODERATOR:  Herman Ramos, if you would take the floor, that would be great.  You have plenty of time because so many speakers are delayed due to COVID and related issues.

>> HERMAN RAMOS: I'll just start to share my screen.  I have a background in physics and I have been working with technology globally and most recently here in Mozambique and I'm one of the 2020 Internet Society Youth Ambassadors.  I'll just start my presentation.

The development of the internet and information technology has been contributing to the digitalization of many activities and sectors, and health is no exception.  Advances in portable technology and digital computing have improved the speed and efficiency with which data can be processed and exchanged.  Due to the unprecedented spread of mobile technology, as well as advances in its applications, tens of millions of people now use mobile devices to communicate and exchange data.  Mobile technology has been used to address a wide range of challenges in areas such as disease surveillance, enhanced health systems and health education.  Basically, we define mobile health as the use of mobile computing, medical sensors and communication technologies for health.

We have noticed an increase in mobile application solutions because they can address a high number of different health issues.  Mobile health also offers remote individuals the ability to participate in healthcare: individuals can stay in their communities, they don't need to go to healthcare facilities, and at home they can have early diagnosis and can monitor diseases.

Basically, the most common application of mobile healthcare is using mobile phones and communication devices to educate consumers about preventive healthcare services.  However, there is a wide range of applications.  As you see here, if you integrate diagnosis and monitoring systems, this can offer control measures to reduce the speed of any infection, whether we're talking about disease outbreaks or otherwise.

Basically, this offers two types of benefits.  First, increasing access to healthcare outside of healthcare settings; this is what I was talking about earlier: you don't need to go to a formal healthcare facility, you can do self-testing at home.  Second, after taking the test, you can have the diagnostic results, and that is important for you and for the healthcare professionals, who can then deliver a rapid, appropriate clinical and public health response to the issue.  In this way, we can see that the diagnosis and monitoring of disease are clearly linked to clinical care.

Here we have a table that basically covers all of the tests and diagnoses; it offers the opportunity for early detection and automated result capture and analysis.

For treatment and patient management, there is a linkage between the patient's results and clinical care, because the care has a connection with the mobile device.  We also have disease control and elimination: after we have the diagnostic results, we can apply epidemic control, and in this way public health authorities can implement control strategies to monitor and control diseases.

The application of mobile healthcare is not only for developed countries; it is also applicable in developing countries.  We know that the infrastructure in developing countries is much smaller than in developed countries and access to medication is limited, especially in rural and remote areas.  There, mobile healthcare can provide cheaper care and enable efficient early diagnosis of the most common diseases, including early warning of epidemics.

As with many internet-related areas, mobile healthcare also has some challenges.  One of the challenges is regulation.  To address this challenge, it is important to develop a regulatory framework that keeps up with the medical apps that are emerging; we can see that regulators are not keeping pace with technological innovation.  There is also cost and clinical effectiveness: in order to guarantee the best integration of this kind of system, evidence is required that it is clinically effective and results in cost effectiveness.

An important challenge is the digital divide, because we know that in many countries many people don't have access to the internet, and without internet access there is no possibility of enjoying mobile healthcare.  In this way, it is important to ensure that no one is left behind.

In conclusion: investigations in modern physics sparked the technological development of health systems, with the design of various equipment and diagnostic techniques.  Now it is time for the internet and information and communication technologies to create a digital revolution in the health system.

Thank you.

>> MODERATOR: Thank you very much.

Your contact information is there for people to contact you.

>> HERMAN RAMOS: Yes.  I'm open to any interaction and engagement.

>> MODERATOR: Excellent.

>> HERMAN RAMOS: Also we have our mailing list at the Dynamic Coalition.

>> MODERATOR: Yes.  Thank you.  Thank you for mentioning that as well.

That was a very interesting presentation and again, audience, we'll keep our questions right to the very end.

I am just going to ask the Mosaic speakers, do we have any others, please, with us?  Is there anyone of the Mosaic speakers here with us, please.

Okay.  I don't think anyone made it.  We have a number of medical people with us, they are obviously very busy with COVID and so forth.

Would you be able to bring your presentation forward, please?

>> JORN ERBGUTH: Sure.  Just let me start it and share screen.

Please allow me to take a little bit of a wider angle.  I would like to take a look at AI: while it is a promising technology, it is often misunderstood, and I would like to focus a little on how it works.

I'm Jorn Erbguth, head of technology at Geneva Macro Lab; my background is in law and technology.  I'm not a health expert.  I teach at the University of St. Gallen, the University of Lucerne and the Geneva School of Diplomacy.  I would like to talk about how AI works, why AI is currently overestimated, and why a better understanding is needed.

Let's start with some myths about AI.  You remember Brexit?  I'm not happy at all about Brexit happening, but I don't think it was caused by any kind of bots or by Cambridge Analytica.  There has been an inquiry, and the Information Commissioner said they found no evidence.  Of course, Cambridge Analytica broke privacy law, it was not a good company, but it also lied about its capabilities; it did not have the power to influence public opinion in that way.

We must not be too afraid of technology we don't understand; often its powers and risks are overestimated.  Another example: maybe you heard about Twitter social bots that spread fake information about COVID-19 and about other topics as well.  The good message is that they don't exist.  The bad message is that it is people.  There are some publications that are cited, some papers that cite those claims, but there has not been evidence of a single social bot.  There are bots that automatically post content, for example from publishers, and there are many bots posting all the time; these exist.  But a social bot that behaves like a human, that cannot be distinguished from a human and is actively pushing a specific agenda, has not been identified at all.

People claiming that 50% of the accounts involved in a certain discussion are bots should be able to identify at least a single one.  There is a researcher who asks the authors of every publication about social bots to name the accounts; he tracks and checks them, and he has not found any of those accounts that were not human.  The good news is that there are no social bots; the bad message is that it is humans who posted those things.

Another example of how we treat AI is the cropping of images by Twitter.  When you post a tweet with an image, you can add any size of image, and if the aspect ratio does not fit, Twitter crops it, meaning it cuts the image and selects the most probably interesting part.  The most interesting part of an image is usually a face, so it will select a face, which works pretty well.  The trouble starts when there are two faces on opposite ends of a picture.  The system then has to select which face to take.  Sometimes it is an easy decision: if you have a big face in the foreground and a small one in the background, it is obvious.  If they are pretty similar, of course we may know that one person is more important than the other, but the system doesn't know that, and it selects the same face regardless of where it is in the picture.

This is not proof that there is some general preference for skin color.  Actually, there has been a test with 90 pictures, which is not too big but reasonably large, which showed that there was not a specific preference for skin color.

Of course, those systems don't behave according to rules, so we don't trust them and we expect the worst.  This is symptomatic of how we deal with this type of technology, because it happens even in a context where it doesn't really matter: nothing of the image is lost, it is only a preview.  Imagine when these types of things happen in the use of AI in health, and some group of people is not treated equally; then we have a scandal.  And it will happen, because this technology does not go along with rules, and we have to either accept that or use a different technology.  There is no way to stop AI from doing this; it is the basic essence of deep learning.

Let's see what AI is.  When we look at computers playing chess, they can outperform even world champions, the world chess masters.  Is a computer playing chess AI?  It can calculate and outperform any human, so it obviously outperforms humans in a task that animals cannot do.  Is it AI?  No, we don't call it AI, because what it does is considered too trivial.

It is as if AI is only the cutting-edge technology: once the technology becomes commonplace, it is not AI any more.

The definition is not a scientific one; it is more like a common label for whatever technology is new.

One definition: AI is when a human cannot distinguish a computer from another human.  Okay.

Sounds reasonable.

We have this in a slightly different form: when a computer wants to know whether you are a human or a robot, it asks you to fill in a CAPTCHA, where you have some distorted writing that you need to decipher.  Computers are getting better than humans at deciphering it.  So, a little bit of an issue: are computers now capable of AI?  Well, actually Google replaced the CAPTCHAs with software that spies on us to determine whether we are just a robot scraping content or really human.

How does deep learning work?  In my view, deep learning works like a dog.  You teach a dog: if it behaves well, it gets a positive reward; if not, it doesn't.

You present a lot of examples again and again and again, and that's exactly what you do with deep learning.  Deep learning uses neural nets that are modeled loosely on the brain, so it is not a surprise that the behavior is a bit similar.  You have to take this into account, because deep learning is really different from other types of computing.  You cannot expect that it behaves according to rules, that it is programmed and that people know what they have created; no, that's not the way deep learning works.

How does it work?  First you get a neural net, and the initial state of the neural net is random.  It needs to be random; basically it is the random ingredient in deep learning.  That's the reason why deep learning is not statistics.  It uses a lot of data, it is similar to statistics, but it is not statistics.  If you do a statistical analysis on data and you do it twice, three times, four times, five times, you always arrive at the same results.  With deep learning, you always arrive at different systems.
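The point about randomness can be shown in a toy experiment of my own (not from the talk): a statistic computed twice on the same data is identical, but two training runs started from different random initial states produce different trained systems, even on identical data.

```python
import numpy as np

# Tiny dataset (XOR) and a one-hidden-layer net trained by gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(seed, steps=2000, lr=1.0):
    """Fit the net starting from a seed-dependent random initial state."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(2, 4))   # the random ingredient
    W2 = rng.normal(size=(4, 1))
    for _ in range(steps):
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1 - out)   # squared-error gradient
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h
    return W1, W2

# "Statistics": deterministic, the same answer every run.
mean_a, mean_b = np.mean(y), np.mean(y)
assert mean_a == mean_b

# Deep learning: two runs, two different trained systems.
W1a, _ = train(seed=1)
W1b, _ = train(seed=2)
assert not np.allclose(W1a, W1b)
```

The two networks may fit the same examples, yet their internal weights differ, which is exactly why repeating a deep-learning experiment gives a different system each time.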

Besides the neural net, you need training data and a training algorithm.  The training algorithm basically tunes the neural net so that it fits the training data.  How does that work?

I have just put three examples here; usually you have thousands or millions of examples.  Of course, in the random initial state of the system, the output is quite different from the desired one.  The differences are calculated, the system is tuned so it comes closer to the examples, and you do it again and again until, hopefully, the system finally gives the right answers to those examples.  But this is not really what you want to achieve, because you have already stored your examples.  What you want to achieve is generalization: you have examples like these that the system has not seen before, and you check whether the generalization is good enough on those, the test data and validation data.

You split the training data into training, test, validation data.
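The split just mentioned can be sketched in a few lines.  The 70/15/15 proportions below are a common convention, not something the speaker specified; the labelled examples are placeholders.

```python
import random

# Hypothetical labelled examples (the index stands in for an image/record).
examples = [(i, i % 2) for i in range(1000)]

random.seed(0)
random.shuffle(examples)  # shuffle before splitting to avoid ordering bias

# ~70% training, 15% validation, 15% held-out test.
n = len(examples)
train_set = examples[: int(0.70 * n)]
val_set = examples[int(0.70 * n): int(0.85 * n)]
test_set = examples[int(0.85 * n):]

assert len(train_set) + len(val_set) + len(test_set) == n
```

Only the training portion tunes the net; validation guides choices during training, and the test set estimates how well the system generalizes to examples it has never seen.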

To summarize deep learning: the training is based on data and randomness.

The system is based on generalization, which is the same as stereotyping.  It does not follow any rules, and it does not obey any rules either; it is imitating intelligence.  You always have to be aware that for new inputs the system hasn't seen before, the result is unpredictable.  Of course, systems usually behave well, and there is a high probability, I would say, that they will do it correctly, but on new inputs they might behave quite differently and produce unexpected results.

What can these kinds of systems achieve?  Here we have a system that supposedly can distinguish criminals from non-criminals.  Wow.  Well, when you look closer, you can do so too.  You will see that the people with the white collar and the nice CV photo are not the criminals, and the people who don't look so happy have just been arrested, and photos taken at an arrest look quite different from the photos on a CV.  The system can distinguish an arrest photo from a CV photo, but it cannot distinguish whether somebody is a criminal or not.  This is basically the issue: the system is not using the right criteria.  If the criteria matter to us, we should use a rule-based system that obeys and plays by those rules.

It can be hacked.  Here is a stop sign and a Google car.  You see the normal stop sign on the right-hand side; on the left-hand side, the stop sign has some patches on it.  Those patches are really big.  The system does not learn that a stop sign is this shape with STOP in white letters; the system just picks some points where the stop sign differs from, for example, a speed limit sign.

If you analyze the system and find out which spots those are, you just have to put patches on those spots and you get a different result.

It gets even worse.  Here is a picture of a lifeboat that is recognized correctly; if you add a little bit of noise to it, it becomes a Scotch terrier with high probability.

There is research showing that changing even one pixel of an image can be sufficient to get a substantially different result.  With this kind of hacking, you don't need to invade the system; you just have to alter the input a little bit, and you get a completely different result without being detected, because the system does what it does but produces completely different results.
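The mechanism behind these attacks can be illustrated with my own toy example (not from the talk): a linear classifier whose input sits near the decision boundary can be flipped by a tiny targeted perturbation, just as the patches and noise do for deep nets.  All numbers and labels here are made up.

```python
import numpy as np

# Toy linear classifier: score = w . x, label = sign of the score.
w = np.array([0.5, -1.2, 0.8, 0.3])
x = np.array([1.0, 0.9, 0.4, 1.0])

def label(v):
    return "stop" if w @ v > 0 else "speed limit"

assert label(x) == "stop"

# Adversarial step: move the input against the score gradient, just far
# enough to cross the decision boundary.
score = w @ x
eps = (abs(score) / (w @ w)) * 1.01
x_adv = x - eps * w * np.sign(score)

assert np.max(np.abs(x_adv - x)) < 0.05   # the change is tiny...
assert label(x_adv) == "speed limit"      # ...but the label flips
```

For deep networks the principle is the same, only the gradient is computed through the whole net (as in the fast gradient sign method); the attacker exploits exactly those incidental points the system picked instead of rules.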

There is discrimination, of course.  That's often in the headlines and sometimes it is due to gaps in training data.

If there is a case that was not covered in training, there is a high probability that the system will not behave correctly.

If you only have white faces in the training data, then the system might not correctly recognize people of color.  It is important to have a well-balanced training set, and even more than well-balanced, because minorities need to have an equal stake here.  If you have a minority that is 1% or 0.1% of the population, and the system should work for them as well, it is not sufficient to have just one example out of 1,000 from this minority.
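One common partial remedy for this imbalance is to oversample the under-represented group until the groups are the same size.  The sketch below is my own illustration with made-up group labels; real pipelines also reweight the loss or collect more data, and as noted next, balancing alone is not enough.

```python
from collections import Counter

# Hypothetical training set where a minority group is only 1% of the data,
# far too rare for the system to learn it well.
data = [("majority", i) for i in range(990)] + [("minority", i) for i in range(10)]

counts = Counter(group for group, _ in data)
target = max(counts.values())  # grow every group to the largest group's size

balanced = []
for group in counts:
    members = [d for d in data if d[0] == group]
    # cycle through the group's examples until it reaches the target size
    balanced.extend(members[i % len(members)] for i in range(target))

assert Counter(g for g, _ in balanced) == {"majority": 990, "minority": 990}
```

Note that the minority's ten examples are simply repeated 99 times each; the group is now equally weighted, but no new information about it has been added, which is one reason balancing cannot remove bias on its own.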

You have to balance the training data, but unfortunately this is not enough.  Unbalanced, biased training data is of course one of the main reasons for bias, but even if you have balanced training data and no gaps, the selection of the criteria used to recognize a person or a situation is based on randomness and on machine-generated stereotypes.  And even basing decisions on statistics is bad, because it means you base the decision about one person on the prior behavior of people of his or her group; deep learning is related but, as we said, it is not statistics and involves randomness as well.

To summarize the pros and cons of deep learning: the pro is that you don't need rules.  If you don't have rules, you can still use it; there are some fields where you don't have rules, and in those areas it can be a good option compared to human judgment.  If you have rules, use a system that can obey your rules.  It also still works reasonably with less-than-perfect data: if you have a little contradictory data, the system can still perform reasonably well.  Of course, performance will deteriorate with bad data in the dataset, and this is often a problem, but it will not break immediately.  If trained well, it can produce good results in certain situations.  On the other hand, maybe you heard about the Tesla that drove into a truck: deep learning can fail in cases where we have no question about what the right decision is.  And since the criteria are not transparent or auditable, you cannot fully rely on it, and it can be easily hacked, as we have seen.

What to do with this?  We see that you cannot trust those systems, so you call an expert group to develop trustworthy AI.  Of course, they cannot change the technology, so they provide advice that may help in some situations, but not really.  A lot of government bodies, oversight committees, the UN, governments and so on; today the German commission just published a report; a lot of commissions produce a lot of papers, but they cannot change the fundamentals.  And there is the question of transparency: people often say, well, this is a black-box system, and if we could look into the black box, it would be fine.  You can be glad that you cannot look into the black box.  What's in there is really bad.  It is not the rules you think should be there; it is quite a lot of silly rules that in combination produce good results.  You have statistically good results, but if rules like that were proposed, you wouldn't accept them at all.

If you have a case where you want to use this kind of system, it should be made completely accessible for tests and analysis.  If you do so, you can find the blind spots, the areas where the system doesn't perform well.  Of course, this will expose the systems, but transparency is always good for security.  If we can expose those issues, we have a chance to fix them; high-level descriptions as a means of transparency are not at all sufficient for dealing with these issues.  Not to stay completely negative: AI can make a difference in health.  For example, cancer detection on radiology images has huge potential, so we can have quite positive effects, especially in those areas where we don't have specific rules for humans, where only experience counts.  Compare how many images a human radiologist can see in a lifetime with how many images can be used to train an AI system.

This number is much higher for the system, so even a much less well-developed system might outperform humans, who only have a much smaller training set.  This is a reason why I think there are applications in public health, but you have to be aware of the limitations, and for certain things you should look for different kinds of systems that are rule-based and not just trained.

To resume: deep learning systems in health can outperform other systems, especially when you look at radiology images, radiographs, but they all produce blind spots.  Even if you have a great system that can diagnose cancer, it will not diagnose cancer with the same quality for all people, so there will be certain groups where the system works less well.  Of course, this is not new: a doctor that has never treated a child might not be able to detect some condition in a child as easily as a pediatrician that does this all the time.  The systems are not neutral, they have blind spots, and they fail in some situations.  At the same time, statistically they perform quite well in a lot of situations.

You have to provide a setting where you can deal with that.  You shouldn't blindly rely on the systems, because then you don't deal with these risks and issues, but of course you can improve health by using those systems in a way where humans retain that kind of responsibility.  You need to expect bias; it cannot be avoided.

People are out there saying bias can be removed.  They can remove certain small bits of bias, but sometimes they impose even more bias by removing it, because an opposite bias is imposed, which will not wipe out the original bias but will basically increase bias overall.  You just don't know against whom the bias will be: it might be some group we look at specifically, it may be a completely different group, but nobody deserves to be disadvantaged by bias.  We need to expect bias, and we need to be able to deal with it.  If we can't, we should not use those systems; we need to have safeguards in place that will be able to deal with those cases, which will sooner or later occur.
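One practical safeguard the point above suggests is to audit a system's performance per group rather than only overall. A minimal sketch, with entirely hypothetical evaluation records: overall accuracy here is a passable 62.5%, yet one group sits at 25%, exactly the kind of blind spot that an aggregate number hides.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label)."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += (truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

def flag_blind_spots(records, threshold=0.8):
    """Return the groups whose accuracy falls below the threshold."""
    acc = per_group_accuracy(records)
    return sorted(g for g, a in acc.items() if a < threshold)

# Hypothetical evaluation records: (group, true diagnosis, model output)
records = [
    ("adult", 1, 1), ("adult", 0, 0), ("adult", 1, 1), ("adult", 0, 0),
    ("child", 1, 0), ("child", 0, 0), ("child", 1, 0), ("child", 0, 1),
]
print(per_group_accuracy(records))   # adult: 1.0, child: 0.25
print(flag_blind_spots(records))     # ['child']
```

This echoes the pediatrician example from the talk: a model evaluated only in aggregate can look excellent while failing a specific group.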

We have another issue, the issue of the training data.  Training data in the health area is usually highly sensitive personal data.  You need it when you first train the system, and you need it again every time you retrain the system, and in a limited way, in some situations, you can even extract part of the training data from the finished system.  This is a special issue.

You can create synthetic training data; that works, it was proven to work, but then you need more data and you reduce the quality of the outcome.  You have to balance whether you really want to do that or not, or you have to see if you can justify this by the positive impact.
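A minimal sketch of the synthetic-data idea, assuming (purely for illustration) that each class can be approximated by independent Gaussians per feature: the real records are summarized into a few statistics, and only freshly sampled records are released. Real health data is far more complex than this, which is exactly where the quality loss mentioned above comes from.

```python
import random
import statistics

def fit_gaussians(data):
    """Per class and per feature, estimate mean and standard deviation."""
    by_class = {}
    for features, label in data:
        by_class.setdefault(label, []).append(features)
    params = {}
    for label, rows in by_class.items():
        cols = list(zip(*rows))
        params[label] = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
    return params

def sample_synthetic(params, n_per_class, seed):
    """Draw synthetic records; no real record is ever released."""
    rng = random.Random(seed)
    out = []
    for label, dims in params.items():
        for _ in range(n_per_class):
            out.append(([rng.gauss(m, s) for m, s in dims], label))
    return out

# Hypothetical "real" data: two feature columns, two diagnosis classes.
rng = random.Random(0)
real = [([rng.gauss(0, 1), rng.gauss(0, 1)], "healthy") for _ in range(200)] \
     + [([rng.gauss(4, 1), rng.gauss(4, 1)], "sick") for _ in range(200)]
params = fit_gaussians(real)
synthetic = sample_synthetic(params, 200, seed=1)
print(len(synthetic), sorted({label for _, label in synthetic}))
```

The synthetic sample preserves the broad class statistics, but any structure not captured by the fitted model is lost, so a system trained on it will generally be somewhat worse than one trained on the real records.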

To be clear, documentation does not save us.  The side effects, some of them can be mitigated by some measures, but you will not remove them completely; you have to deal with them.  If you document every step of the development phase, then of course you can maybe review some sources, but you will still not have a perfect system, because the system is based on statistics, it is not based on rules.

Thank you.

I'm looking forward to our discussion.

>> MODERATOR: Thank you very much for an interesting presentation there.  I thank all three speakers; you had very different viewpoints on eHealth, very interesting, very new material.  I hope our audience really got engaged there.

We probably have maybe 20 minutes, I think.  We don't have too many members in our audience, so I'm just wondering: technical host, can you advise us how we can deal with questions, please?

>> JORN ERBGUTH: People can use the Q&A.  You see the Q&A button below your screen.  When somebody puts a question there, we can look at it and answer it.

>> MODERATOR: Audience, please feel welcome to put your questions in there.  If you want to ask a specific question of Galia, Herman, or Jorn, please state their name at the beginning of the question; otherwise, it will go to any panelist.

I'll start us off with something for all three of the panelists.  I just want to ask about the role of judgment.  It is an important quality, especially when dealing with new systems; human judgment is critical in making a very good decision.  I'll put it to all three of the speakers, since each of you spoke on three very different areas, blockchain, mobile tech, and artificial intelligence and machine learning: where do you think we can quickly see the technology catching up to the human, please?

>> JORN ERBGUTH: I think it is always a human using the technology; it is not the technology running against the human.  Of course, humans can use technology to replace other humans, and they have always done that, for decades, centuries; this will continue to be the case.

>> MODERATOR: I have an accounting background, and I see repetitive tasks being done quickly, with the higher decision-level tasks being retained for a long time.  However, we have something in accounting called dashboards, and this is something used very much in healthcare as well; I'm not a healthcare accountant, and we are missing our founder member in that field with us today, but I'm beginning to think that there will be a lot of these dashboards being used.  We know that's a concern to patients.  We have trojan data pockets that suddenly come up and cause a problem, and if the tasks have now been given to staff who don't have very sophisticated medical experience behind them, there is a risk there for the patient.  That has always been something of a concern from a patient perspective, and something we could apply in terms of blockchain and privacy: these hidden pockets of data that you have all mentioned.

I don't really see other questions, so I would like to ask Herman, and I know this is really a sudden question, Herman: how do we manage the linguistic delivery through mobile tech, please?

>> HERMAN RAMOS: I think the technology creators tend to focus mostly on the development of the technology and don't see the problems of linguistics.  If we have a project, in this case a technology that we want to implement, and if states want to implement these kinds of technologies and projects, they must engage with others.  I believe that's where the multistakeholder model is important, because we have the engagement of many other organizations and stakeholders: Civil Society, the private sector, academia, the technical community.  I believe that if these kinds of organizations and stakeholders work together, it is possible to overcome any kind of issue.

>> MODERATOR: It is important to have the multistakeholder approach.  That's very much what our Dynamic Coalition is trying to achieve: to bring together as many stakeholders as possible so that we discuss what values we have in common that establish trust.  We have a professor, and I'm not up to date exactly on his situation right now, but he has written a new book on the speed of trust, talking about coming to a good understanding amongst stakeholders, and how that creation of trust and understanding can speed the creation of any of the business processes, whether it is delivery, embracing blockchain technology, artificial intelligence, or even the paper processes as well.

This is something we're also thinking about.  We're saying, okay, if we can have a set of common understandings, that will launch us toward speeding up the development of healthcare, toward universal healthcare access, so we would also want to work with the community networks and all of the other associated technical and other functions so that we can speed this up globally for everyone.

Galia, you were right at the beginning, and you were probably rushing on all of our behalfs; is there something else that you would perhaps like to add, please?


>> GALIA KONDOVA: One topic that's of quite some interest to me is ethics: ethics in artificial intelligence and in using technology, especially for eHealth.

When I talk about this, I consider some aspects like data privacy as one aspect, and the second is about developing models, especially artificial intelligence models, using patient data and coming up with recommendations and suggestions.  How ethical could such models be?  Jorn talked about deep machine learning models: how trustworthy, how ethical are these suggestions driven by the technology, and to what extent should they be accepted, implemented, followed?  This is a question I have been asking myself for a while, and it bothers me.

>> MODERATOR: Exactly.

I think that whole area runs through all of the stakeholders, and it is an important topic when we have the question of where we have commonalities: medical ethics, the ethics of the administrators, and especially the ethics of the system builders.  Are they similar, are they comparable?  It is a very important area, and hopefully we can develop the study of that further within our Dynamic Coalition.

It is a very good topic.

>> JORN ERBGUTH: Looking at the systems, blockchain and AI, I think they have quite different approaches.  Blockchain is made to be trustworthy.  If you have a system that is most trustworthy, it is blockchain, because it is secure, it is transparent, and you can have more trust because it is distributed and decentralized, and it is almost impossible for a single actor to manipulate it.
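The tamper-evidence property described here can be illustrated with a toy hash chain (a deliberately minimal editor's sketch; real blockchains add consensus, signatures, and distribution on top): each block commits to the previous block's hash, so editing any record invalidates every link after it.

```python
import hashlib

def chain(blocks):
    """Link each block to the previous one via its SHA-256 hash."""
    prev = "0" * 64  # genesis placeholder
    out = []
    for payload in blocks:
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        out.append({"payload": payload, "prev": prev, "hash": digest})
        prev = digest
    return out

def verify(chained):
    """Recompute every link; any edited payload breaks the chain."""
    prev = "0" * 64
    for block in chained:
        expected = hashlib.sha256((prev + block["payload"]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = expected
    return True

# Hypothetical health records used purely as example payloads.
records = chain(["patient A: vaccinated", "patient B: x-ray taken"])
print(verify(records))                            # True
records[0]["payload"] = "patient A: NOT vaccinated"
print(verify(records))                            # False: tampering detected
```

In a real distributed ledger, many independent nodes hold copies of this chain, which is why a single actor cannot quietly rewrite a record.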

AI is completely different; it is not trustworthy.  Of course, if a system works well, if a system saves your life, you don't care whether it is trustworthy.  You use it.

>> MODERATOR: Yes.  Yes.

>> JORN ERBGUTH: And if a system is able to detect a cancer that would not have been detected by a human, of course you trust it.  You don't care whether you understand it.  The reasoning cannot be understood.  Of course, you can understand how it works in general, but this does not give you an answer.  If you go to court, I can explain to you how courts in general work.

This does not explain the decision to you.  You want to have the reasoning, and even if the reasoning is not understandable to you, because you're not a lawyer, you can see a lawyer and ask: is this correct, can you verify it?  This is the reason why we really need transparency.  If you just tell people some wishy-washy explanation that covers the general basics, that is not transparency; it does not explain the decision or how it will work for them.

A lot of people are going for these general explanations and think they provide transparency because the alternative is not possible, but you have to be clear: it is not just that there is some proprietary information there that companies don't want to expose; they don't have it.  If you really dig into it, you will discover that those rules, of course, are there, but they're very complicated and they're pretty stupid, and they're combined in a way that makes sense only statistically.  They have gaps, a lot of gaps, and in those gaps they fail.  You would never build a system like this by hand.  It is not a system that decides on rules; it is like rolling dice, and that's the reason why the systems are not comprehensible.

>> MODERATOR: Yes.  Yes.  Exactly.

Yes.  I'm very glad that transparency is a topic, yes.

Thank you.  Thank you, Jorn.

I want to add that we're also seeing newer technologies coming in.  I came across hologram technologies in brain scans, developing images of patients' brains and so forth, so I think we'll see a lot of development on top of the technologies we are already talking about.  As mentioned by Galia, ethics and privacy are new areas for us, as is dealing with AI, machine learning, blockchain, and mobile technology, and how all of the service providers and the technology deal with this across jurisdictions.  We have a number of issues to talk about.  I'm just wondering, audience, and I have not been following the chat: is there anyone else, please, who would like to ask a question of our speakers?

Someone asked how they can get in contact with us.  Please go to the IGF website, under the section on Dynamic Coalitions; we have our email address there, and you can join our list.

Also, if you want to contact me personally, you can just look at my profile for the IGF meeting, and you will find a LinkedIn; on the LinkedIn, there is an email address.  It is Amali De Silva‑[email protected].

I'm not very familiar with dealing with this, so bear with me here.  I'm trying to pick up any other questions; let me take a quick click here.

Someone asked about heavy files.  Galia, could you respond to that, please?  It says: Galia, about heavy files, can they be accessed in a more efficient way thanks to blockchain?

>> GALIA KONDOVA: Yes.  Thank you.

Thank you for this question.  I have just answered it also by chat.  I mentioned the so-called self-sovereign identity concept, through which you can disclose only specific attributes; in this case health data, large files that include x-rays or other health information you are in possession of.  Self-sovereign identity is a very good concept in which individuals can disclose and share certain attributes or certain information about themselves.
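A minimal sketch of the selective-disclosure idea behind self-sovereign identity, using plain salted hash commitments (an editor's illustration: real SSI stacks use verifiable credentials and richer cryptography, and all field names here are hypothetical): the holder publishes only commitments, then reveals a single attribute plus its salt, and a verifier checks it without seeing the rest of the record. A heavy file such as an x-ray can itself be referenced by its hash.

```python
import hashlib
import secrets

def commit(attributes):
    """Commit to each attribute with a random salt; publish only the hashes."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
                   for k, v in attributes.items()}
    return salts, commitments

def disclose(attributes, salts, key):
    """Reveal a single attribute plus its salt, and nothing else."""
    return key, attributes[key], salts[key]

def check(commitments, key, value, salt):
    """Verifier recomputes the hash against the published commitment."""
    digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
    return commitments[key] == digest

# Hypothetical patient record; the commitments could live on a ledger.
record = {"blood_type": "A+", "xray_file_hash": "abc123", "age": 47}
salts, commitments = commit(record)
key, value, salt = disclose(record, salts, "blood_type")
print(check(commitments, key, value, salt))          # True
print(check(commitments, "age", 48, salts["age"]))   # False: wrong value
```

The salts prevent a verifier from brute-forcing undisclosed attributes from the published hashes, which is what makes revealing one field safe for the rest.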

>> MODERATOR: Please go ahead, yes.

>> GALIA KONDOVA: I also wanted to comment on the remark of Jorn about the trustworthiness of the technology.  I would completely agree that trustworthiness is the key point in using technology.  We now have robots that are conducting surgery, right, and I ask myself if I trust such a robot.  If I go into such a surgery and something goes wrong, which of course nobody hopes for, who bears the responsibility?  I think this is an interesting case where trustworthiness, ethics, and maybe some legal aspects come into the big picture.

>> MODERATOR: Absolutely, a very interesting area.  Especially, I think, with the new technologies, and as Jorn was saying, in the time it takes for a technology to become sophisticated, there is room for error.  Exactly.


Jorn, would you expand further?  There is another question asking about the trust of users.

>> JORN ERBGUTH: Well, I tried to answer it.  Trust does not always depend on transparency; it also depends on experience.

If we have a good experience with the system, we might trust it even if we don't understand why it is doing that.

When you look at self-driving cars, and you look at humans driving: you might have a human that's driving much better than the average person.  This human may still cause an accident, and this person cannot then say, I'm driving much better than the average person and therefore I don't want to be liable for the accident.  Of course the person is liable.

If you develop a self-driving car that's driving better than the average person ‑‑ we're not there yet ‑‑ but if you arrive there, why should there be an exclusion of responsibility just because the self-driving car drives better than the average person?  Humans are not excluded from responsibility.

If you go to hospitals, to surgery, and you have a robot performing operations that has higher success rates than humans, you still have to ask: was this an error, and was it avoidable?  If it was an error, you should look to product liability, and you should not say, well, this is better than average, so you should not be compensated for your loss.  You should apply the same standards.  Even the highly skilled doctor that's better than your average doctor by a factor of 10, whatever: if they commit an error, it is treated the same, and there is no reason to exclude them.  We have to make sure we apply the same standards.  If the manufacturer cannot pay, or the doctor cannot pay, they will require insurance, and the insurance will ‑‑


>> JORN ERBGUTH: And this is how we deal with it with humans, so why should we deal with it differently with computers and the companies behind them?

>> MODERATOR: Very good point.  It is going to be an interesting area for the insurance companies as well, and how they insure all of these technologies.  That's a whole new area too, and I'm sure we need to get some people with an insurance background on board our Dynamic Coalition.

We're coming to the end of our session time.  I really appreciate the audience who joined us in this conversation.

