IGF 2020 - Day 4 - OF30 Human rights and the use of AI in the field of health

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

   >> MARTHA STICKINGS:  Okay.  Good evening, or at least evening where I am here in Vienna, and welcome to the session on Human Rights and the Use of Artificial Intelligence in the Field of Health.  We at the Fundamental Rights Agency are really delighted to be hosting this session, which we have organized together with the Council of Europe.

For those of you who aren't aware, the Fundamental Rights Agency is an EU agency which provides data and expertise to the EU institutions and the EU Member States on fundamental rights matters.  This includes social and legal research on various topics related to fundamental rights, and we also carry out large‑scale surveys on a number of different topics.

The agency started working on Big Data and artificial intelligence in 2017, reflecting the increased attention to this topic amongst policymakers as well as the many fundamental rights issues that are engaged by the use of artificial intelligence.

Since then, we've published a number of papers which deal with related issues, including one on the potential for discrimination when using algorithms, one on data quality in artificial intelligence, and one on facial recognition technology and fundamental rights.

In the course of last year, 2019, we started working on a research project that looks at the fundamental rights implications of using AI, and that's based on interviews with representatives of the public administration and businesses in five selected EU Member States asking them about their experience of the use of AI and related technologies.  We'll be publishing the results of that project on the 14th of December this year, so you've got a little over a month to wait.

One of the areas that we cover in this research is the use of AI in the field of health, and here we've collected evidence about use cases of AI in the EU, including, for example, image‑based diagnostic tools as well as tools that monitor health indicators remotely.  And the intention is that we can use this evidence to support the ongoing policy‑making processes that are already underway.

And, of course, we've seen that this topic of AI and health is becoming ever more relevant at the moment in the context of the pandemic, where technologies are really emerging as a very important tool in helping to control the spread of the virus, so we think that the discussions on this topic are particularly timely.

I'm not going to give away the results of our research quite yet.  For that you have to wait until the 14th of December, but I can highlight a few of the recurring issues that we see coming out from a fundamental rights perspective.

So, for example, we see issues about a lack of clarity about the definition of artificial intelligence.  We see from our research that often this term means different things to different people, and that can pose a particular challenge in the context of any regulation in respect of the principles of legal clarity and foreseeability that will be really important for any regulation that is forthcoming.

We also see some confusion over the application of existing law, for example, standards on data protection and discrimination and how those apply to the use of artificial intelligence.  Partly tied to that are also questions of whether we should take a horizontal or sectoral approach to thinking about regulation.  That's often tied to questions of risk, and it may in fact mean that there are different considerations depending on the particular use within one sector.

Another topic that often comes up concerns data quality, so in terms of what data is being used to train AI systems, where does it come from, how representative of a particular population is that data, how is it being input into the tool.  And, again, these are issues that we can particularly see emerging in the health area.

And, lastly, we also see questions around the effectiveness of tools to mitigate fundamental rights violations and enforce existing rules, and there we see that there is a big issue around the awareness of fundamental rights issues amongst the full range of interested parties whether that be developers, users, or others.

Now, these are issues that policymakers are also very much grappling with as they're exploring possibilities for regulation of AI, and in that context we're really delighted to have excellent speakers to discuss with us this evening about how AI can best be used in the area of health and also how governments can and should respond to some of the fundamental rights challenges that may be associated with the use of AI.

Each of those speakers will give a brief opening intervention, after which there will be time for discussion with the participants.  So please feel free to type your questions into the chat during the session.  It would be great if you could briefly introduce yourself, and perhaps you could also indicate if your question is for a particular speaker or whether it's for the panel as a whole.

We have an online moderator who will be closely following the questions in the chat, so your input won't be missed.

So, I'd like to start by giving the floor to Ritva Halila, Senior Medical Officer in Finland, as well as a Member of the Council of Europe Committee on Bioethics.  Prior to that, she was General Secretary of the National Advisory Board on Health Ethics, and a Research Associate and Head of Department at the University of Helsinki.  Ritva, the floor is yours.

   >> RITVA HALILA:  Thank you very much.  Do you hear me?  All right.  Okay.  Thank you, so just to be sure that my voice is heard.

I'm much more familiar with ethics than human rights, but there are some differences between the two.  Human rights have been written down in international conventions, whereas ethics is always about balancing between good and bad, between benefits and harms.  There are a lot of things, just like AI, that have benefits but also carry risks, and I would like to open up this topic a little bit.

AI can be used, and has been used, in medical fields quite a lot: in medical imaging, and in supporting professionals, especially medical doctors, when they are making diagnoses and planning care with patients.  This is especially useful with new medicines and side effects, where there might be warnings that one medicine is not suitable to use together with another.

So it helps in everyday work, but there are also things that you have to be careful about when using AI.  Health data is always very sensitive data, and in our country, just a few weeks ago, we experienced what happens when a lot of data goes outside, or in practice is stolen, from health care units.  There was a hacker, and hundreds of thousands of people have become very anxious and very, very worried about their future because some of their data has disappeared.

So, if sensitive personal medical data is used in a way that can be identified elsewhere by outsiders, then you have to be sure not to have that kind of accident, because once the data are lost you cannot do anything about it.

And then another thing that we have been worried about, especially in the Bioethics Committee, is that when a lot of data is used together, especially genomic data, how can people keep their privacy and avoid being identified from the amount of data that is combined there?

One question that has been raised very often is: once you have got information from Big Data, how can you un‑know it?  Genomic information in particular contains a lot of things that you can get to know 30 or 50 years before you actually have trouble with your health, and it would not be very beneficial, and not very nice, to know something that lies ahead in your life when you cannot do anything about it.

So, balancing benefits and risks is a very important ethical issue every time we are talking about artificial intelligence.

Justice between individuals is also important.

I think a lot of these things will be discussed in this panel afterwards, so I can come back to them, but in any case, everything that is done in this AI field has to follow the ethical principles that have been laid down in the convention, and that is a kind of tool with which we operate every day.  Thank you very much.

   >> MARTHA STICKINGS:  Thank you very much, Ritva, for highlighting really important issues in terms of some of the existing rules and regulations that apply in this area, but also some of the particular risks in terms of the security of sensitive data, and the sort of challenges that can arise when you have an amount of data which can say a lot about an individual and potentially their future health prospects.

So now I will turn to Andreas.  Andreas Reis is the Co‑Lead of the Health Governance Unit in the Research for Health Department of the Division of the Chief Scientist of the World Health Organization in Geneva.  His main area of work is public health ethics, including ethical aspects of infectious diseases and outbreaks of emerging pathogens, and he also works on the ethics of public health surveillance, health research, and big data and artificial intelligence.  A lot of very, very relevant things for our discussion.  Please, Andreas.

   >> ANDREAS REIS:  First, let me thank you, Martha, your colleagues, and the Council of Europe for inviting me and WHO to this important panel.  I'm really happy that I can be with you tonight, and I want to make a few points before we get to the discussion.  Being at WHO in Geneva, I would really like to focus on the issues of this session with regard to the use of AI for health in lower‑ and middle‑income countries.

So, as I think we all agree, and as Ritva has said, AI holds great promise for transforming health care and public health, and this includes the rich countries but also the LMICs.  But particularly in LMICs, the challenges of adopting AI so that it generates benefits are quite considerable, and they raise concerns that precious resources could actually be diverted from proven but underfunded interventions that would reduce morbidity and mortality.

In addition, as we all know, there is already an existing digital divide, which is the unequal access to digital communications or Internet services, computing equipment, and so forth, and there is a concern that it might actually be widened rather than reduced by AI applications.

Another concern, which I think is at the heart of this session, is data and data security.  This, of course, concerns all countries, but I would argue that it's even more challenging for lower‑ and middle‑income countries.  There may be an absence of data, poor data quality that could actually distort an algorithm's performance, or poor datasets that require significant investments even to make them usable.

Compiling this data to make it usable could prove difficult and time consuming, and in addition, human resources are quite scarce in many countries, so it could again be a difficult tradeoff to put even more work on health workers' shoulders, which might detract from the time to care for patients directly.

And in particular, data from the most vulnerable or marginalized populations, including those with no access to health care services, is not likely to exist or could be inaccurate.

Another point is that a concern has been raised about data colonialism, where data from LMICs could be used for commercial purposes without due regard for ethical principles and human rights norms, such as consent, privacy, and autonomy.  Such collection of data without informing individuals of the intended uses, which could be public health purposes but also commercial purposes, can potentially undermine the agency and dignity of these individuals.  And this is in particular a concern because of the possibility that companies from countries with very strict and developed regulatory frameworks and data protection laws could expand data collection to LMICs without, in the end, providing products and services back to these underserved communities and countries.

The third concern is that AI technologies may be introduced to LMICs without adequate human rights impact assessment prior to deployment, and that technologies that are not adapted to the local context, such as the diverse languages and scripts within countries, may not operate correctly or at all.

And this is part of a wider problem, which is that many AI technologies may be designed by companies in high‑income countries and for high‑income populations, and they don't necessarily translate directly to the needs in LMICs.

The final point I want to make is that AI is proving to be quite a challenge for regulatory agencies even in rich countries, but there is a concern that regulatory agencies in lower‑ and middle‑income countries may not have the capacity or expertise to assess novel AI technologies adequately and to ensure that potential systematic errors do not affect diagnosis, surveillance, and treatment, in particular if technologies are piloted in an LMIC before going to market in more regulated environments.

Such technologies could be introduced into countries without up‑to‑date data protection and confidentiality laws, especially for health data, and without data protection authorities that can protect the data and privacy of individuals and communities.

All of these ethical issues that are raised by the potential use of AI for health have led to WHO's project to develop guidance on the ethical and governance aspects of AI for Health.  We have been working on these guidelines for about a year now.  We have an expert group of about 20 people working on these issues and we're hoping to issue the WHO Guidance on the issues I mentioned and many more in the first quarter of next year.

With this, I will close my remarks.  Thank you for your attention.

   >> MARTHA STICKINGS:  Thank you very much, Andreas, for highlighting the work of the WHO, and I think for bringing a really important global perspective to the discussion and highlighting some of the particular challenges that face low‑ and middle‑income countries.  Having said that, I think a number of the issues affect countries everywhere, irrespective of whether they are wealthier countries or not, in terms of questions around data security, questions around the capacities of regulatory agencies, including data protection authorities, but also questions around discrimination.  You talked quite a lot about discrimination in the context of the data per se, but of course, there are also significant questions around discrimination in access to different tools and the benefits that artificial intelligence may be able to bring.  Thank you very much for that.

So, I will turn now to Oli.  Oliver Smith is responsible for overall strategy and head of ethics at Koa Health, where he also establishes and maintains partnerships and deals with business model development.  Prior to joining Koa, he was a Director of Strategy and Innovation at a charity in the UK, where he was responsible for investments in innovations across acute, primary, and integrated care, as well as biomedical research and digital health.  And he has also been a Senior Civil Servant at the UK Department of Health.  Oli, the floor is yours.

   >> OLIVER SMITH:  Thank you, Martha.  Good evening, everyone.  It's great to be here with you all virtually from here in Barcelona.  I'm sorry that we couldn't all be together.  There is a huge amount to talk about here.  I can already tell from the two previous speakers that we're going to have a really rich discussion.  What I want to do is to give the perspective of taking principles of ethics and human rights and putting them into practice.

Now, I will start with a bit about Koa Health just to give you that context.  So, we create mental health services, digital mental health services, that support greater access and enhance well‑being.  And we have a portfolio that ranges from prevention and well‑being through to treatment, and also prediction of worsening health as well.

Actually, today is quite a big day for us, because today we actually launched Koa Health.  Before that, for the last four years, we've been part of Telefonica, as part of its innovation unit Alpha, and today we graduated, so it's a big day for us.

Over those four years, ethics was part of our thinking from the very beginning, and we have built it into our work.  The first layer is that we have a set of principles, as you would imagine, and to an extent we derive them from human rights; having talked to Martha, I know we can do more there, but we have made some efforts.  The second layer is that we take the principles and put them into practice, so we look at our design and our engineering and ask, okay, what does that really mean?  And then the third area is governance, and having a strong governance system.

What I'm going to focus on now, though, is what we learned in trying to take our principles and actually put them into practice across our portfolio.  There are really three big challenges that we faced: one is tradeoffs; the second is proportionality; and the last one is context.

Starting with tradeoffs, to give you an example, we had a prototype of one of our products, and a year ago we asked a company that is also based in Barcelona to review it, to do an audit, an ethical audit, of this prototype.  One of the aspects of that was to look at whether the algorithm, the recommendation engine at the heart of this prototype, was actually biased.  And they very quickly came back to us and said, we can't do that, because you aren't measuring anything that would allow us to look at whether this is biased against one group or another.  And it was one of those real moments where we thought, oh, that's really obvious, yes.  What we had done was really privilege privacy as a principle, and we had sort of ignored, or forgotten about, or not really thought through bias as a principle.  So, it's a tradeoff that can hit you if you haven't really thought about it.

But when you have thought about tradeoffs, you then have a question of proportionality, or how you balance between them, and one of the classic examples for us, and I think for other organizations that create health applications as well, is onboarding and consent.  That's a moment where the desire to have impact and to have an effective and engaging service quite often drives organizations to say, well, let's get people through onboarding really quickly, let's avoid friction, as a design team would say.

And then, of course, we have this counter desire to actually be true to our values, be true to our ethics strategy, and we want people to understand, because that's really important.  You can get through it in a minimal GDPR way, but we really want people to understand and comprehend what is going on.  The way we do that in our well‑being product called Foundations is we use layering.  For instance, at the moment Foundations is very much about well‑being and it doesn't collect health data, but we're intending to introduce features that would actually require some biometric data.

So rather than bury that consent at the very beginning of the application, we thought, well, actually it's much better not to ask for that consent until the moment when the person using the app clicks on that feature and says, I would like to do this.  Then it will come up, then we can explain it, and it's much more in the moment and salient to that user.

So that proportionality is really important.  But then how do you know you have actually got the right balance?  My team gets really frustrated with me sometimes because they say, Oli, Oli, we see this, is that in line with our ethics strategy?  And I get frustrated because I say, it really depends.  It depends exactly what you're trying to do and what the context is.

To be a little more concrete about this, let me go back to the example I mentioned before about the prototype and bias.  Fortunately, what we were able to do was look at some of the indirect data.  Although we weren't actually measuring anything directly about people's gender or anything like that, we were able to look at some of the data that we were collecting through the app stores, and the auditors did a thorough review of the content, and they said, if we look at the indirect data, we don't think it's biased.  And we think that in this context, using indirect data is okay, because this product is about well‑being; if it were biased, the impact on those groups wouldn't have been that big in terms of their lives.

However, we were very conscious that had that been the case with one of our products that is about treatment, say a product that is looking at treating depression, then a biased product would be a real problem, because the detrimental impact would be much greater.  In that circumstance, we wouldn't think it good enough to use indirect data; we would actually have to use direct data, and what we might want to do then is go out and survey the users in a proportionate way to understand some of the data about them.  We wouldn't necessarily try to grab data from everyone, because we still have privacy as one of our principles.
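To make the kind of audit Oli describes a little more concrete, a minimal sketch of a group‑disparity check on a recommender's outputs, using only an indirect (proxy) group label, could look something like the following.  This is purely illustrative and not Koa Health's actual tooling; the field names, the example records, and the 0.8 rule‑of‑thumb threshold are assumptions.

```python
# Illustrative sketch only: compare a recommender's positive-outcome rates
# across groups identified by an indirect (proxy) label.
# Field names ("proxy_group", "recommended") and the 0.8 threshold are hypothetical.
from collections import defaultdict

def selection_rates(records, group_key="proxy_group", outcome_key="recommended"):
    """Share of positive outcomes per proxy group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        positives[g] += int(bool(r[outcome_key]))
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest group rate (1.0 means parity)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

if __name__ == "__main__":
    audit_log = [  # toy data standing in for logged recommendations
        {"proxy_group": "A", "recommended": True},
        {"proxy_group": "A", "recommended": False},
        {"proxy_group": "B", "recommended": True},
        {"proxy_group": "B", "recommended": True},
    ]
    rates = selection_rates(audit_log)
    ratio = disparate_impact_ratio(rates)
    # The "80% rule" is a common but context-dependent rule of thumb.
    print(rates, f"ratio={ratio:.2f}", "review needed" if ratio < 0.8 else "within threshold")
```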

So, proportionality and context are things we struggle with, and we don't have all the answers, but there are a few little nuggets that I wanted to mention to maybe spark your thinking as we move into the discussion in a few moments.

One aspect that's really, really important is to think about this upfront; you do actually need to take some time at the beginning to work out, well, how does that ethical strategy or ethical framework apply to what we're doing.  I know that sounds obvious, but actually building that into the process can become quite challenging, because the team at Koa Health is very much used to an agile approach, and if I said, okay, you've got to sit down for a week and really think through the ethics before you can do anything, they just wouldn't buy that.  So, we have to take a more incremental approach where we go step by step.

But one aspect that I have found really helpful is to take some of the language of agile development and redeploy it.  Engineers in agile development talk about tech debt: yeah, we can build that for you really quickly, but it won't be production ready, it will have lots of bugs, so we will need to go and fix those later on.  So, I now talk about ethics debt, and I say, yes, we could do that without taking a week to think about it.  We can think a bit about it, but there is a risk that we miss something, and then we would need to go back later on and pay back that ethics debt, because we will have spotted something that we hadn't originally seen.

So, trying to fit this into the process is really important.  There are not loads of tools around that help you with this; more are emerging all the time, but there is not a lot out there, so I encourage people to use what does exist and also to publish what they do.  We published our audit, and we try to publish as much of what we do as possible, to try to build a groundswell of forward movement, because there are lots of benefits to artificial intelligence in health, but there are risks, and we need to manage those; together we can take advantage of the benefits while managing and mitigating the risks.  That's enough of my introduction.  I'm really looking forward to the discussion later on.

   >> MARTHA STICKINGS:  Thank you very much, Oli.  I think you raised some really important issues that can inform the discussion.  I was really struck by the notion of an ethics debt, and that also ties in to some of the discussions that we have had in the data protection context, but it can also be relevant more broadly in terms of the idea of fundamental rights by design, as it were: really trying to build in those ideas from the outset and making sure you don't get stuck later on trying to retrofit your fundamental rights obligations onto something that's already underway, where it's likely to be much more difficult.

And also, the point that you made at the end about transparency: there is still a lack of concrete evidence, data, and experience that is publicly available, and obviously the more of that there can be, and the more sharing there can be, the more useful it will be for informing discussions and for trying to avoid making the same mistakes twice, where possible.  So, thank you very much, Oli.

And lastly, I will turn to Katarzyna.  Katarzyna is an expert in human rights and technology, and she's a lawyer and activist.  She is co‑founder and President of the Panoptykon Foundation, which is a Polish NGO defending digital rights.  From 2012 to 2019 she was Vice President of European Digital Rights, and prior to that she was an associate at the international law firm Clifford Chance, as well as a member of the Civic Advisory Board to the Minister of Digital Affairs in Poland.  Katarzyna, over to you.

   >> KATARZYNA SZYMIELEWICZ:  Thank you so much.  As a lawyer with experience in EU advocacy, I will focus on what can be regulated, or how we should approach the challenge that comes with the fact that we clearly want more AI applications to solve serious societal problems, like fighting the pandemic now, while on the other hand we are still not ready as a society to respond to the risks that will come with these applications.  One of the responses, and I stress one of them, definitely not the only one, can be legislation.  In this regard, it is the firm position not only of the Panoptykon Foundation but of the whole movement I represent here that it cannot be just an ethical framework: we need, right now, a clear legal framework, and that can only be introduced at the EU level.  So, this is not so much a task for specific governments to handle, but for the whole supranational structure, or a global treaty at some point, to make it effective.

Obviously, we do have safeguards and standards coming from data protection law, the GDPR, and these should not be disregarded.  In practice we can try to rely on Article 22 of the GDPR, which gives individuals the right to have automated decisions that affect them in a significant way explained, and also gives them the right to human intervention, in the context of AI.  So, for example, if there is an AI application in the field of health that results in individual decisions, we can certainly use the GDPR as a source of safeguards for the individuals affected, and in practice our interpretation of the existing safeguards precludes the use of black box systems: we need to be able to explain why a specific AI system produced a specific individual decision.  I think this is beyond debate; this is what we already have in European law.  But by no means does this standard solve the other problems that will come with the use of AI in high‑risk sectors.  These other problems are related to the fact that we will not always have individual decisions, nor will we always have personal data involved in producing certain outcomes, and the outcomes will not always have a significant individual impact; they might have a far bigger impact on society as a whole.

If we think about errors, about simple waste of public money, about getting predictions wrong or getting public policy wrong, these types of results are extremely problematic even though they might not affect a specific individual or entail the use of personal data.  This is why we take the position that the EU needs to create a new legal framework for AI that goes beyond the use of personal data and beyond individual protection.  Fortunately, it is not just us saying this; it is definitely one of the priorities of the current European Commission, and we are quite satisfied with the principles that have already been formulated by various bodies, the Council of Europe definitely, but also the high‑level expert group set up by the European Commission.  The only caveat here is that ethics is not enough; we need at some point to arrive at strictly binding rules.  What exactly can be done?  I agree with the idea already presented on the panel that we need to introduce obligatory human rights impact assessments for such systems, both public and private, though definitely the higher standard should apply to public applications, and there should be no way to deploy an AI system without a thorough, detailed, public, evidence‑based human rights impact assessment made beforehand.  That in itself increases transparency, increases explainability, and prevents certain risks, like the use of inadequate data for training.  But even the best human rights impact assessment will not minimize the risk of simply applying AI in a context where the risks are too high or where AI cannot solve the problems we are facing.

And I think for that we will need to arrive at some red lines, also represented in the law.  The types of red lines that we suggest for the debate, and it is an early stage of the debate, so these are still suggestions, are the following: for example, that certain uses of AI should not be allowed at all; that AI systems should not be allowed if the risks cannot be mitigated, so if the results of the human rights impact assessment are not satisfactory; and that they should also not be allowed if we cannot explain the functioning of the system to a level that allows for independent auditing, especially in high‑risk areas.  So that's something to say about transparency.

Finally, one of the red lines that we put up for discussion is that the goals of an AI system cannot violate the essence of fundamental rights, and they cannot violate human dignity, which is not exactly the same thing as human rights; I can imagine systems that do not present human rights‑related risks but that still cannot be reconciled with human dignity.

So these are examples of the rules that we would like to see represented in binding legislation, but that framework is only the starting point, and the whole next level is how to implement it.  I'm not sure whether we can discuss it today, but a lot can be said about enforcement, about the need for independent authorities, independent oversight bodies with adequate budget, skills, and staff.

   >> MARTHA STICKINGS:  Thank you very much, Katarzyna.  I think that's a really good place to start the discussion, highlighting some of the processes that are underway at the European level looking at prospective regulation, and some of the issues that are very much part of the discussion there: the role of the GDPR, the need for a legal framework, and the role of fundamental rights impact assessments, for example.

So, there is one question already in the chat that's directed to Ritva, and so I would perhaps address that to her first.  And then I will encourage you also to let us know if you have any other questions for the panelists in the meantime.

Ritva, I'm not sure if you can see it in the Q&A section, but I can read out the question anyway, also for the benefit of other participants.  It refers to the point that you were making about data security and asks what conclusions and reform steps have been envisioned after the data hack in Finland.  What is being considered: hardening technical security, alternative architectures, different storage practices?  And what are Finland's recommendations to others after that experience?  You need to unmute your microphone, Ritva.

   >> RITVA HALILA:  Yeah.  Now.  This is a long question and probably a long answer.  In short, this happened very recently, people got to know about it only about two or three weeks ago, so we don't have answers yet.  But there were probably a lot of data security issues involved, so that the hackers could get into the personal data and transfer it outside the health care unit, so actually, I think this is a criminal case.

But I think it can also be used as an example of how important it is to put up barriers to outsiders and to the misuse of personal data, because it caused a really huge emotional reaction from a lot of citizens, and our President, the Prime Minister, and other politicians reacted to it.  You can imagine how sensitive it was, how this whole case got under our skin.  It was very personal, and I think it is a good example of how personal data, something that belongs to each of us, has to be protected, and of why the privacy of persons is so important to protect.

I cannot yet say what conclusions we should draw; in a couple of months I might be able to say something more about that, but it just shows the importance of this whole area.  Thanks.

   >> MARTHA STICKINGS:  Thank you very much, Ritva, and I think also, it's a useful point to remember that some of these issues are likely to come into the political arena very quickly, and then of course, it becomes a different level of discussion that's not so much between sort of experts working in the area, but it's very much led by politicians and also in reaction to the experiences of the general public.

So, thank you.  Andreas, I have a question to address to you.  To what extent do you see a problem with discrimination in the use of AI in the health area?  For example, if you're using data from one area of the world in another area of the world, what potential issues around discrimination could arise, and what is the potential for the use of AI in that way to exacerbate existing social exclusion?

   >> ANDREAS REIS:  Yeah.  Thank you very much for this question, Martha.  That is, indeed, quite a big concern, and there is one example that I want to give, which is a tool that was developed for diagnosing skin cancer and melanoma.  When it was developed, the data that was fed into the algorithms was exclusively from white persons; I believe hundreds of thousands or millions of pictures of melanoma were used to train the algorithm to recognize any suspicious skin lesion.  And the result of this was that the tool was not able to detect melanoma in black people.  And this is basically restricting access very seriously, for example, for the African Region.

So, this is an example where I think it's very important to also use data from other regions, and I would suspect it's very similar for people in Asia: data from all of these different ethnicities needs to be used to train algorithms in order to ensure that non‑white people can also benefit from these interventions.  Over.
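As an illustrative aside, a very basic representativeness check of a training set's metadata, the kind of check that could have flagged the imbalance Andreas describes before the model was trained, might be sketched as follows.  The metadata field, the group labels, and the minimum‑share threshold are hypothetical assumptions, not details of the tool he mentions.

```python
# Illustrative sketch only: flag under-represented groups in training-set metadata.
# The "skin_tone_group" field and the minimum share are hypothetical.
from collections import Counter

def representation_report(samples, group_key="skin_tone_group", min_share=0.05):
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {
        group: (n / total, "UNDER-REPRESENTED" if n / total < min_share else "ok")
        for group, n in counts.items()
    }

if __name__ == "__main__":
    training_metadata = [  # toy metadata standing in for image annotations
        {"skin_tone_group": "I-II"}, {"skin_tone_group": "I-II"},
        {"skin_tone_group": "I-II"}, {"skin_tone_group": "III-IV"},
    ]
    # A stricter 30% floor is used here just to show a flag on the toy data.
    for group, (share, status) in representation_report(training_metadata, min_share=0.3).items():
        print(f"{group}: {share:.0%} {status}")
```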

   >> OLIVER SMITH:  I just want to add to that, Martha, because that approach of taking a dataset from one group of individuals and applying the resulting algorithm to another group of individuals is not only unethical but bad data science as well.  A data scientist might say, don't worry, it's definitely going to work, but have you really thought about why, about what the differences might be and whether they are salient?  It's just bad data science to do that, as well as raising all the ethical challenges that Andreas has quite rightly highlighted.

   >> MARTHA STICKINGS:  Thanks very much for that, Oli, and actually the question that I would address to you builds very much on that.  You were mentioning in your intervention indirect indicators, indirect data, which I guess can also be proxy data in some respects.  Are there examples of potential proxy data that could be linked to discrimination but that is actually okay to use as an indirect indicator, for example, risky driver behavior or gender, let's say?

   >> OLIVER SMITH:  Again, my answer, it seems, is it depends, (Laughing), the ethicist's answer, although I'm afraid I'm not really an ethicist.  One example that we come across, for instance, is gender and whether we should measure it or not.  In general we don't, because we just don't think there is a meaningful difference in most of the work that we're looking at.  When we were thinking about well‑being and stress, we just didn't see enough evidence in the literature that knowing gender is a salient fact that would allow us to give someone an intervention, an activity, that would mean we get equal results whether you're a man or a woman.  But when we were looking at sleep, it turned out there was evidence suggesting that it was important to understand gender, and that's what I mean about context coming in.

One of the challenges with proxies is that even if you don't measure something, the system can learn it, and we found this with voice as well.  We've done some work using voice to analyze mood and stress, and it was really obvious that the system had learned that, well, it didn't name them as women, but you could tell through the research that that's a group of women and that's a group of men.  That is sort of okay, but if that dataset is then applied to something else, you need to be aware of whether it's going to say, well, we're going to recommend an activity to keep you healthy, and because we've sort of learned that this group are women, we think you should do something that's more associated with women than with men.  You might say the men can go and do, I don't know, football, and the women can go and do something else, to pick some awfully sexist examples, but that's the kind of thing the system might learn if it was drawing on data like this.

So, I think this speaks to, gosh, so many points.  One is that explainability becomes really important, because you need to be able to look into what's going on.  The other part of this is auditing, so that you can periodically look at what's going on as well, and those two need to work together.  And then finally, the transparency point becomes important, because someone else needs to mark your homework at the same time.  So that's a really long answer to my it‑depends question, Martha.
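As a hedged illustration of the "system learns it anyway" point, one common technique is to train a simple probe that tries to predict the never‑measured attribute from the features a model actually uses; if the probe does much better than chance, the attribute has effectively leaked in.  The synthetic data and the 0.7 AUC cut‑off below are assumptions for the sketch, and scikit‑learn is assumed to be available.

```python
# Illustrative "proxy leakage" probe on synthetic data: can a protected
# attribute be predicted from the features a model uses, even though the
# attribute itself was never collected?  Thresholds here are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
protected = rng.integers(0, 2, size=n)        # e.g. an inferred gender label
features = rng.normal(size=(n, 5))            # e.g. voice-derived features
features[:, 0] += 1.5 * protected             # one feature correlates with the attribute

probe = LogisticRegression(max_iter=1000)
auc = cross_val_score(probe, features, protected, cv=5, scoring="roc_auc").mean()

# AUC near 0.5 means the attribute is not recoverable from these features;
# well above 0.5 suggests leakage worth reviewing in an audit.
print(f"probe AUC = {auc:.2f}", "-> potential proxy leakage" if auc > 0.7 else "-> low leakage")
```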

   >> MARTHA STICKINGS:  Not at all, thank you.  Katarzyna, I think you wanted to comment here as well.

   >> KATARZYNA SZYMIELEWICZ:  Yes, indeed.  Proxy data is a very complex political and legal problem that we need to find a solution to, and it's not an easy one.  If we want to control the use of proxy data, to prevent it or simply to make sure that people are not discriminated against based on proxy data, we probably need to force system designers to reveal correlations that are not always tracked by the system itself.  So, in order to interrogate which factors had an impact on the individual decision in question and to what extent the data are sensitive, we probably need to collect even more data, and we need to force the AI system in question to identify correlations that otherwise wouldn't even be identified.  That leads to more data processing and more exposure, but if we want to be sure that high‑risk applications of AI, like in the field of health, are not discriminatory, there is probably no other way to do it.

On the other hand, in the private sector, where the stakes are not that high and where we are more concerned about the data minimization principle and more worried that platforms like Facebook will collect more data than they need, a more intelligent approach might be to simply prohibit certain data analytics that lead to the use of proxy data to detect certain correlations.  So rather than saying to a company like Facebook, please reveal which correlations you used to target people with health‑related advertising, we should rather say, do not use your relevance metrics for health‑related advertising at all, because it's extremely difficult to prevent the use of sensitive data, understanding that in our data‑driven reality, almost everything can be a proxy for health these days.

So, yes, while an interesting way to protect individuals and society in the private sector might be banning certain practices, I'm afraid that for public sector applications we need to be ready to dig deeper into how AI systems detect correlations, which leads to my earlier point on explainability and interpretability: at least for public sector applications, we will need to work very hard on ensuring interpretability, which includes understanding which correlations were in play, and I'm afraid we will need to require that even for machine learning systems.

   >> OLIVER SMITH:  Could I, is that all right?

   >> MARTHA STICKINGS:  Please, go ahead.

   >> OLIVER SMITH:  I'm not sure about the difference between the public sector and the private sector.  I know what you mean, but that distinction is quite often used as a proxy for the idea that the public sector works on issues and has tools that are more impactful on people's lives, which is often very true, because of course you think about health and education and all of those areas.  But it's not always true.  If I think about pharmaceuticals, for instance, as a sort of parallel industry, pharmaceuticals are overwhelmingly created by the private sector, but it's a very well regulated and managed sector.  So I would think less in terms of public sector versus private sector and more in terms of what potential harm could arise, and then I would regulate both in the same way.

   >> KATARZYNA SZYMIELEWICZ:  Sure, I agree.  I was just using a shortcut to avoid going into details, because I understand that today we are focusing on high‑risk sectors, in particular health.  By saying that certain things should be precluded in the cases that are sensitive but less important in terms of societal benefits and harms, I was using private sector as shorthand for the less important but not necessarily less harmful uses.  I fully agree that what Facebook does with advertising and targeting can be equally as harmful as a wrongly implemented AI system dealing with health; it is just that the stakes are different.  My argument here is rather that I'm prepared to do much more work, also on the legal front, on the public sector applications, because we all understand that they are important and we need to do that work, whereas for the less important but equally harmful private sector uses, I'm more inclined to discuss outright bans of the riskiest practices.  But full agreement that it shouldn't be a private versus public divide, but rather high versus low risk, or high versus lower impact.

   >> MARTHA STICKINGS:  Thank you very much.  Now, I'm very conscious that we have very little time left, unfortunately, which is really a shame because there are a lot of very interesting issues coming up.

I wanted to finish with a very quick round to ask the panelists what your number one burning recommendation to policymakers in this area would be, and perhaps I can combine that with two really interesting questions that have come up in the chat.  One is about the potentially discriminatory use of COVID‑free status, which is obviously a very topical question.  And the second, which I think we touched on a bit in the last discussion, is about the responsibility between and inside organizations concerning potential misconduct, particularly in the context of public/private partnerships.

So, if you have a recommendation that also brings in those topics, that would be even better, so an easy task to close with.  And, Ritva, perhaps I'll start with you.

   >> RITVA HALILA:  Thank you.  This has been a very interesting discussion.  I think this public/private combination, the differences between private and public, is very interesting, because in our country, for example, there is a lot of private activity within the public sector, and the public sector is buying services from private companies, and that can sometimes cause trouble with data protection issues.  But in these situations, I think it is also very good to remind yourself why data protection is so important.  The public sector in particular has quite a lot of rules about how to manage and protect privacy, and in our country, for example, people are sometimes very annoyed that things go so slowly, that legislation moves so slowly and it takes so much time to improve the services that are given to the public.  There you have to remember that we are protecting something that is very, very individual and very sensitive, something that belongs to the person themselves.

In the private sector, they have different things that keep them running, and that's why they sometimes forget these issues, as in the case I told you about.  So, I would also emphasize governance of data and legislation, international legislation, as Katarzyna did.  Going forward, good luck to everybody in your work.

   >> MARTHA STICKINGS:  Thank you very much.  Andreas, burning recommendation from your side?

   >> ANDREAS REIS:  Yeah, maybe just very quickly.  From my perspective, globally speaking, you can have a lot of criticism of the GDPR, but actually European countries are quite fortunate to have this quite high standard.  I think many other countries need to step up their data protection and confidentiality laws, modernize them, and also invest more in regulatory capacity for this quickly evolving field.  Over.

   >> MARTHA STICKINGS:  Thank you very much.  I think that builds on what Ritva was saying.  There are two impulses: the need for speed, but also the need to make sure that we get it right when we move forward with regulation.  Oli, over to you?

   >> OLIVER SMITH:  Yes, absolutely thinking about those two impulses as well.  I have two recommendations to try to balance them, because we're working in health, so I think we want a pretty high level of standards and trust building.  First, I think we can learn from the accountancy world: having ethical audits, and making them obligatory, would help provide a standard layer where everyone has to demonstrate their principles and how they have looked at the details.  Second, I think we can learn from the pharmaceutical regulatory world, where the understanding that the more serious the harm, the more you need to demonstrate, is at the heart of the process.  I think those two coming together, adjusted properly to be sophisticated about artificial intelligence, would be really powerful.

   >> MARTHA STICKINGS:  Thank you very much.  Katarzyna?

   >> KATARZYNA SZYMIELEWICZ:  I can only reinforce the need to look at the purpose of the systems and always start by interrogating why we want to implement something and what the societal and individual impact will be.  So never assume, as a government, that AI is your silver bullet to solve a complex problem unless you can prove it, and never assume that discrimination can be solved by tweaking data or adding a fairness metric inside the system unless we can really prove it.  That is really relevant in the context of Monica's question about the use of COVID‑free status in public policy, the use of data about people's health to let them in somewhere or to regulate people's movement or whatever we can imagine here.  I would say that is exactly one of those examples of, maybe not AI, but data applications that are extremely risky, bound to be discriminatory, and ineffective in managing this crisis.  So if we need examples of where not to go, this is one of them for me.

   >> MARTHA STICKINGS:  Thank you very much.  I think from Oli and Katarzyna we heard really important things about tools to ensure compliance at different stages of the process, and also about thinking through potential harm and making sure that, particularly in those cases where there is a risk of harm, sufficient protections are in place.

And so I want to wrap up the discussion now by sending a huge thank you to all of our speakers for contributing to a really interesting discussion.  Thank you, really, for your time.  Thank you also to our co‑organizers at the Council of Europe; it's been really good to work with you again on organizing a session at the IGF.  We've seen, I think, just how much there is to talk about in this area, so if you'd like to talk about it some more, I invite you all to attend the online launch event for our AI report, which will take place on the 14th of December.  It's an online event that we're co‑organizing with the German Government in its role as the current Presidency of the Council of the EU.  You'll be able to find more information on that event on our website and on our social media channels, and perhaps we can put the link in the chat now.

So, thank you very much and we'll be touching on the issue of health again there, but also other sectors which have been mentioned in passing today, social benefits, targeted advertising, law enforcement, so we'll have plenty to talk about.

Thank you very much, everyone.  Have a good evening or lunchtime or morning wherever you happen to be.