IGF 2022 Day 2 Lightning Talk #29 Using trustworthy AI to create a better world - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: All right. Thank you very much for joining my session. We'll be talking about using trustworthy AI to create a better world, in the broad sense of it. First I will focus a little bit on what trustworthy AI is, and later on we'll dive into a use case, which is a health use case. So let's start.

At this point in time, according to the World Economic Forum, we're in transition from the third towards the fourth industrial revolution. The third revolution was more about automation, and the fourth is about autonomous decision making. There are quite some sessions about AI at this forum, and you really see that shift here as well.

And that is a reasonable and logical choice, because AI offers different opportunities: in health, in possibly solving an energy crisis, in predicting grid balances. So AI does have a lot of opportunities. But there are also challenges, and it also has a lot of costs. You really need to organize your organization in a whole different way.

AI also just has a downside if the whole AI governance isn't arranged well. In the Netherlands, the scandal with the tax authority, where no human oversight was involved, caused a lot of trouble and societal unrest.

Still, we have to move forward. This has also been recognized by the UN, which is trying to regulate AI. But trust doesn't come with regulation alone; it has to come with a whole organizational setting, which I call AI governance.

What should we do?

I hope to gather some input from you in this. So I hope I could give somebody the mic.

>> Thank you. I do agree with you because I'm an IP professional, intellectual property lawyer. One of the things we're asking ourselves is can AI actually own property, intellectual property?

We see jurisdictions that are not ready to depart from the traditional form of property protection. What should we do?

That's a very good question.

>> AUKE PALS: I hope to answer that for you. So my question when I prepared this presentation was: is this forum the right place to discuss AI?

It's the Internet Governance Forum; we're discussing internet issues. I'm not sure AI is a topic we should discuss over here. Maybe it's an even broader discussion that we could follow up on, so possibly an AI governance forum. I don't know; that might be a suggestion. This model consists of six dimensions, and I hope to dive a little bit into that with you and to zoom into it. AI needs to be embedded throughout the whole organization, in my opinion. You need to have the skills, the culture needs to be there, it has to be accepted by the whole organizational structure, and the processes need to be in place to support that.

You need to have within your organizational structure -- I guess something is wrong with the presentation. Is it still okay?

Okay. You need to embed a structure where you know that the quality is okay, that you follow the risk management procedures, and that you follow legislation. Let's dive into digital ethics.

Hopefully it works now. Yeah, cool.

So within your organization, you could have a structure in place where you have, for instance, an ethical committee looking at how the algorithms are being developed: the input data, the output data, and what's being done with it. You could also use technology for that. You could use a federated learning setup, where you don't have your data in only one central place but try to combine it without merging all the data. The most important part, in my view, and that's also why it's so closely tied to algorithms and AI, is data. The data is the input for the algorithm, which essentially gives the output and gives your prediction. So your data management should also really be in place. It's quite a structure. I've joined some discussions this week, and those discussions really focus on the compliance and control part: we need to regulate, a government needs to come up with legislation to prevent issues with AI. In this presentation, I use algorithms and AI interchangeably; AI is just an algorithm. You can also check whether it is being used fairly, and how it performs.

In the most practical part, you should have measures in place. An algorithm is just input, a set of rules, and output. What we try to do in the research project Enabling Personalized Interventions is actually use all six pillars for health. What we did is we had a research project focusing on health data. And health data is the most sensitive data, in my opinion, that we have. If you want to use AI in health, you really want to trust the output, and all the checks and balances have to be in place. So the project really focuses on finding a solution in healthcare to promote using algorithms and AI in a federated learning setup. Health data is put in silos. Once you go to your GP, your data is stuck there. If you go to the hospital, it's stuck there. And data sharing agreements for health data are really difficult to get in place. It's also a question we can ask ourselves: do we want our health data to be shared across all different domains? That's a question we asked ourselves in this research project as well, and we said: not really. Because we don't know how this data is being handled, we want to have a controlled setup for that.

What we also found out is that an overview of relevant data is missing, that it is really difficult to get agreements on sharing data in place between institutions, and that sharing data is sometimes not allowed by law. So health data can't leave the premises of the hospital.

So what we actually did, or maybe this is a better picture: we used a setup of two hospitals in the Netherlands which both have data from patients. In this first setup, we used dummy data; at a later stage, we are using real patient data. And this data is being computed locally. So the researcher or the doctor requests whatever they want to know, and this request is sent to the hospitals, which compute the first iteration of the results locally. After that, one variable is sent to one central node, which is, in this case, an organization supporting research. From that point, no health data is being exchanged, only some aggregated results. We are in the process of running this experiment and finding out what the difficulties are of innovating research in healthcare. It's proven to be difficult; having resources available is really difficult. By doing this research, we focus on all the different aspects of the circle from my AI governance model. So this is a research project that tries to use technology for enabling trust in AI and in data, having data management in place, and all the other aspects that I explained, besides using this setup. Our desired outcome is creating a health twin, so that people are able to intervene on their own health. As I explained, the learnings we got are that, first, privacy by design is possible, and a federated learning setup is actually a good way of making this possible without having to transfer your health data. But we did have to implement this whole AI governance cycle I showed you before. Thank you very much. If there are any questions, please feel free.
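The federated setup described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the hospital names, the blood pressure values, and the statistic computed are all invented for the example. The point is only that each hospital computes on its own data locally and shares one aggregated value with a central node, so no raw patient record ever leaves the hospital.

```python
# Minimal sketch of a federated computation: each hospital computes a
# local aggregate on its own (dummy) patient data, and only those
# aggregates -- never the raw records -- are sent to a central node.

class HospitalNode:
    def __init__(self, name, blood_pressures):
        self.name = name
        self._data = blood_pressures  # raw data stays inside this node

    def local_aggregate(self):
        # Only the sum and count are shared, not individual records.
        return {"sum": sum(self._data), "count": len(self._data)}

def central_combine(aggregates):
    # The central node sees only the aggregated values per hospital
    # and combines them into one global result (here: the mean).
    total = sum(a["sum"] for a in aggregates)
    n = sum(a["count"] for a in aggregates)
    return total / n

hospital_a = HospitalNode("Hospital A", [120, 135, 128])
hospital_b = HospitalNode("Hospital B", [142, 118])

global_mean = central_combine([hospital_a.local_aggregate(),
                               hospital_b.local_aggregate()])
print(round(global_mean, 1))  # mean across both hospitals: 128.6
```

In the real project, of course, the request, the local computation, and the transport of the single aggregated variable are handled by a proper federated learning infrastructure rather than in-process objects; the sketch only shows the division of responsibilities.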

>> Thank you for your presentation. I really like the AI governance model. To me, AI seems to be a buzzword; I know you work in AI, so you know a lot more about it. To what extent do you think this is actually being used, or do you feel like right now people are just saying AI is going to save the climate or healthcare?

To what extent do you think it's realistic to implement this model?

>> AUKE PALS: Thank you for the question. I also see AI as one big marketing word, and I'm a big fan of using the word algorithm instead, whether it uses machine learning or rule-based algorithms. However, in my experience, what I see at some companies I work for is that this model is not being implemented at all yet. And what reference points do I have? So, for example, if you set up an ethical committee assessing algorithms, what reference points do I have? There might be some public organizations who want to implement this. What we do see is that organizations only look at compliance. In my circle, one of the outer rings was compliance, and you do see that regulation is sometimes a push for implementing this whole AI governance model. But it's not used throughout. Thank you.

>> Thank you very much for this wonderful presentation. I believe the model that you have described in your presentation is based on the global north. So my question, or my comment, is: do you foresee that the global south could have learnings from this model?

And can it be implemented within the global south?

Which really is not at the same level of development?

Thank you.

>> AUKE PALS: Thank you very much. I do think it can be implemented in every organization, and everywhere in the world. The outer ring was about having the organizational structure aligned and matched with all the algorithms being used. You need some time to think about it, and you do see that implementing this is time consuming. That might be an issue for countries or organizations who have fewer resources available. So it depends on whether the resources are available, and on whether the culture within the organization is able to accept working in this way.

>> Thank you very much. I have a question. So I think that every time that someone would come with a suggestion to an organization, let's do this model and start to use AI. They will ask how much time it would take and what will it require in terms of human resources and cost. Now, I think this is based on one case. Could you elaborate a bit about the time and cost of this example or other examples that you may have?

>> AUKE PALS: This use case is one use case in which the model is quite fully implemented. I wouldn't say that the model was there first and the use case was created after that; it happened in parallel. This model is being implemented in a lot of organizations, but it does cost resources. And I don't have an estimation of the cost. That depends on the willingness to implement it, and on prices, which differ between countries. What I do see is that with the implementation of this model, a lot of projects are being initiated. It also depends on the size of the organization, and all your processes need to be redesigned if they are not really aligned with the way you want to implement it. So sorry, I can't give you concrete numbers, but this research project started two years ago. Within this research project, transparency policies are also created within the infrastructure itself. We use eFLINT to program the policies, and one of the researchers is still working on implementing that in the right way for this use case in particular. Thank you. If there are no further questions, I would like to thank you for your presence, your good questions, and your participation. I do think the IGF really makes its mark by having good debate and a good presence. Thank you very much.