IGF 2021 – Day 4 – OF #26 Artificial Intelligence (AI) for consumer protection

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.  We all need it to be open and safe.  We all want to trust.

>> And to be trusted. 

>> We all despise control.

>> And desire freedom.

>> We are all united. 

>> MARTYNA DERSZNIAK‑NOIRJEAN: Okay, hi. I hope everybody can hear me. Welcome to everybody onsite and online. I am Martyna Derszniak‑Noirjean, Director of the International Cooperation Office in Poland, and welcome to our panel on AI in Consumer Protection. Before I introduce our panelists, who will also have the opportunity to introduce themselves, I will just make a quick introduction to the panel and give you a quick recap of how we are going to proceed today.

And you know, it's 3:00 on Friday, the fifth day of the Internet Governance Forum here in Katowice, and we have already had a few panels on AI, but I believe that our panel here is quite special because we have a special mission to focus on AI in public administration and consumer protection. This has been a very interesting event, and we have had some really interesting opportunities to learn more and discuss AI: what the challenges are in its implementation, in different sectors and fields, what the trends are, what the opportunities are, what the regulations are, how it's used, and what AI is in general, how to perceive it.

But as I mentioned, today in this panel we want to discuss specifically, as public sector actors, how all of you, the panelists and also those in the audience, can help us explore and address our challenges, which we will also introduce to you briefly. I think we can group the issues we are facing into two fields. On one hand, we have technical and practical issues that need to be addressed. They concern, you know, data availability, data bias, all kinds of technological bias issues, and all the technical aspects of using AI tools. But we also have practical issues that concern learning about AI, learning how to use it, learning ourselves how to work with it, and teaching our colleagues and the staff working with AI. Of course, this is as much an issue in the public sector as it is in the private sector, but nevertheless, we do face those questions.

On the other hand, we also have ethical and legal issues that we need to mind: you know, the question of what the current regulations are, what the upcoming regulations are, how we can help and contribute to the very difficult challenge of providing new regulations that would effectively help us regulate and support the use of AI, but also ethical questions, such as bias, our knowledge of the limitations of AI and the results it delivers, how we should interpret them, and how we should work with them. So, we will try to address all those issues.

And before I introduce the panelists, I will just make a brief note and recap of Monday's panel, which was related and concerned a similar topic, and share two takeaway points which I have taken from there. Maybe they will also help us steer the discussions a little bit. I heard that using AI and all kinds of new technologies in the public sector requires us to be a little bit courageous, to experiment with it, to try to be creative and innovative, but at the same time to have a strategy and to learn, to identify our needs correctly and to be able to implement it in a smart way. And once we have this strategy, we also have to be able to implement it.

And just one last point: we are not competing with the private sector in the sense that we need to implement it at the same pace as them, so it's less a race with the private sector. It's more about knowing how to do it in a smart way. And today I hope that we'll be able to explore this a little bit better with our panelists.

So, I'll introduce the panelists first, and they'll be able to say a quick word about themselves and their relation to AI. And next, my colleague here will make a quick presentation of our experience with the AI tool at the Office of Competition and Consumer Protection in Poland. So, first of all, Jacek is the Deputy Director of the Bydgoszcz Branch Office.

>> JACEK MARCZAK: Thank you. I am an attorney at law and have worked for ten years in the Polish Office of Competition and Consumer Protection. And for the past two years, I have also been involved as one of the team leaders in a group implementing an AI tool for consumer protection in our office. So that's my background in AI. Thank you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot. We also have online Bob Wouters, a Project Manager of eLab at the European Commission. Hi, Bob.

>> BOB WOUTERS: Hi. Yes, thank you. Yes, indeed. My name is Bob Wouters. I'm the Project Manager of EU eLab, and I started at the Commission at the beginning of this year. Before that, I worked at the ACM as a Digital Enforcement Official for four years. And I'm very pleased to join you here on behalf of Team eLab. Thank you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: All right. Okay, so the next person I would like to introduce is Professor Monika Namyslowska from the Faculty of Law and Administration at the University of Lodz. Hi, Monika.

>> MONIKA NAMYSLOWSKA: Hi. Thank you very much for the invitation to this panel. My background in the topic of artificial intelligence is an academic one. I am the Principal Investigator in a project financed by the National Science Centre in Poland entitled Consumer Protection and Artificial Intelligence Between Law and Ethics, but I'm a lawyer, and for ethical issues I have an expert in my team. I am also a local coordinator at the university of a (?) project on legal challenges and implications of digital technologies, which is supported by (?) plus, and we work together with a university in Poland, the University of (?), and a university in France.

>> MARTYNA DERSZNIAK‑NOIRJEAN: That's nice.  Nice to hear this really good background in the topic.  So, next we have Thyme Burdon, a Project Manager at the OECD Committee on Consumer Policy and Working Party on Consumer Product Safety.  Hi, Thyme. 

>> THYME BURDON: Hi.  Thank you so much.  A pleasure to be invited today.  So, yeah, I'm working to support OECD's Committee on Consumer Policy, and we have been for a number of years now contributing to the OECD's broader AI work, and there's certainly a lot going on.  Notably, we've been contributing to the 2019 OECD recommendation on artificial intelligence, which identifies the values that should be applied in order to ensure a trustworthy AI. 

And we're also working with our colleagues to ensure that important consumer policy considerations are taken into account in other projects, such as our current business survey on the use of AI by manufacturers, as well as the development of a global database on AI incidents, recognizing through these projects that policymakers really need much more empirical evidence to ensure that policy responses to the many challenges AI is presenting are based on a solid evidentiary basis. Thanks.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks, Thyme.  And we have also Marcin Krasuski, a Government Affairs and Public Policy Manager at Google.  Hi, Marcin.

>> MARCIN KRASUSKI: Hi, I hope you can hear me. I work on a range of issues, one of which is AI. So, I'm approaching AI from the regulatory perspective and from a really practical one, because at Google we have employed AI in our various services, and we are happy to share our experiences of how we operate and what we have learned so far. So, thank you very much for the invitation. I'm looking forward to discussing with you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: So, hi, everybody, again. And last but not least, our colleague from ECC Net Poland, Karol Muz, is coordinating the online chat. So, feel free to also speak up online. She will be answering there. Hi, Karol.

>> KAROL MUZ: Hi.  Nice to meet you again. 

>> MARTYNA DERSZNIAK‑NOIRJEAN: Okay.  So, as I said, use the chat and speak up here in the audience so we are all open to questions any time.  And we will start now.  Jacek, tell us about your AI tool. 

>> JACEK MARCZAK: The AI tool we will start our discussion from is actually still under construction, but the topic is very vibrant, and I have to give this overview at the beginning to start this interesting discussion with our guests. So, the Office of Competition and Consumer Protection is now working on a tailor‑made tool based on artificial intelligence that will help us find and eliminate unfair contract terms in consumer terms and conditions, in consumer contracts. This is one of our major tasks, one that is pretty time‑consuming, laborious, and in many instances repetitive.

It works like this: an officer, usually a lawyer with higher education, analyzes a complaint from a consumer or a consumer organization, reading tons of documents, bank and insurance contracts, to find whether some provisions are unfair and should be eliminated from such contracts in the public interest. We would like to shift. We would like to allocate these talents, our officers, to more sophisticated tasks, to more rewarding roles, to find the infringements that are more hidden, sophisticated, and difficult to find, and to automate as much as possible the process of acquiring and collecting the terms and conditions from the market, the standard contracts, such as the terms and conditions of VOD platforms, of eCommerce marketplaces, of social media and portals, but also banking, insurance, and telecom contracts. So, we would like to acquire them using our AI tool with the use of a web crawler. It's actually not that complicated; these kinds of tools exist nowadays.

But where's the intelligence in this? The tool, our new robot friend, I would call it, should also read them, analyze them, and compare them with our know‑how, our registry of unfair contract terms that has been created through years of our work. It now includes about 8,000 to 10,000 examples of abusive clauses. And in recent months, it has been cleaned, annotated, and tagged by our officers, so as to make it useful for the artificial intelligence to learn on. And the intelligent tool should not only find the same phrases in contracts but also use NLP techniques to find similar, synonymous phrases in consumer contracts.

So, after such an analysis, which should take it a couple of seconds ‑‑ normally, perhaps, a couple of hours or days ‑‑ it should suggest to a human officer which provisions seem unfair. It could mark them in red, for example, whereas the fair, neutral terms could be marked in green. So, this would speed up and enhance our enforcement of consumer law, make it possible to allocate the talents of our officers to more rewarding tasks, and also make our surveillance of the market more continuous, more proactive, and the protection of consumers simply better. So, this is the tool that is to be designed and implemented by the end of next year. We have already chosen an IT company in an open public competition, and we are about to sign a contract. So, these are our expectations. And in the following months, we will be implementing this project. Thank you.
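To illustrate the kind of clause matching Jacek describes, crawling terms and conditions, comparing them against a registry of known unfair clauses with NLP techniques, and flagging likely matches for a human officer, here is a minimal sketch. It assumes an off-the-shelf multilingual sentence-embedding model; the model name, the similarity threshold, and the example clauses are placeholders for illustration, not details of the actual UOKiK tool.

```python
# Hypothetical sketch: flag crawled contract clauses that are semantically
# close to clauses already recorded as unfair in a registry.
from sentence_transformers import SentenceTransformer, util

# Registry of previously identified unfair clauses (invented examples).
registry = [
    "The seller may change the price at any time without informing the consumer.",
    "Complaints submitted after 7 days will not be considered.",
]

# Clauses extracted from crawled terms and conditions (invented examples).
crawled_clauses = [
    "Prices can be modified by the seller at any moment with no notice to the buyer.",
    "The service is available in Polish and English.",
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
registry_emb = model.encode(registry, convert_to_tensor=True)
crawled_emb = model.encode(crawled_clauses, convert_to_tensor=True)

# Cosine similarity between every crawled clause and every registry entry.
scores = util.cos_sim(crawled_emb, registry_emb)

THRESHOLD = 0.7  # arbitrary cut-off chosen for this sketch
for clause, row in zip(crawled_clauses, scores):
    best = float(row.max())
    label = "RED: review, possibly unfair" if best >= THRESHOLD else "GREEN: no close match"
    print(f"{label} (score={best:.2f}) {clause!r}")
```

In this sketch, the red or green label is only a suggestion for the human officer, in line with the supervised approach described later in the panel.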

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot, Jacek.  So, I will start, perhaps, with throwing out a more general question, and I will ask the question to Monika, because you're a researcher and you work a lot with AI and in general with the use of new technologies.  What kind of problems, challenges, and issues do you perceive in this respect, in the context of public administration using AI and these kinds of technologies? 

>> MONIKA NAMYSLOWSKA: Thank you for this question. As a researcher, I do not focus on strengths, but on risks and shortcomings, and I try to look for solutions. Artificial intelligence is used by public administration for various purposes, among others to improve and enhance the protection of consumer interests, like the tool presented by Jacek or the (?) system developed by the European University Institute.

But paradoxically, this significant deployment of AI, its use for consumer protection, also entails risks for consumers and for businesses, and in the end for the public administration itself. In other words, it may harm consumers. And I can identify a number of legal issues in connection with the use of artificial intelligence by public administration, for example discriminatory outcomes, non‑transparent automated decision‑making, unfair automated decision‑making, and also problems with data processing. And what is more, the legal problems arising from the use of artificial intelligence are compounded by over‑ or under‑reliance on it by public administration staff.

So, what are the most important regulations from the perspective of the use of artificial intelligence by public administration? The existing ones are well known, like the GDPR, of course. But in my opinion, public administration has to focus on the proposed EU regulation on artificial intelligence, the so‑called Artificial Intelligence Act. It addresses risks created by artificial intelligence applications, and its basic idea is a risk‑based approach: it distinguishes minimal‑risk applications, limited‑risk applications, high‑risk AI systems, and unacceptable‑risk AI systems. And public administration should be very careful about whether the applications they use are high‑risk AI applications. There is a list of high‑risk AI systems in an annex to the proposed regulation, and it includes two sections that are very important for public administration. The first one is law enforcement AI systems, and the second one, administration of justice and democratic processes.

And high‑risk AI systems will be subject to very strict obligations before they can be put on the market. So, in my opinion, monitoring the legislative process nowadays is essential. Thank you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks very much. And indeed, I'm aware that the colleagues who are working on our tool are also aware of this, and these are precisely the challenges that we are trying to address. So, I think this kind of awareness‑raising among AI users, whether in the public or private sector, is definitely paying off, because I know that these are issues everybody is aware of. It's more a question of how we can actually address them while using the tools.

And this gives us a kind of general background, a starting point with points that we should pay attention to while working with AI. But let's now move on and discuss a little bit, once we are aware of these general underlying issues that pertain to AI use, how we go about identifying the processes in organizations, particularly in consumer protection organizations, that could be automated with AI. Because that's also quite a difficult question, as there are many, many different areas of the work of consumer protection agencies that could be automated. So, perhaps, Thyme, would you like to address this question?

>> THYME BURDON: Yeah, thanks.  It's a very important question.  Agencies around the world are looking at ways they can use AI because they know it will make them more efficient and effective at achieving their mandates.  But the question, I think, is fundamentally where to start. 

The starting place should, of course, be an application that is low‑risk, to avoid some of the risks that Monika just touched upon. And the most straightforward candidates would be those tasks that are more routine and repetitive. Jacek touched on those tasks a little bit when explaining the application of your new tool in Poland.

So, a couple of further areas. Complaints handling, I think, would be a prime candidate where AI could be deployed by consumer protection authorities. We can have AI tools categorizing complaints, identifying areas of concern, determining whether an inquiry even falls within an agency's mandate or not, and, of course, automatically allocating the inquiry or complaint to a case officer. We can also be thinking about building AI tools into websites, such as chatbots. These can assist consumers with information that is personalized, general advice, of course, but delivered in real time. And that can really be of great assistance.

Market surveillance is another key area. And of course, this is the area that the tool developed in Poland falls within. But a further area where we see scope for the use of AI would be the identification of unsafe products being supplied on online marketplaces. We've already heard through our work at the OECD that a number of online marketplaces are already deploying tools like this in their own marketplaces to ensure that unsafe products are not being sold there.

Further applications would be to detect misleading advertising and fake online reviews. Of course, we know that this is a real problem. Scams, and even the detection of dark patterns, which are designs in eCommerce websites that seek to deceive consumers and prey on their behavioral biases; an example would be a fake countdown timer. Ultimately, at the end of the day, though, all these tools have to be accountable to a human being, so they cannot exist in isolation.

And just finally, another area would be, given the ability of AI to be used across multiple data sets, it can have great potential to transform the way that agencies gather insights about what's happening in markets more broadly and help them set their agency‑wide priorities.
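One of the applications Thyme mentions, categorizing incoming complaints and routing them to the right case officer, can be sketched with a simple text classifier. The categories, example complaints, and routing table below are invented for illustration; a real system would need a proper taxonomy and far more training data.

```python
# Toy sketch of complaint triage: classify a complaint and route it to a
# (hypothetical) team. All data and labels here are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

complaints = [
    "The airline refuses to refund my cancelled flight.",
    "The shop will not honour the warranty on my laptop.",
    "I was charged twice for the same online subscription.",
    "The package never arrived and the seller is not responding.",
]
categories = ["travel", "warranty", "billing", "delivery"]

routing = {"travel": "Travel team", "warranty": "Product team",
           "billing": "Payments team", "delivery": "E-commerce team"}

# TF-IDF features plus a linear classifier are enough for a first triage pass.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(complaints, categories)

new_complaint = "My flight was cancelled and I cannot get my money back."
category = clf.predict([new_complaint])[0]
print(category, "->", routing[category])
```

As with the clause-matching sketch above, the output would only be a suggestion; a case officer would remain accountable for the final allocation.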

>> MARTYNA DERSZNIAK‑NOIRJEAN: Excellent. I knew there were many fields where we could apply AI and automate, but I was not aware there were that many. So, thanks a lot for this, Thyme. It was very interesting. I just wanted to say that if any of the panelists would like to jump in on any question, just raise your hand, and anybody in the audience here or online, raise your hands and give us a sign. And anybody listening can think about this, because later I will ask you to share your experiences with us; perhaps you have had similar experiences to ours in your work. So, think about it and share with us later.

In the meantime, I would like to turn to Bob and Marcin next, but we'll start with Bob. Bob, I know you're working a lot with these kinds of tools at eLab. And what is interesting for us is also to know which technical challenges you perceive as perhaps the most prominent and most difficult, and to hear your experiences. What are the most difficult technical issues with the implementation of tools like AI?

>> BOB WOUTERS: Yes, thank you. And yeah, I think if we look at consumer protection within the digital single market, key elements here are a level playing field for companies and consumers being protected in an equal way everywhere. And what we see is that not all national administrations have the same capacity, infrastructure, or powers. And with the EU eLab, we are aiming to deliver a solution that really helps the national authorities to tackle these misleading, or even illegal, commercial practices.

And we really have been focusing on the EU eLab as an equalizer, to bring more harmonization, but that came, indeed, with quite some challenges. Of course, we had challenges already in our testing phase, which has run all this year: you sometimes want to use third‑party tools for AI, because there are a lot of good tools available, but sometimes this is really difficult because of the ownership of the data, where you run into problems, or because it's a third‑party company outside the EU, which may already cause some problems. So, then you start looking at developing it yourself. And I think there, there is already the challenge that these authorities are now doing the work in investigations mostly manually, but they do investigate practices where AI and also machine learning are used. So, we really see a big need for automation in consumer protection, but again, where do you start?

I think it's also good to say a little bit about the question you asked Thyme. It's good to start with the enforcement officials, to really see where there is a need for automation. And I think the terms and conditions are a very good example. This is a lot of manual work; if you can automate something like that, it can help the enforcers a lot. But I think it's also good to start there, getting the information from them, to identify where you can focus your AI efforts. And I think when you then go into using AI for better enforcement and protection for consumers, you really need to seek an agile approach. Basically, fail quickly and learn fast. It takes time to make a good system, but public administrations can't really afford low quality of evidence. So, I think collaboration really creates added value here, and we can learn from each other what is already being done. So, yeah, that's how I see it from the perspective of EU eLab.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot, Bob. Marcin, what do you think from your perspective? There are lots of technical issues; Bob has mentioned some of them. How do you think it's possible to make those tools more accessible to public authorities?

>> MARCIN KRASUSKI: First of all, thank you for the invitation. And if I may, let me respond to the previous question of how it is possible to use these tools in public administration and what the challenges are. As Bob rightly pointed out, there is a lack of harmonization of rules across the EU, because even though one particular Member State might have very good ideas about how to allow new methods of machine learning, artificial intelligence, or something else to be used in consumer protection, in health policy, et cetera, this is not actually reflected in the whole EU. And as Bob pointed out, scale matters in artificial intelligence and in the tech business in general.

So, what we need is a more scalable approach across the EU, so national consumer protection authorities should harmonize in such a way that they release the same data in the same manner, in the same format. Some of them release that kind of data as PDFs, which are not really readable for all tools. So, these are small things, but they can mean a lot when it comes to the adoption of these kinds of measures.

Then, what we see as well is that the EU single market is slowly fragmenting, for example with regard to data. We see more and more requirements for data to be kept within national borders, or we see, for example, that new barriers to trade in data are being erected across the Atlantic. There were huge uncertainties when the GDPR was introduced about data exchanges with the U.S., and this is not really helpful when it comes to training a lot of data models. Therefore, we are really looking forward to, for example, the Trade and Technology Council discussions, which are discussions between the EU and the American Government on how to create a single space for data exchanges across the Atlantic. And then it all affects fairness, but fairness and all the different biases have already been mentioned, so I think there is no point in addressing this again.

But then, as a last point, and maybe a bit of a controversial one, I would like to end by saying that we in the European Union tend to care a lot about consumer protection. For example, recently in EU legislation there was the introduction of a face recognition ban. And we have to understand that this will have a cost, because in other regions of the world these consumer protection and privacy values are not cherished as much as in our region, so other countries, and the companies present there, will develop those kinds of tools.

So, in a way, we can end up in a situation where some of our services and tools used by the private sector or by the public sector are a bit worse, because I can easily imagine that, thanks to this ban, companies in China, for example, will have more expertise in that kind of application. I am not saying that consumer protection is something bad. Obviously not. But all legislation comes at a cost for the development of applications and technology. Thank you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot. So, on one hand, you mentioned some of those, you know, very detailed and nitty‑gritty technical issues which you yourself said are small, but at the same time, I think everybody agrees that they are in a way very fundamental to using those tools.

On the other hand, you have just made a really interesting point, which is exactly what we would like to move to next: you know, this is a new and emerging field, and Monika has already mentioned some of those ethical and legal issues, and it is, indeed, a question that we all need to discuss and explore, how we look at the ethical and value questions and how we decide to transpose them into legal regulations. So, perhaps, Jacek, you could tell us now, what's your perspective on this?

>> JACEK MARCZAK: Yeah, sure. I will try to refer briefly to the issues that were raised, mostly by Monika. Well, the first thing is that we are fully aware that to be responsible in using AI, we have to understand it, to some extent at least. So, perhaps I, as a lawyer, will not understand all of the technicalities and the aspects of how it works when it comes to the infrastructure, but we are going through trainings and meetings with IT companies, and also with lawyers and academic experts, to analyze all the aspects that were raised. And after that, we decided that the decision‑making process of the AI tool will not be left alone.

We will not let a machine decide. So, we decided on supervised machine learning. This new system will be our junior worker, junior employee, I would say, a junior colleague, supervised by a human officer. So, there will always be a human in the loop, a human deciding whether the suggestion or recommendation from the computer is correct. If not, we mark it, we reject the recommendation, and the system should learn that in the future it should not suggest that such a provision is unfair. So, this is our supervision over it, and the last word will always rest with the human officer. So, I think this is something that will minimize the risk of bias or discrimination to, well, the same level as if a human decided alone, let's say. Yeah. So, that's my answer to this.

And we are very careful about this, so I think this makes the process rather low‑risk than high‑risk, because these are not criminal cases; the seriousness of the cases is important here, and also the supervision by human employees. Yeah, thank you.
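A minimal sketch of the human-in-the-loop review step Jacek describes, where the system only suggests, the officer accepts or rejects, and the decision is stored as labelled feedback for retraining, might look like this. The class names, fields, and feedback format are assumptions for illustration, not the design of the actual tool.

```python
# Hypothetical sketch: the AI proposes, a human officer decides, and every
# decision becomes a labelled example for the next training round.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Suggestion:
    clause: str
    score: float                     # model confidence that the clause is unfair
    accepted: Optional[bool] = None  # filled in by the human reviewer

@dataclass
class ReviewQueue:
    feedback: List[Tuple[str, bool]] = field(default_factory=list)

    def review(self, suggestion: Suggestion, officer_decision: bool) -> None:
        # The officer always has the last word; the decision is stored as a
        # labelled example (clause text, is_unfair) for retraining.
        suggestion.accepted = officer_decision
        self.feedback.append((suggestion.clause, officer_decision))

queue = ReviewQueue()
queue.review(Suggestion(clause="Complaints submitted after 7 days will not be considered.",
                        score=0.83),
             officer_decision=True)    # confirmed as unfair
queue.review(Suggestion(clause="The contract is governed by Polish law.", score=0.65),
             officer_decision=False)   # rejected: the model should stop flagging this
print(queue.feedback)                  # labelled examples to feed back into retraining
```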

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot. And I think I would like to turn to Thyme now, because you also do a lot of research on this at the OECD. What's your take on this? What other ethical issues are there, perhaps? We can also refer to the issues that Monika has mentioned. And how do you think they can be tackled by, you know, people like us, who are using AI?

>> THYME BURDON: First of all, I want to start by saying that I think this is critically important; it should always be front of mind when AI systems are being deployed. There are a few components, I think, to the ethical issues, some of which have already been mentioned, and I won't spend too long going through them all. But the first, I think, is fairness. We need to ensure the AI we're using is, of course, non‑discriminatory. And as we know, the AI is only going to be as good as the data that we feed into it, so we need to be aware of any biases that are inherent in that data and ensure that they are not perpetuated with our new technology.

We also need to make sure that the AI is accountable, as has been mentioned by a few speakers. We need to have a human behind it, at the end of the day. And ultimately, it's the agency that is accountable for its use.

We also need to ensure that it's transparent. AI is often, you know, referred to as a black box; people don't really understand it. And we need to ensure that the people using it, first and foremost, understand it, and that the people affected by decisions or processes using AI, consumers and businesses, understand that it's being used and are able to challenge any decisions that affect them adversely, if they wish to do so.

We also need to ensure that it is robust, as well.  Digital security is a key issue, and it should also be central to an agency's consideration of how they're going to use AI.  And if there is any physical application of the AI, we need to ensure that that application results in a safe outcome. 

Agencies may ultimately consider whether there are certain situations where AI shouldn't be used, or, if it is used, whether businesses and consumers should have the right to opt out of its use. And I think agencies should also keep front and center the existing multilateral efforts to create frameworks and principles to govern AI use; these are all centered around ethics, and the OECD AI Principles are very key in that regard.

Ultimately, I think authorities need to remember that the consequences of failing to ensure that AI is used ethically are not only harm to individual consumers and businesses, but also harm to their own reputation at the end of the day. One incident could result in years of hard work building credibility in an agency being undone overnight. This shouldn't dissuade agencies from adopting AI, I think it's critical that they do, but it should make them extra careful in the way they do so.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Yeah. Thanks a lot. And I think that's also partly, as you mentioned, about training ourselves and making sure we take responsibility for using it. So, first and foremost, educating ourselves, those of us who decide we will be using such a tool, and then making sure that we further educate our colleagues and employees, because at the end of the day, I think using these kinds of automated tools will be like, some time ago, starting to use the computer, implementing the computer at work and digitizing the processes. Now we are automating the processes. So, it will be a similar learning experience, but, you know, these are much more complex systems and technologies that we'll be implementing now, and much more risky, so we need to be aware of this, and training, educating, and taking responsibility are definitely aspects that we need to pay attention to.

So, what are the other issues that we need to pay attention to in order to make AI tools safe and secure and to make sure that they are also maintained like this throughout their lifetime?  Maybe Bob, what's your take on this? 

>> BOB WOUTERS: Yeah, I think the human supervision is really important here. And I think the risk in AI lies more on the other side, in targeting consumers for conversion. But still, if you use AI from the enforcement angle, for me it's very simple: it's the human oversight that needs to be in place, and I think training and awareness within the authorities can be very beneficial here.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot.  Marcin, what would you say about this, knowing and working with AI and also from the supplier side? 

>> MARCIN KRASUSKI: Yes, thank you. So, I would like to disagree slightly on this, because I feel that we should be a bit wary of reliance on human oversight as the solution to AI issues. Even though we agree that there should be human oversight, there are some AI‑based applications that do not require it. The ones which are time‑sensitive, or which do not operate on very sensitive or delicate matters, could, I think, easily be left to AI systems to carry out as they see fit and be judged by their output, without human oversight. But when it comes to very important information, medical, for example, then we see that a human is necessary. And if we over‑rely on human intervention, then we might slow down some of the AI‑based processes, because please remember that AI is supposed to make things faster, more automatic. And if we require a human each time to sit and decide whether the output is okay or not, then we risk that this whole benefit will not materialize.

So, for me, the most straightforward answer to your question would be that a safe AI ecosystem basically relies on a sound regulatory vision governing AI applications. I would summarize this in five points, if I may. So, basically: one, a sectoral approach that builds on existing regulation; two, adopt a proportionate and risk‑based framework; three, promote an interoperable approach to AI standards and governance; four, ensure parity in expectations between non‑AI and AI systems, so that we do not expect everything at once from the AI system, the bar should be equal; and five, recognize that transparency is a value, but it is not an end in itself, it has to serve something. And I think that answers the question. Thank you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: So, I think your point, at least for me, makes it a little bit more complicated, but it was interesting because you brought our attention to the fact that we should try to understand even better what the capacities and limitations of the human brain are, and what the capacities and limitations of AI are. This is, indeed, easy to say but very difficult to implement, especially given that AI is a very broad term used for different technologies, and each technology has to be learned and its limitations understood anew. So, it's on a case‑by‑case basis that we need to try to understand the limitations of both and the interplay between the human and the AI tool. Thanks a lot for that, Marcin.

I would like to ask if there is anybody in the audience who would like to share their views and opinions. I think there was Amali here. Amali, would you like to say something? Otherwise, I think we can all see the message in the chat that Amali has shared. Thanks a lot.

So, perhaps I will just throw the last question to the audience and to the panelists. It's a slightly more specific question, again addressing the challenges with AI: how can we prevent AI from being biased? It refers, of course, to what Marcin just said and to what everybody has mentioned and emphasized, that we need to have a human person behind the AI, but perhaps on a more technical level: what can we do to ensure, in the design of the tool, that the decisions of the AI will not be biased in any way? Of course, there are many different aspects against which a tool can be biased. But how can we ensure that it is not biased, that it's not manipulating decisions, and that it's not leading to unfair discrimination? Would anyone like to address this? Otherwise, perhaps, Thyme, would you like to say something about it?

>> THYME BURDON: I'll just be brief. Firstly, I think we need to have some frameworks in place. Organizations, before they even embark on a specific application of AI, should be thinking about what their AI strategy will be generally, and then, taking into account the potential first applications, they should be developing a framework for how to ensure, on a day‑to‑day basis, an ethical application of that AI. And again, reference should be made to those existing multilateral statements of principle to ensure that we have the important international consistency that we need in this area. I think that would be the central thing for ensuring ethical AI.

I think another thing we also need to remember is to ensure that these tools continue over time to meet the needs and priorities of the agency. Technologies in this area are evolving very quickly, so it's likely that any tool will need to be updated equally quickly. And we also need to think about market responses to the tool. We've already heard in our OECD work from those marketplaces that have introduced AI tools to supervise listings on their platforms, for example preventing the listing of unsafe products, that traders, understanding that the tools are being used, are changing their behaviors, changing the photos and descriptions they're using, to succeed in relisting unsafe products that have already been taken down. So, we need to be really mindful of that; the tools will need constant updating and review over time. They shouldn't just be left to operate.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot.  Monika, is there something you would like to add, perhaps? 

>> MONIKA NAMYSLOWSKA: Yes, thank you. I think that dozens, or even hundreds, of guidelines may serve as inspiration to public administrations, but apart from them, I would like to draw your attention to a document that is now being drafted within an ELI, European Law Institute, project. One project team is preparing model rules which deal with the impact assessment of algorithmic decision‑making systems used by public administration.

And from my point of view, an extensive impact assessment prior to the deployment of an AI system is a very important and interesting exercise for reflecting on and evaluating the potential impact of the AI system used by the public administration.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks very much. So, we are slowly running out of time. And for the last part of this panel, I would like everybody to make a short statement, because we have presented our experience and our tool, and then we have discussed some very difficult challenges. So, I know it's not easy, but I would like each of you to share a last statement, your most important piece of advice for the use of AI in public administration. And perhaps I will start with Jacek, just to sum up your impressions from this discussion.

>> JACEK MARCZAK: Well, I will not give any answers; I would have to counsel myself. So, I will just maybe emphasize that we give priority to robustness and, well, low‑risk, let's say, actions in the field of implementing this tool. But we can see all of these dangers. So, that's why we've chosen this line of supervision. Also, since we are not acting for money, we don't have to speed up that much. We act to protect consumers, but we also respect our employees. So, employing AI doesn't necessarily mean firing people, but rather using them for, or inviting them to, other activities.

What I would like to add, perhaps I didn't mention it, is that the next step with our mechanism is that it should give a brief substantiation, for example refer to some judgments that we also have in our register, to our previous decisions. This is actually something more complicated, but we have it in our project, and we will try to implement it at some point. We will test it. So, it will give some transparency, I would say, because I'm aware that explaining how machine learning actually decides is a separate discipline, and it's not so easy. So, I think that this substantiation of the machine's decisions will give some idea of why a decision was made, and it will also make it easier for a human officer to take the final decision. So, thank you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot.  And, well, Monika, perhaps, what's your statement and what's your one piece of advice for us? 

>> MONIKA NAMYSLOWSKA: Okay. Thank you. My last statement is a simple one, but not easy to apply, I am afraid. Public administration should bear in mind that artificial intelligence systems and applications for consumer protection are not magical tools. And by all means, public administration must ensure a high degree of reliability and trustworthiness of such systems. And I am convinced that, for instance, interdisciplinary cooperation will lead to increased consumer protection, and this panel is the best example of such an approach. Thank you.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Indeed.  Thanks a lot.  And that's a good piece of advice.  Difficult to implement, but we are working on it.  Jacek, I think, is more confused than he was before.  I am, for sure.  But what about you, Thyme?  What's your advice? 

>> THYME BURDON: Again, I think I'm probably just borrowing words from Monika, but I think trust in AI is key; it cannot be overstated. This goes for both the consumers and the businesses that would be affected by agency decisions or processes. At the end of the day, the AI should be explainable. And as has been said many times, we need to have a human in charge.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot.  Bob, how about you? 

>> BOB WOUTERS: Yeah, I think we really need to start doing the work with AI and really try new things. It is a very hot topic, and there are, of course, a lot of studies going on, but I think it's really good just to go and experiment, and, yeah, try a lot, fail a lot, but then you will also learn.

And I think with the EU eLab, we really try to support the national authorities in setting up a more collaborative way of using it and learning from each other, so that is something that I would also really like to embrace: trying to cooperate in this area. And throughout, we still try to get evidence out of this. If you have the evidence correctly documented, it doesn't need to be a very scary thing, and you can very well explain things to a judge or in court if you have it well documented and are transparent. And still, that human oversight is very important for checks and balances.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks a lot. And that's, indeed, a very nice initiative you are running at eLab. And your words are also very encouraging, because while we are, I think, mindful of the issues that Monika and Thyme have mentioned, I also think that's the way to try, to learn, and to implement it. So, we need to learn along the way and try to do our best, minding those underlying issues. Marcin, now, what would you say? What would you recommend?

>> MARCIN KRASUSKI: I'll be very bold and just give you one recommendation. My company, and people who are way above my pay grade, basically came up with seven principles on AI. I'm not going to read them all to you, but I invite all of you to go to ai.google, where you can learn about them. I think they are common sense, and they could easily be applied to the public sector as well. Ai.google. And we'd be happy to engage with everyone, actually, to implement these in practice. So, please drop me an email and maybe we can cooperate on this in practice. Thank you very much. And thank you, once again, for organizing this discussion.

>> MARTYNA DERSZNIAK‑NOIRJEAN: Thanks, everybody. It was really interesting. I think we have covered all the topics that we wanted to discuss, and we are out of time, but it was very fruitful, at least for me. And a takeaway for me is that there are very important issues that we always have to keep in the back of our minds, but at the same time, we should go ahead, be courageous, experiment, and try to do our best, and we will learn, because we are open to learning. So just wish us good luck, I guess. Thanks, everybody. And I look forward to seeing you in person soon. And thanks a lot for participating in this panel. Thanks. Bye‑bye.