IGF 2022 Day 1 Launch / Award Event #25 The use of AI in the public and private sectors - how do opportunities & challenges compare? White paper on implementation AI-engined system enhancing effectiveness of a public authority

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: So I think we have to start. One of our panelists, the director of the Berkman Klein Center, will join us during the panel. So let's start with a few words to introduce ourselves. My name is Piotr Adamczewski, and I'm a director in the Polish Office of Competition and Consumer Protection, working on different aspects of our capacity, mostly on strategy‑building and enforcement. And to achieve the best enforcement, we are working hard on efficiency. And this efficiency nowadays can only be achieved by using different digital tools.

One of the tools is definitely artificial intelligence. So today, in this session, we would like to present our experience with our implementation efforts, give you some background on the problems and challenges we actually faced and how we solved them, and also present some broader aspects which need to be considered when working with AI.

And the things which still remain to be solved in the future, which we have noticed require further work from our perspective. And today with me is Ewa Sikorska.

>> EWA SIKORSKA: Hello, my name is Ewa Sikorska. I am from the International Cooperation Office in the Polish Office of Competition and Consumer Protection. Today I will be moderating this panel along with Piotr, who will also be a speaker, and I will be rapporteur for this session. I will also be the online moderator, so for those of you online, please send your questions in the chat, if you have any, during the panel; I will pass them to our panelists, and we will answer as soon as we are able. Thank you.

>> PIOTR ADAMCZEWSKI: So I will start with the presentation. So, our project. What we did was work in the form of a project. What we noticed in our organization is that we have enough experience with digital models: we have digitalized most of our registers, and our capacity was sufficient to work on something more.

So we decided to use all of our resources, not necessarily to change the structure immediately, but to give the organization its first opportunity to work with AI on a project basis. We call it artificial intelligence for consumer empowerment.

Can I ask for the next slide, please? OK.

The idea was to recognize how we can actually implement AI, which is not so easy, to be honest, if you are just working as a typical lawyer.

Mostly, what you know is your own organization: what its strong sides are, but also what problems exist there. However, in order to recognize what we could achieve and what we could do, we required a lot of consultation with experts who had already taken part in implementing AI in organizations, or who at least had some experience with it.

So we started the project by inviting a lot of people from academia and from business who already had experience with the technology. We were showing them what we have, what kind of digital registries we have in our capacity, and they were answering our questions about what we could do with all the data we already had.

In most organizations, you have different databases which you can use in different ways, and it's difficult for people within the organization to properly recognize which are good for the process and which are not.

So we worked with the experts on the basis of short workshops: we invited different people from different environments and combined them with people from our own organization.

At first, we thought it might also be possible to work in the format of short hackathons, where, after defining the problem, we could ask our guests to give us some quick solutions, you know, short POCs. But that is quite challenging, to be honest.

The requirements of the technology are not so easy to meet within those short hackathons. So we concentrated on the workshops, and we produced a road map of what can actually be done with the information, with the databases we have in our organization.

And on the basis of those workshops, we decided that we could go in five different directions. One possibility was the creation of an automatic register of complaints, because we have thousands of complaints coming into our organization each day.

Then we thought about using artificial intelligence for the prediction of patterns, which is still a very important subject in consumer protection, and we thought about terms of contracts and other possible solutions. The decision‑making process at that time was quite complicated.

But for the purpose of today's meeting, I would like to focus on three aspects of why we finally decided to go with just one of the projects, and not to build more than one tool at a time.

So, first of all, we thought, "What is most important for us at this moment?" We realized that the organization still required more development, and that it would be better to be involved in working on only one solution, because there is a lot of work which must be done. I will explain later how we proceeded with the database.

It took thousands of employee hours to prepare a proper database. Then, after those meetings with the experts, we realized that the best database we actually have concerns terms of contract.

And at the same time, this kind of activity is the most time‑consuming for offices of our kind. To be honest, it's not very challenging for our employees, so it's much better if artificial intelligence can do this work for us.

So, having all those aspects in mind, we decided that we would focus on a tool for the detection of unfair terms of contract. And we decided to use natural language processing as the technology for discovering unfairness in the provisions.

And at the same time, we were assured that it was already possible, because some technological solutions already existed on the market. Having that in mind, it was reasonable for us to choose this option, which, in the process of implementation, was tailor‑made for us; nevertheless, we were assured it was possible to achieve the outcome of the project, namely an actual working tool.

So we cooperated with a vendor chosen by the organization. We decided not to build it entirely inside the organization, because we didn't have enough data science resources.

In this respect, we felt it was much better to outsource that to a vendor company with much more experience than we have.

And for choosing the company, we decided to go with a contest, which is quite a unique procedure for selecting a vendor. In that kind of procedure, you are not deciding on the price; you are deciding on the quality of the proposed solutions.

So what we had was a database. We were showing our database to companies on the market and asking them, "What would be the best possible way to choose one of you for the production of the tool?"

So this is quite an interesting story about the vendor‑selection process: consulting the market and asking them what the most objective criteria would be for working with one of them. With this approach, we gathered a lot of information: how the market feels about the solution, and how the solutions could be assessed by us for awarding the contract.

And this is pretty interesting: the market actually decided how the contract should be awarded. In our case, that meant preparing a special jury composed of four people: two from our office and two external experts.

And those people decided on the POCs provided by the companies. The companies received a number of entries from our database. So first, in the implementation process, we prepared and structured the database that we had.

So, to summarize this part of my speech: we had around 10,000 already‑detected unfair clauses, and, at the peak of our work on that, 50 people were preparing the structured database.

So they produced this structured data, as I'm showing on the slide. In each case, in each entry, they marked which phrase of the provision is unfair and which phrase is correct, what the sector of the economy is, and what the text of the specific provision is.
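To make that structure concrete, here is a minimal sketch of what one labeled entry might look like, written in Python. The field names and the example content are illustrative assumptions, not the office's actual schema; the justification field reflects the reasoning that, as described further below, was attached to each entry.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabeledClause:
    """One annotated entry: an already-assessed contract provision."""
    provision_text: str          # full text of the provision
    unfair_span: Optional[str]   # the phrase marked as unfair, if any
    fair_span: Optional[str]     # the phrase marked as correct, if any
    sector: str                  # sector of the economy
    justification: str           # reason the term is considered unfair
    is_unfair: bool              # overall label usable for training

# Hypothetical example entry:
entry = LabeledClause(
    provision_text="The seller may change the price at any time without notice.",
    unfair_span="change the price at any time without notice",
    fair_span=None,
    sector="e-commerce",
    justification="Unilateral modification of essential contract terms",
    is_unfair=True,
)
```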

And having all of that, we produced around 10,000 entries in the database. Then, in the contest for choosing our vendor, we allowed the companies access to part of this data, and on that basis they had 30 days to prepare a POC. For us, it was not important what kind of technology they would use; what was important was the outcome of the POC.

So the jury assessed the accuracy of each tool. That made the whole contest easy to understand and kept it objective.
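As an illustration of how such an assessment can be kept objective, here is a minimal sketch of scoring a POC against a held-out set of labeled clauses. The specific metrics and the toy numbers are assumptions; the jury's exact criteria were not detailed in the session.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

def score_poc(y_true, y_pred):
    """y_true, y_pred: 1 = unfair clause, 0 = fair clause."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),  # of the flagged clauses, how many are truly unfair
        "recall": recall_score(y_true, y_pred),        # of the truly unfair clauses, how many were caught
    }

# Each vendor's POC runs on the same held-out labeled entries,
# and the jury ranks the resulting scores (toy data below):
held_out_labels = [1, 1, 0, 0, 1]
poc_predictions = [1, 0, 0, 0, 1]
print(score_poc(held_out_labels, poc_predictions))
# {'accuracy': 0.8, 'precision': 1.0, 'recall': 0.666...}
```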

After all the POCs were produced, the jury assessed which one was the best; all the other parts of the contract were already prepared. The very interesting thing was that no one appealed against the outcome of the contest, because it was so objective. Normally in public tendering, and I'm not sure how it looks in your countries, but in ours, companies mostly appeal against each other in this kind of matter, because they know these are quite attractive contracts worth appealing over, basically playing with the price.

But with this kind of project, we are not asking companies to compete on price; we ask them to compete on the quality of the solution. In this respect, it's quite fair.

Going further with the presentation of what was actually produced, I would like to add some more information: for each entry, we matched the data with the reason why the unfair term of contract is actually unfair.

This is really good because, for further actions, we can assess why the tool detected a given clause and what justification was used for the detection. That is quite important for getting better outcomes from the artificial intelligence and, at the same time, a better ability to supervise the tool: in this manner, we know why a clause was actually chosen.

Now, this diagram shows how the tool is working. You can see two different sources feeding information into the tool. One is the old‑school uploading of documents: scanning, running OCR, and preparing the format for the tool. The second is user upload, which takes information directly from the market. Then there is the assessment stage, where the tool generates the information that is important for us. It works on statistical methods.

It uses natural language processing methodology, where all the words, and whole sets of phrases, are converted into numbers, so the system can work with the meaning of the phrases rather than just matching the exact words.
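The vendor's actual technology stack was not disclosed in the session. Purely as an illustration of this embed-then-classify idea, here is a minimal sketch; the sentence-transformers model, the logistic-regression classifier, and the toy training phrases are all assumptions, not the tool's real components.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Phrases become numeric vectors (embeddings), so similar meanings
# land close together even when the wording differs.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Toy data standing in for the office's ~10,000 labeled clauses.
train_phrases = [
    "The seller may change the price at any time without notice.",
    "The provider may terminate the contract without giving a reason.",
    "The consumer may withdraw from the contract within 14 days.",
    "Prices include VAT at the statutory rate.",
]
train_labels = [1, 1, 0, 0]  # 1 = unfair, 0 = fair

clf = LogisticRegression(max_iter=1000)
clf.fit(encoder.encode(train_phrases), train_labels)

def flag_unfair(phrases, threshold=0.5):
    """Return (phrase, probability) pairs the model flags as likely unfair."""
    probs = clf.predict_proba(encoder.encode(phrases))[:, 1]
    return [(p, round(float(pr), 2)) for p, pr in zip(phrases, probs) if pr >= threshold]

print(flag_unfair(["The company may modify fees unilaterally."]))
```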

And finally, it gives us the results, flagging the phrases which are unfair. So, briefly, how does it look? Maybe the interface is not so nice, but we care more about the accuracy. This is the first part of working with the tool: taking a standard contract template from wherever, from a scan or from the Internet.

Then there is the assessment process: the phrases are shown to you so they can be flagged as fair or abusive, and then it's quite easy for us. We can generate the soft letter almost immediately, because we have already matched the unfair clauses with the reasoning. So the person working with the tool is just confirming the tool's choices, and after confirmation, it can generate the decision of the office.

Of course, there is a challenge regarding supervision over the tool. The most important part in our office is the human being who actually makes the final decision.

And it must be the proper person who decides. We have different employees in the office, but only those who hold the special keys can be the people who finally decide and confirm the decision made by the tool. Otherwise, our database would become quite messy.

If everyone, if all 50 people working on the tool, had the ability to decide on the subject matter, then the database could quite quickly turn into something you can no longer work with. So it is quite important to keep control over the processes, over the machine, but also over the human beings.
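As a sketch of this control idea: the tool proposes flags, but only designated key-holders may write confirmations back into the authoritative database. The role names and structure below are hypothetical, not the office's actual access-control system.

```python
# Only employees holding a "decision key" can finalize the tool's flags;
# ordinary analysts can view proposals but cannot commit them, which
# keeps the training database from drifting.
AUTHORIZED_REVIEWERS = {"reviewer_a", "reviewer_b"}  # hypothetical key-holders

class ConfirmationGate:
    def __init__(self):
        self.confirmed = []  # entries allowed into the authoritative database

    def confirm(self, user: str, clause: str, tool_label: str) -> bool:
        if user not in AUTHORIZED_REVIEWERS:
            return False  # proposal stays pending; nothing is written
        self.confirmed.append({"clause": clause, "label": tool_label, "by": user})
        return True

gate = ConfirmationGate()
print(gate.confirm("reviewer_a", "Seller may cancel at will.", "unfair"))  # True
print(gate.confirm("analyst_x", "Prices include VAT.", "fair"))            # False
```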

So I will stop here and give the floor to Lis who is with us.

>> ELISABETH SYLVAN: Yes. Can you hear me OK? I apologize, I could not turn my video on, but perhaps that will be able to happen later.

So thank you for the very helpful description of the project. I think we can see from this project a variety of the ethical issues that come up when one has a public deployment of technology that includes AI.

One of the things that we think about is how to have such a deployment be fair, be transparent, be responsible. And when we think about projects like these, we might first go to considering transparency, which is absolutely a necessary part of such an implementation.

But it's insufficient by itself. We've certainly all seen cases where technology creators have said, "Look how much we have shared," and have shared all this information about what the system does. But that doesn't mean the system is working for the constituents that the technology serves.

So, in order to have not only transparent but also fair and responsible AI, it's important for additional components to be addressed. For example, as I was implying previously: understanding the needs of the humans using the system, so that they are able to use the tool.

You need to define an ethical model for how you're going to govern that technology, and be held accountable to that model. You have to know and share what the data is, how it's accessed, who can access it, and how it could be combined with other data. Do the data subjects have the right to restrict or revoke use of the data, to have their data be forgotten? It's important to be able to identify existing standards and procedures, or, if none exist, to be able to create them.

What are your ethical performance metrics? Beyond any technical performance metrics, what are the sociotechnical ones, the community‑based ones? How does the system work for the people it serves? What is their understanding and experience of it?

And then, of course, there's a need to address any cybersecurity vulnerabilities and risk management. And none of these are sort of one and done. These are things that need to be built into the entire cycle of development and use, from the design phase to the implementation stage, to deployment and monitoring.

And what you can see here is something that was designed based on ethical principles from the start, with the engagement of stakeholders periodically throughout development. In fact, it was designed with the people who were evaluating the abusive clauses in the first place.

So what we see here is part of what I'm describing: the need to do this kind of work throughout. I also want to talk a little bit about what I mean by standards and procedures.

Sometimes these are required by legislation, sometimes there are industry standards, but oftentimes, in the current state of things, they don't exist yet.

This is when people talk about being able to identify and create audits, and to return to them over time: to continually monitor through the process and through the long haul of deployment.

So I might pause there and return it to everyone else to continue the conversation about the uses of, and best practices for, AI deployed by public authorities.

>> PIOTR ADAMCZEWSKI: Does anyone in the audience have a question, or would you like to share your own experience with implementation? Please let us know so we can exchange experiences. So, for us, this was the first project that we actually did, and what we achieved is one tool which is already working and giving us the necessary efficiency with regard to unfair terms of contract.

But there are still challenges we need to face for further implementations. First of all, we defined four aspects, at least four, which still require further consideration.

And all of them definitely require a more horizontal approach than just one implementation. First of all, for public administration, it's quite important to have a methodology for getting knowledge from the market.

This is what I described as the short workshops with experts. But, to be honest, when we started the project, we thought it would be much easier: much easier to find people who know what they are doing, and to actually run those short hackathons with POC preparation.

It turned out not to be so easy. The world of technology works for business, and for a public administration it is a huge challenge to be attractive to IT people. We had to make a lot of effort to get to work with them.

And finally, we had to be open to the whole market. Only when we were open to the whole market and held this public consultation did we get the necessary visibility on the market.

People in the IT sector knew about us, knew of this interesting project, and could actually take part in it. That's the first issue.

The second issue is working with databases. What we did was just the design and preparation of databases.

But we all know there are problems which are partially, but not fully, solved by legislation. So we are looking at different legal acts like the Data Act. How can we share our data with the market? Who remains the owner of the databases? Who owns the intellectual property rights regarding the creation of the new tools? And how shall we, as public administration, disseminate the knowledge which we have gained?

Definitely, what we would like to have is full transparency, and one of our actions is this panel, where we are showing our work on the preparation of a white paper, which will summarize the information that we can share.

But still, what are the limits? We need to understand very precisely what the limits are for public administration regarding the sharing of intellectual property and, at the same time, of databases, taking into consideration different aspects like the right of the organization to work with what it has prepared.

That's the first issue. The second issue is how we can ensure that there will be no discrimination in our databases, and, after giving access to the database, how to be 100% sure that it is being used for the proper aims.

So those are the challenges.

What are the others? Mostly, what we can see after the implementation process is the importance of working with the whole organization. Everyone in the organization needs to be convinced that this work is really necessary, that it's really good for the organization, and that it will give us more efficiency at the end of the story.

Because when you ask 50 people to work on the database, it is, to be honest, very hard work, and it must be done manually; the computers will not do it for you. If you want a really good and productive database, the only solution is to have people who are engaged, who realize what they are doing, and who have strong support from the leadership of the organization.

So it was our job to keep this process at a high level of leadership attention, because otherwise we would lose the necessary involvement of our co‑workers. That was a huge challenge.

And the last thing I would like to mention at this point is that we still need to decide what the role of these kinds of tools is in the organization: whether they are just assistants, maybe already co‑workers, or maybe something more.

This requires further consideration, but at this point I can say that there is definitely no artificial intelligence deciding over the decision‑making process of humans. That's impossible. The tool can play the role of a co‑worker, but actually it's just an assistant which flags and shows what the problems on the market are, giving us more power to react.

And still there is a strong discussion about the trust we can place in the assessment of the tool. This is actually my question to Lis: what is your opinion on how public administration can rely on the results of AI‑powered machines?

>> ELISABETH SYLVAN: Thank you so much. Well, I think what you see from all the description of our process is actually a nice example of how to ensure that an AI can be deployed more responsibly.

So I think, as I said earlier, working through to ensure that a bunch of criteria are met in the design and development and deployment phase is really important.

And I think, as you said, when you've set that up well, when you've engaged various kinds of stakeholders, as was done with Arbus (phonetic), you have identified performance metrics that, again, are not just technical but have to do with how well it works for the people who are using it and for the people who are the subjects of it.

Then I think you move on to thinking about how to engage stakeholders throughout the monitoring process. And here, depending on what the deployment is and how aware the public is of it, it also becomes important not only to keep track of the things we talked about earlier, but to watch and understand public sentiment, and to be responsive to it.

Here is where people may talk about stakeholder boards. I think those are a great approach if they have a real impact on design, as opposed to being merely symbolic. They're useful if they are multi‑stakeholder: if they include the public audience, the key users of the system, and the designers and developers.

And, of course, it's always important as a tool is rolled out, that there's budget and resources put towards continuously tracking cybersecurity threats, because that isn't something that goes away, but needs to be accounted for in perpetuity for the project.

I think once you've deployed, if you've done the early steps well, you've built up that trust, and you'll have a better system. If a system has been rolled out in such a way that the stakeholders have been involved, they're more likely to be open to the process. Honestly, it's also likely to be less expensive, because if you make errors in the early phase, it's always more expensive to fix things retroactively later.

In addition, you'll probably have a system that works better for people: one that may be more efficient or effective, or have a better user experience, for example.

That doesn't mean that you're all set. Sometimes there are unanticipated outcomes, and you might want to reassess your measures. Still, this will be much easier if you have done what was done with our system, making this part of the design from the beginning, right through deployment, even the procurement piece, rather than putting it together post hoc.

>> PIOTR ADAMCZEWSKI: Responsibility is an important issue, and it is still one of our priorities right now. The deployment was the most important thing for us, I mean the accuracy and the efficiency.

Still, we cannot lose sight of the fact that there must be accountability whenever the tool is making any decision. I mean, not the administrative decision, but just the flagging.

Even flagging is, in a way, a decision made by the tool, so it requires a lot of control.

And for this, we are working on preparing a special procedure for how the tool can be deployed, when it actually opens the files, and who actually decides on the results.

That's an issue we are still working on, and another aspect is the further development of the agents. Now, regarding this, we have one question online.

>> EWA SIKORSKA: The question asked by one of our online participants is, "Could you share your experiences in detecting bid rigging? What should we consider in an AI project on public procurement?" Piotr or Lis, who would like to take the question?

>> PIOTR ADAMCZEWSKI: I can start from the perspective of an enforcer. It's true, agencies are working on different projects. I showed the slide where we defined at least five possible tools, and one of them was bid rigging.

And what I have noticed is different levels of progress in different agencies, as with everything.

And here, what I would always take into consideration is the database, which is (?). With bid rigging, you have the situation that there is a lot of data, because there are a lot of procurement procedures in every country, and they are mostly guarded by public administration, by the Office of Public Tendering.

So the data are there. Still, I'm not sure how digitalized they are; that's the first problem, so they must be fully digitalized. Another problem I see for preparing a well‑working tool is the proper structuring of this data: the data should show something.

Basically, it should show the market processes and flag situations where there is a possibility of bid rigging, and the different kinds of bid rigging. So you need to capture the typical patterns of the actors involved and understand what kinds of patterns the data are showing. It's quite difficult.

I think it's possible, but we are still at the stage of structuring the data in our office.
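As one illustration of what a structured screen over tender data could look like, here is a minimal sketch of a common idea from the bid-rigging screening literature, not the office's actual method, which is still being defined: flagging tenders where submitted bids cluster suspiciously tightly, since cover bidding often produces artificially similar offers. The threshold and the data are illustrative.

```python
import pandas as pd

# Toy tender data: T1's bids are within ~1% of each other (suspicious),
# T2's bids are widely spread (looks like genuine competition).
bids = pd.DataFrame({
    "tender_id": ["T1", "T1", "T1", "T2", "T2", "T2"],
    "bidder":    ["A", "B", "C", "A", "B", "C"],
    "bid":       [100.0, 101.0, 100.5, 100.0, 140.0, 185.0],
})

screen = bids.groupby("tender_id")["bid"].agg(["mean", "std"])
screen["cv"] = screen["std"] / screen["mean"]   # coefficient of variation of bids
screen["suspicious"] = screen["cv"] < 0.05      # illustrative threshold

print(screen)  # T1 is flagged; T2 is not
```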

But having this question, I would immediately mention our future work. We would like to focus now on dark patterns. We think this is also quite challenging, but from the consumer protection perspective, it's really important. So we have taken up this other aspect because of the importance of these actions. Yes?

OK, I will finish and then come back to you. The importance of dark patterns is very high. Plus, it's quite difficult for people to work with dark patterns: they are not so easy to detect, and there are a lot of them.

There are surveys which show that almost 90% of companies working in e‑commerce have one dark pattern or another. Maybe not so severe, but still interfering with the decisions of consumers. So there is a lot of work to do.

But here we have the same challenge as with bid rigging, namely the preparation of the database. We need to figure out what kind of data can be used for the tool: maybe the construction of the websites, maybe some specific words used by companies to push consumers into buying goods and services.
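As a sketch of what one such "specific words" signal could look like, here is a minimal keyword scan for pressure wording on a shop page. The phrase list and category names are hypothetical examples, not a validated dark-pattern taxonomy.

```python
import re

# Hypothetical pressure-wording patterns, one per dark-pattern category.
PRESSURE_PATTERNS = {
    "false_scarcity": re.compile(r"only \d+ left", re.IGNORECASE),
    "countdown_urgency": re.compile(r"offer ends in", re.IGNORECASE),
    "confirm_shaming": re.compile(r"no thanks, i (don't|do not) want", re.IGNORECASE),
}

def scan_page_text(text: str) -> list[str]:
    """Return the dark-pattern categories whose wording appears in the text."""
    return [name for name, pattern in PRESSURE_PATTERNS.items() if pattern.search(text)]

print(scan_page_text("Hurry! Only 2 left in stock. Offer ends in 10:00."))
# ['false_scarcity', 'countdown_urgency']
```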

We also expect that it will take some time, one or two years, to prepare a proper database. With a proper database, we are pretty sure that we can create at least a good proof of concept of a tool we can work with.

So we have a question.

>> Thank you. I was wondering if this white paper that's going to be published will be in English, and also if there are plans to open source the tool. I ask that because, like you said, there are still many challenges, and sometimes, for the tech community to contribute, having it open source is good for getting a broader view of the challenge you're facing.

And also, coming from a data protection authority, this kind of tool seems very interesting, because I can see how, once it's more mature, it could easily be adapted for privacy policies; well, surveillance, oversight, you know? And when you mention dark patterns, that's also an issue we face as a data protection authority. I don't know if any joint work with data protection authorities could be within the scope.

I'm just curious how you see the development of further collaborations that could be useful for making the tool more mature.

>> PIOTR ADAMCZEWSKI: Thanks for that. Yes, definitely, we are working in English. We would like to spread it worldwide, so it should definitely be in English. That's one point.

Second, an open-source tool: not yet. That's a huge challenge we are facing. When we started, we thought no, it's only for our internal purposes. But this opinion is changing, mostly because openness to the market is a good thing. When we were open and transparently showed what we have, the market responded.

And then we got a lot of POCs; otherwise, it would have been quite difficult. When we shared our data, it was possible for the market to work on it; otherwise it's not possible. It's quite difficult to get a good and reliable vendor without openly showing what we have. So we need to think about how we can share the algorithm of the tool which we have already prepared, but that is still quite a legal challenge for us. And finally, on personal data protection.

Yes, there is a lot of overlap between your work and our work, and there are different forums for cooperation. Definitely, there is the ELAP in the EU, which is working on different tools that are supposed to be used further on by different agencies.

But we are also open to cooperation. That's one of the aims of this panel: to look for further work together afterwards.

>> Thank you. Great presentation. Just a couple of things to take care of. On stakeholder engagement and the modeling process of AI: when you mention that, I feel you're closing the loop on opening up this knowledge, as my colleague said about open source. There's a lot that open source could contribute to modeling some of these algorithms. And there's the time scope you gave of two years.

The (?) estimation does not happen in an instant; it comes through a given period of time.

Are there any points at which you are building trust in the AI to cover the human decisions? Because if we say that AI is going to take over human decisions, there is at least a margin of error in the AI's predictive analysis, and some of the results might not come out very well.

And this design that you are deploying has limitations; how are you going to be able to solve some of these limitations in the development of this tool?

Because you have two things: the automated and the manual systems in building this AI engine. Thank you.

>> PIOTR ADAMCZEWSKI: A lot of things which we need to discuss [Laughing]. I'm not sure if we have the time now, but just to briefly touch on what you said.

So definitely, I mean, it's true, there's this artificial intelligence and it should work automatically, but we cannot forget that we need to prepare a lot of things manually. That's for sure, and this takes a lot of time.

And maybe not all people engaged in these processes truly realize that it's human beings who need to prepare the databases. Otherwise, there will be some biases, there will be some discrimination, that's for sure. Even with preparation by human beings, some problems can still be discovered.

Open source: this is very important, but it is still a challenge for public administration. You know how we work: it's always like a kitchen, and we show only the results of our work. So it's in our nature, and there are also legal limitations. How can we share what we have inside?

This is one of the points we need to consider further. I think we will discuss it after the panel, as we are at our time limit. So maybe I will give the floor to Lis for some final thoughts on the panel.

>> ELISABETH SYLVAN: I think those final questions point at the last thing we might say, which is simply that over the coming year we plan to work together on a series of workshops where we engage with these kinds of issues, looking at RBUS (phonetic) as an example, but also looking at other people's examples of this work.

Our hope is that, if you're in this room, you may be thinking about these issues and may be deploying or developing an AI technology. If you are in such a position and are interested in such a conversation, we hope you'll reach out to us to see if it might be possible for your project to participate. You could speak to us after the panel, or reach out via e‑mail as well.

We would love to be in conversation with you to see if this would be a useful thing to do together. And thank you so much for the attention. And apologies for the technical difficulties at the beginning of the program today.

>> PIOTR ADAMCZEWSKI: Thank you, Lis. Thank you, everyone. And I invite you to further discussion after the panel. Do we have one last comment from the audience?

>> It is not a question, just curiosity about the explanation of dark patterns. What do you mean: is the data unstructured or structured?

Is it not (?) by gender, or is it heterogeneous data? And a second question: in the processing diagram you shared, there is manual entry of text and automatic entry of text. What tools does your project use to do the feeding of the data automatically, considering machine learning? If you use machine learning tools, it would be better if you could explain them.

And the third one: once you launch the project, do you have periodic evaluation of it? What indicators do you use to evaluate its capabilities periodically? And I am not talking about the pilot.

>> PIOTR ADAMCZEWSKI: Thank you for the questions regarding dark patterns. Here I can only say that we have just started the project, so we still need to decide internally about the data. And yes, I think this actually summarizes what Lis said and what we have here in the interaction with you.

The next workshops could definitely concentrate on the preparation of databases for different actions. And one of them definitely is ‑‑ (no sound)