IGF 2023 – Day 4 – Open Forum #161 Exploring Emerging PE³Ts for Data Governance with Trust – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> CHRISTIAN REIMSBACH-KOUNATZE: I would say our speaker has arrived, and we can start the session. Welcome to this IGF session on privacy-enhancing, empowering and enforcing technologies. My name is Christian Reimsbach-Kounatze, I'm in charge of privacy and data governance, and today we have an interesting set of different speakers who will talk to us about, essentially, the role of technologies for enhancing privacy and data governance with trust.

We will not only talk about classic privacy-enhancing technologies such as, let's pick one, homomorphic encryption, but we will actually have a broader discussion about the role of digital technologies in going beyond being the problem when it comes to privacy, to becoming a solution, or being used as a solution. And what are the challenges related to that? So we have different speakers who will each make an intervention. We will start, indeed, with the role of privacy-enhancing technologies, but then we will move to broader discussions.

Without further ado, I would like to invite our very first speaker, from the UK's Data Protection Authority, to make her intervention. I will let each of you briefly introduce yourselves, because that's a little bit quicker than going through each of you individually.

So let's start with the very first presentation. Clara, the floor is yours. But maybe very briefly, if I may: the idea in terms of the run of show is to have a series of interventions by our speakers.

They have roughly seven minutes each. After that, we will have a first set of questions and discussions, and we will open the floor to the audience. We may also have a second round after that. So be prepared. Clara, the floor is yours: introduce yourself very briefly, talk about the ICO if you want, and then go ahead with the subject matter.

>> CLARA CLARK NEVOLA: Can I just check, are the slides showing well?

>> CHRISTIAN REIMSBACH-KOUNATZE: It's showing pretty well.

>> CLARA CLARK NEVOLA: So my name is Clara Clark Nevola, I'm joining you from the UK this morning, well, my morning and your afternoon. I will be talking about privacy-enhancing technologies. I will first introduce my role and the role of the Information Commissioner's Office.

I will look at the way the ICO sees privacy-enhancing technologies: basically, as a tool to enable data sharing. If you are not familiar with the Information Commissioner's Office, we are the UK's independent data protection authority. We are independent of government, but publicly funded.

We produce guidance. We take enforcement action. We provide advice and support for organisations and members of the public, and we also engage with governments and other stakeholders on advancing policy positions in this area. Within this, I work in the technology policy team, and our role is to anticipate, understand and shape how emerging technologies impact people in society. And that's very much how I have approached privacy-enhancing technologies. Maybe the first question is: what is a privacy-enhancing technology? While it's important to understand how they work and what they are, I think it's more interesting to ask what privacy-enhancing technologies actually do. It's a vague term; it covers multiple disparate technologies. I see it as a toolbox, and rather than explaining what a hammer or a screwdriver is, I would start from the job to be done. If you have some furniture that you need to assemble, how do you put it together? The question is not so much how you make a screwdriver, or what the technical components of a screwdriver are; the screwdriver is what allows you to screw the two pieces together. With that optic, that's how I would invite you to look at privacy-enhancing technologies.

So instead of focusing straightaway on the tools, I will explain what the problem is. What is the furniture we are trying to assemble? Broadly, the problem statement is that data sharing is difficult. There are lots of different scenarios in which data sharing has challenges, and these challenges are sometimes data protection law, but in many cases they are much broader: they will be reputational, commercial or organisational barriers.

Typical scenarios of data sharing involve two or more organisations sharing data; for example, we want to see what the overlap between patients and social services users is. Then a common scenario is publication of data, releasing outputs or information to an audience or the public at large. Then we have putting multiple databases into one, where one organisation ingests data from several sources. You might think of a local government wanting to make improvements to the roadways, and they need data from the police and others.

Another typical scenario is when an organisation uses an external provider to host data, and they may need to be sure that it's particularly secure. So these are the sorts of problem statements we have, with the various tasks to be done. Now I will move on to explain the tools to be used, and this is where the privacy-enhancing technologies come in: what are they and what do they do?

For the first scenario, we grouped together the types of technologies that would be useful in that scenario. I won't dwell on them in detail given the time constraint, but I'm happy to go over them later if anyone has any questions. I will give a brief overview. Homomorphic encryption allows computations to be performed on encrypted data without the data first being decrypted, which keeps the data more secure. And zero-knowledge proofs refer to cases where one person has to prove something to someone else, typically that you are above a certain age, that you are eligible to drive a car or to purchase alcohol. Instead of revealing the underlying data, you can just prove that you are over whatever the threshold is.
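To make the encrypted-computation idea concrete, here is a minimal sketch of the additive homomorphic property, assuming the open-source python-paillier package (`phe`); the two organisations and their counts are purely illustrative.

```python
from phe import paillier

# One party generates a keypair; only the private key can decrypt results.
public_key, private_key = paillier.generate_paillier_keypair()

# Two organisations encrypt their counts before sharing them.
enc_a = public_key.encrypt(120)   # e.g. patients known to organisation A
enc_b = public_key.encrypt(85)    # e.g. clients known to organisation B

# A third party can add the ciphertexts without ever seeing the plain values.
enc_total = enc_a + enc_b

# Only the key holder learns the combined result.
assert private_key.decrypt(enc_total) == 205
```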

For publication and ingestion, there are two techniques. Differential privacy is a way to prevent information about individuals being revealed or inferences about them being made.

It adds noise to records and measures how much information about a certain person is revealed.
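As a rough illustration of that noise-adding idea, here is a toy Laplace mechanism for releasing a count; the epsilon value, the sensitivity of 1 and the example count are assumptions for illustration, not a production implementation.

```python
import math
import random

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                # uniform in [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon means more noise and less revealed about any one person.
print(noisy_count(true_count=205, epsilon=0.5))
```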

Synthetic data, on the other hand, is essentially artificial data which replicates the properties of the real data. You have the real data set, and synthetic data is a data set that maintains the same properties but is not the real underlying data, so it anonymizes. And then, not finally, but next, federated learning. Federated learning is very useful for ingesting data from multiple sources. Typically you would need to move the data across to a central hub. So imagine you are developing a tool for medical imaging: you would need to collect all the medical images from a whole group of hospitals to have a large enough data set to train the model that you are then going to use to analyse these images. With federated learning, you avoid the need to move the data across: you train a model locally and then bring together centrally the improvements in that model. So it really reduces the need to share that data.
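A minimal sketch of the federated-averaging idea described here: each hospital trains locally, and only model updates travel to the centre, never the images. The hospitals, the "model" (a short list of weights) and the fake local update step are all hypothetical placeholders.

```python
from typing import List

def local_update(global_weights: List[float], local_data) -> List[float]:
    # Placeholder for training on data that never leaves the hospital.
    return [w + 0.01 for w in global_weights]

def federated_round(global_weights: List[float], hospitals: List[str]) -> List[float]:
    updates = [local_update(global_weights, h) for h in hospitals]
    # The central server only ever sees model weights, averaged together.
    return [sum(ws) / len(ws) for ws in zip(*updates)]

weights = [0.0, 0.0, 0.0]
for _ in range(5):                           # five federated rounds
    weights = federated_round(weights, ["hospital A", "hospital B", "hospital C"])
print(weights)
```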

And then finally, trusted execution environments: a secure environment that mixes hardware and software and allows data to be isolated within a system.

So that's a whistle-stop tour. I also want to talk about our guidance; if you would like more detail about anything I have talked about, I would highly recommend you read it. We focused on the link between these technologies and the benefits to data protection: how privacy-enhancing technologies can enhance data minimization, data security, and data protection by design and by default. We provided explanations of each item, the use of the tools and how they support compliance with the law, to help both decision makers in organisations and developers of technology. That's a flavor of what the guidance contains: when should one use a tool, how should you use a tool, and how will it help? We also provided examples of scenarios in which PETs could be appropriate.

I'm sure we will talk more about the risks and benefits, but I think it's important to note that these really help with data sharing and data reuse, although this is still a relatively emerging field. There are some great examples of them being used in practice, but they are still relatively new. And so I'm going to finish up by saying that there are still a few challenges to solve, as I mentioned.

So, you know, there are great screwdrivers, but they haven't yet made an electric screwdriver, and there are still issues to understand: how can we match up well the users of privacy-enhancing technologies with the developers? How do you bring in the expertise, how can technical standards be developed in this area, and how can costs be brought down? And so that's my introduction to privacy-enhancing technologies. I will hand it back to Christian.

>> CHRISTIAN REIMSBACH-KOUNATZE: Before we move on to the next presenter, I just want to provide a little bit of context on why we started with Clara's presentation, because I realized that I may have missed clarifying that point. The reason is that privacy-enhancing technologies have traditionally been looked at as essentially the first kind of approach and set of tools, if you want: when you ask people to think about the role of technologies and how they can protect privacy, people look at privacy-enhancing technologies, and as Clara's presentation has highlighted, this has definitely evolved. There are new types of privacy-enhancing technologies that she addressed. And maybe, if I may ask you one question, because it also opens things up a little bit: why has the ICO decided to look into this and to publish the guidance? If you could elaborate on that a little bit before we then move to our next presenter, who is sitting next to me.

>> CLARA CLARK NEVOLA: Of course. So we have long been advocates for responsible data sharing, and that's something organisations tell us about: no matter how much we say data protection is not a barrier to data sharing, there are always challenges, and a lot of the challenges are not so much legal but more organisational and business-wise, in the sense that you would have a data set and you would not want to share it because you don't know what's going to happen to it afterwards. With privacy-enhancing technologies, you can massively reduce that risk. I was talking about homomorphic encryption: if you hand over a data set to a third party, you don't know how they will use it. If you use homomorphic encryption, there's a limit to the queries; it's only used for a pre-approved set of things. So it's exciting and useful for developing data sharing.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you.

I think now is a good time to move to our next speaker. I will also ask you to introduce yourself. But maybe as context, why you are next: essentially, we thought that everybody probably knows the Mozilla Foundation. They use privacy-enhancing technologies, so it's a good illustration not only of the potential of privacy-enhancing technologies but also an example where every one of us is potentially interacting with these kinds of technologies. So, again, please introduce yourself. Maybe you want to talk about the Mozilla Foundation as well.

>> So hi, I work with the Mozilla public policy team, where I'm the head of global product policy. My job is to work with internal technical experts and external regulators and lawmakers to help them understand the consequences of regulation, as well as ways in which regulation could be improved to further Mozilla's mission. And Mozilla is a unique organisation because we are, of course, known most for our browser, but we are a corporation owned by a foundation. The Mozilla Corporation has a single share that is owned by the Mozilla Foundation. Most of the typical incentives that apply in the technology sector do not apply to us: shareholders, the drive for profits, which at some level, we believe, are responsible for some of the most egregious practices when it comes to data collection in this space.

The reason that context is particularly important is that when Mozilla started the Firefox browser, there was a strong policy of simply not collecting any data at all.

And usually when organisations say that, they are actually talking about user data. So, for example, even today, Mozilla's browsing history sync is end-to-end encrypted, which means if you have history, say, on your desktop and you are accessing it on your phone, the only two places where that exists in unencrypted form are on those devices. Mozilla doesn't have access to that.

Fifteen years ago we didn't collect any telemetry, and this came both from those strong privacy credentials and the idea that we would not collect any data at all. But ultimately we realized, as we became a more popular browser, that for a product used to access hundreds of millions, in fact billions, of websites around the world, not having access to any telemetry would mean we could never make a product that would serve our users, because that telemetry was used to detect which websites were breaking and which websites were throwing compatibility errors, so that we could resolve that and make changes in our products to help make sure they don't happen again. And that is the period when we built privacy-preserving ways of collecting this information, which means separating the who from the what.

That separation for us has been a long journey. It has crystallized around three issues, and those are the three examples I will talk about to explore Mozilla's thinking and how we react to developments taking place in the external world. The first is that there has definitely been a recognition that the proliferation of Internet availability, bandwidth and connectivity, along with computational power, has made privacy-preserving technologies feasible today that were not available or not as feasible a few years ago. The second is that privacy, post-2014, because of laws like the GDPR, actively became a differentiator between products, and people are choosing products because of privacy.

So the net investment that is coming into this space and these technologies has increased. And finally, and this is related to Mozilla and developments that are happening: the Chrome Privacy Sandbox technologies, which have gotten a lot of attention over the last couple of years for attempting to do all the parts, targeting, attribution, in a more privacy-preserving manner. Mozilla has been one of the biggest and most open critics of some of these technologies, because we think that while they are better than the current practices enabled by the third-party cookie ecosystem,

Many of the claims that they make still require some work and those are the things I will talk about.

On the first piece, which is Mozilla's own practices: there are three things that we have been involved in that are now almost done at the IETF. One is Oblivious HTTP and another is DAP, the Distributed Aggregation Protocol. Both of these standards essentially work by sending data in a manner where there is a proxy in between that separates where the data is coming from, from what the actual substance of that data is. For the individuals in the room, if you use Apple's Private Relay service, it works in a similar manner, so that even Apple does not know either your DNS lookups or your browsing history, because the traffic is first sent to a proxy, where the proxy strips the information about where it's coming from, and then it's sent to the destination ultimately.
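A conceptual sketch of that "separate the who from the what" split: the relay learns who sent a report but cannot read it, while the collector reads the report but never learns who sent it. Fernet symmetric encryption from the `cryptography` package stands in for the public-key (HPKE) encryption the real protocols use, and the parties and IP address are invented.

```python
from cryptography.fernet import Fernet

collector_key = Fernet.generate_key()        # in reality: the collector's public key
collector_box = Fernet(collector_key)

def client_submit(report: bytes) -> dict:
    # The client encrypts its report so only the collector can read it.
    return {"sender_ip": "203.0.113.7", "payload": collector_box.encrypt(report)}

def relay_forward(message: dict) -> bytes:
    # The relay knows WHO sent the message but cannot read WHAT it says.
    return message["payload"]

def collector_receive(payload: bytes) -> bytes:
    # The collector reads WHAT was sent but never learns WHO sent it.
    return collector_box.decrypt(payload)

ciphertext = relay_forward(client_submit(b"site example.org failed to load"))
print(collector_receive(ciphertext))
```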

Mozilla is actively exploring ways in which we could use these technologies in order to collect telemetry information, and we expect to make some announcements in this regard in the coming weeks and months. There's a lot of progress, but one of the things that has actually held us back, I would say, is that the number of players in the ecosystem willing to engage with these technologies is still actually quite limited, both on the demand side, meaning how many players actually want to collect telemetry in these privacy-preserving manners, and on the supply side, where, as you can imagine, the dynamic of more suppliers, more competition, lower prices has not happened yet. That is despite the fact that, in comparison to some of the more complicated and possibly more promising technologies like homomorphic encryption, these are much, much cheaper. And it's not actually the technology that's holding back the deployment of DAP or Oblivious HTTP; there are just few people who provide the infrastructure, even though these are, relatively speaking, much easier to deploy.

On Mozilla's own thinking about developments in this space, I would say, when it comes to the evolution around targeted advertising that's taking place, it's almost certain now that the only browser in the market that still allows, or has not disabled, third-party cookies is Google Chrome. And the pressure that Google has been subject to by privacy advocates and regulators is high.

There's now the privacy sandbox technologies that attempt to do what the current advertising ecosystem does, in a more privacy preserving manner.

What Mozilla has said on this more broadly is that we support the idea. We support the concept, and the idea exists because Mozilla, for example, does not block ads by default in Mozilla Firefox. We believe that advertising is a valid way to support publishers on the Internet. We do think that the current state of the advertising ecosystem is absolutely unsustainable, and that's the reason we block trackers. That's the reason we block fingerprinters and all the underlying infrastructure, including third-party cookies, which are actively harmful to user privacy and security, and we have done a lot of technical work in the last couple of years in order to implement that.

The biggest one there is TCP, or Total Cookie Protection, which creates jars of information. When you visit a website, say The New York Times, and you can like it on Facebook or share it on Facebook, Facebook actually gets the ability to drop a cookie onto your computer that will also note the fact that you have been to nytimes.com, that you have been to Instagram.com and to washingtonpost.com. With Total Cookie Protection, Firefox instead creates jars: for each website you visit, there's a jar where the cookies for that website and many other identifiers are dropped, and these jars cannot talk to each other.

So that's a way of limiting the harm of the ecosystem while giving users the ability to gain from the benefits of third-party cookies, because we use heuristics to determine whether this is an advertising third-party cookie or a third-party cookie that enables single sign-on.
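A toy sketch of that cookie-jar partitioning: third-party cookies are keyed by the top-level site being visited, so an identifier set while reading one site cannot be read back from another. The site names are just examples, not how Firefox stores cookies internally.

```python
from collections import defaultdict

# (top-level site, third party) -> cookie name -> value
cookie_jars: dict = defaultdict(dict)

def set_cookie(top_level_site: str, third_party: str, name: str, value: str) -> None:
    cookie_jars[(top_level_site, third_party)][name] = value

def get_cookie(top_level_site: str, third_party: str, name: str):
    # Lookup is scoped to the current top-level site, so cross-site
    # recognition through the same third-party cookie fails.
    return cookie_jars[(top_level_site, third_party)].get(name)

set_cookie("nytimes.com", "facebook.com", "uid", "abc123")
assert get_cookie("nytimes.com", "facebook.com", "uid") == "abc123"
assert get_cookie("washingtonpost.com", "facebook.com", "uid") is None
```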

And as we develop these technologies, it's possible to give users a good, balanced experience between those two things: not having tracking, but supporting publishers if they choose to do so, while also giving them the option to go to Mozilla and download an ad blocker.

Finally, and I know I'm at time, on the Google Privacy Sandbox piece: what we have said is that right now there's a very serious risk that the standards and the technologies in the Google Privacy Sandbox will become the de facto way in which large parts of these activities are carried out on the Internet. We think that's a privacy concern, but more importantly a competition concern. It's the interplay between privacy and competition, where traditional advertisers who are not Google don't like those technologies.

It means Google's own technology will become more valuable. And privacy advocates say it doesn't go far enough. Everyone is unhappy with the state of play. If these standards are going to be deployed, and they are, Google has announced that it will stop supporting third-party cookies, we think this should happen in standards bodies. There is a process in standards bodies like the IETF that vets and validates these, including for the potential for interoperability.

In a world where more than 60% of the individuals who use the Internet are running on a variant of Chrome or the Chromium browser engine, these technologies have a strong ability to shape what the future of the Internet, advertising and tracking will look like. While they are privacy-enhancing technologies, if privacy-enhancing technologies like them are adopted at the scale at which they will be adopted, they need a lot more scrutiny than they have received so far. That is why we advocated with the competition and markets regulator in the UK and also engage with many other regulators around the world, both privacy and competition, advocating why these processes need to be better, and we are having conversations with Google as well.

With that I will end and I'm happy to answer any questions.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you very much.

I think you raised quite a number of points that we will definitely need to come back to during our discussion. One of the points, if I may, because it actually opens the door for the next intervention to some extent, but let's say broadly for all of us, is the question of the difficulties related to validating technical claims and what that actually means for the selection of technologies, and also for policymakers and regulators that are trying to promote privacy-enhancing technologies but also interoperability. I think this is a topic I would like us to discuss. But what I also found interesting was that you were talking about the current state of the advertising ecosystem and highlighting that there are obviously some challenges, and I think our next speakers, starting with Max and then Stefan, will address exactly that state. But what is more interesting, and this is really why I look forward to their presentation, is that they are talking about a different role of digital technologies for supporting privacy, which is the enforcement side.

Because, interestingly, you talked about how a lot of those technologies have gained higher adoption thanks to the GDPR.

So we have a legal regime in place, but apparently we will hear what is happening with cookies. I will give you the floor, Max, and I understand you will co-present with Stefan, so I will let you manage that between the two of you. The floor is yours. Introduce yourself and what you do.

>> MAXIMILIAN SCHREMS: Thank you. I'm just going to do the introduction myself. Stefan is the developer who works on a lot of these things, so we can maybe get out of the policy-only discussion and have some hands-on discussion. That's what Stefan is here for. I will run through our presentation. Fundamentally, at NOYB we do different enforcement projects. We do deep dives if there's a big legal issue. And then there are mass violations, where the GDPR is, I usually compare it to speeding: it's not a big complicated legal case, it's not an overly dramatic situation, but we see mass violations where people just basically violate the law in masses.

In the digital community, we are still working on most of that in a rather analog way. Typically, when lawyers work on digital issues, it usually only gets as digital as Word, and that's about it. So the idea was: if we have hundreds and hundreds of violations, we have to speed up, especially as we are a smaller organisation running mainly on donations. So we have to be efficient in what we are doing as well, which is a similar issue for governments.

What we thought about, on how to approach all of that, is a bit like a speeding camera. I can tell you from an Austrian perspective, if you speed in Austria, typically your license plate is read by the speeding camera, the speed is automatically calculated and automatically transferred into a ticket, and you get a code to pay the fine. There's no human intervention in any of these legal procedures anymore. They are fully automated, and that's basically how standard violations are handled in other areas of law. It's inefficient to have people do that.

We thought to take that thinking and apply it especially to web technologies right now, and in future plans it could also be used for mobile technologies. The idea is to come up with a multi-step system that allows us to generate complaints automatically, manage them automatically, and settle cases with the companies automatically, without the need to send hundreds of emails back and forth. This all sits in the background, basically a MongoDB database, and I'm just going to go very roughly through the steps of how all of this works to make it a bit practical.

What we started with is OneTrust, which is the biggest provider of cookie banners. The standard cookie banners you see in the European Union are done by four or five service providers. Websites usually don't have their own cookie banner; they usually use one of these services. That allowed us to scale up, because we know thousands of websites are using exactly the same service for their cookie banner. And OneTrust has a JSON configuration where the settings are stored, and a computer can read it quite well, because there is something like a "show reject all button: false" setting, meaning it doesn't show the reject button on the first layer, and you can tell from the JSON file whether it's there or not.
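A hedged sketch of the kind of automated check being described: read a cookie-banner JSON configuration and flag sites whose first layer offers no reject option. The field names here are hypothetical stand-ins, not the actual OneTrust schema.

```python
import json

def first_layer_violations(config_text: str) -> list:
    config = json.loads(config_text)
    findings = []
    if not config.get("showRejectAllButtonOnFirstLayer", False):  # hypothetical key
        findings.append("no reject-all option on the first layer")
    if config.get("preTickedPurposes"):                           # hypothetical key
        findings.append("consent purposes are pre-ticked")
    return findings

sample = '{"showRejectAllButtonOnFirstLayer": false, "preTickedPurposes": ["ads"]}'
print(first_layer_violations(sample))
```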

OneTrust provides an interface where the admin can change that, so we took screenshots to explain to the companies which buttons they would have to fix to make sure they comply with the GDPR. That was kind of the system: basically the back end and the technological way these settings are saved, which we can collect automatically.

We did a first kind of code search. There's a website called publicwww, where you can search source code like you would search on Google, and you can see which software a website is using. You get a list of all the websites that use the OneTrust cookie banner, so we can focus on the websites that actually use it and don't have to scrape the whole web for random pages.

What we then have is that we first auto-scan the website to see if there are any violations, and then we manually run through the website and check it. We had a two-screen setup, usually with a test environment, which was a virtual machine, on one side (we are changing that), and a management interface where you can manage the case yourself on the other. We need to do that also because, under the law, we need to have a data subject, someone that's directly concerned, to bring a case.

All of that basically gets you a big fancy list where you can filter the cases, take a case and do your assessment. We only file if both the human and the computer basically decided it's a violation. So it's a "two have to agree" kind of system, to make sure that there's a low error rate.

Once you have done that, we basically auto-generate a complaint, which is text blocks that generate a PDF, where you have certain elements that are filled in automatically and certain elements that turn on and off depending on what violations were found on the website, from the JSON file, and that is what you then file.
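A minimal sketch of that generation step: reusable text blocks are switched on or off by the violations found in the scan and filled with case details before being rendered into a document. The block texts and field names are invented for illustration; the real system feeds a PDF template.

```python
TEXT_BLOCKS = {
    "no_reject_button": "The first layer of the consent banner on {domain} offers no option to refuse consent.",
    "pre_ticked": "Consent purposes on {domain} are pre-selected, which does not constitute valid consent.",
}

def build_complaint(domain: str, violations: list, complainant: str) -> str:
    parts = [f"Complaint by {complainant} concerning {domain}:"]
    parts += [TEXT_BLOCKS[v].format(domain=domain) for v in violations if v in TEXT_BLOCKS]
    return "\n".join(parts)

print(build_complaint("example.com", ["no_reject_button"], complainant="A. Data Subject"))
```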

We typically then sent that to the individual company first. One of the biggest issues is that we have to make sure they don't think it's spam; if you tell people there's a legal procedure against them, most people will just throw it away. We even tried to use some of the systems that the big companies use: they typically use A/B testing to figure out which type of interaction works. So we A/B tested that as well and saw that for different types of emails we sent to the company, we get a higher or lower compliance rate. So we thought: if companies can manipulate users into clicking the yes button, we can "manipulate" companies into complying with the law. We even said we have a full guide on how to be fully compliant, so it was served on a silver platter to have that done.

If companies actually decided to comply with that, they could go to a platform where they could log in with their case number and password, and they were able to let us know that they were fully compliant and had fixed the problem. We were then automatically able to scan and verify that, and also, from a lawyer's perspective, we were able to get the feedback from the companies in an automated manner. We didn't have hundreds of emails with law firms sending endless text.

Now, what is super interesting is to look at that from a statistical point of view. We did a first version, and that's pretty much what I showed you, in more of a duct-tape technology version. We did a first test and saw how well it worked.

What was interesting, first of all, is that we had a 42% compliance rate just by sending the companies an email with specific instructions on what is legal and what is not legal, and what further action would be taken if they were not compliant. That was already a huge number; that's better than what we get from the data protection authorities. It was really interesting that we had such a good compliance rate here. The second thing that was interesting is that it depended on the violation. I won't go into that, but how good the compliance was differed by the type of violation.

Only about 18% were fully compliant; others fixed some of the violations, so the roughly 40% is the total figure. The really interesting thing was the domino effect that came out of it. Typically in law we do not go after every person; not everybody that's speeding gets caught, but we intervene often enough that people feel that, oh, speeding can actually be a problem. And what we saw is that we scanned about 5,000 pages and only sent an email to about 500. When we continued with the rest, we suddenly saw that hundreds of other websites had all fixed their cookie banners even though we never intervened with them. What happened is that the companies understood there is an enforcement action going on, they heard it from a client or a software provider that sent things around, and we saw a huge amount of compliance without intervening. That's the idea of general deterrence that we have in other areas of law, which works well when you can speed things up and be a credible threat.

Now, to wrap it up, we upgraded this to become a long-term project, and it is Stefan's main job right now to get all of that into a structured and nice-to-use form. We do it in a way that the authorities can use it in the future. What we added is basically a bigger admin panel where you can manage all the cases, and we made it much more modular, so you can go back and forth where it used to be more linear.

That added a lot of options to attribute cases better, and we can say we only bring certain cases from now on.

The other thing that we basically did here is that we upgraded a lot of individual functionality. The first version was only cookie banners, but you can use this for tracking pixels, for some script, anything else, and you can basically plug modules into the software and take them back out. That's fundamentally what will make a lot of difference, and the rest is making the interfaces usable for an average lawyer. Those are the elements that we are working on right now.

For us, that was really one of the most useful projects we have done, especially considering the input and output ratio, and it really moves enforcement forward. So on that side, I think it's a very interesting approach, in the sense that we're working in a digital sphere but still follow pretty analog procedures, and we could probably learn from a lot of other areas on how we can do that better.

So thanks for that, and if there are questions, especially technical questions, Stefan can jump in on all of these.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you very much, Max. I have probably one brief question, if you could elaborate on it, because it will actually be a good transition to the next speaker. I think you mentioned that you had talked to Data Protection Authorities. Could you briefly say what kind of feedback you received on that? And how high is the interest among the Data Protection Authorities to implement this kind of tool and these processes?

>> MAXIMILIAN SCHREMS: I think on a personal level, it's a mix of fear, because many have never seen that different world, and high interest, in the sense of: how can we be efficient in our work and also get rid of useless work for employees?

A lot of these are tiny things, very trivial. You don't need a lawyer for a lot of that. One element that I have to mention: the quality usually gets better.

If you have a one-time template that was approved by the more senior people, you know that what you are doing here is going to produce good results.

If you have a junior person drafting each case, there is a chance that something will go wrong or gets overlooked. However, the big problem in reality is that you need to implement all that. You need to have programmers and people who understand it, and you need the management skills to find the right cases, because this doesn't work for every case.

A big thing for us was to not get entangled in details: we are only doing these two things, and anything else we just ignore for now. That's a bit of a culture change, to say we go for this one topic, we do that well and quickly, and the next time we do the next topic, which is a very different approach from the usual one where you do everything.

>> CHRISTIAN REIMSBACH-KOUNATZE: I noted that. It's a good topic for the later discussion, because you mentioned the word "scale." This is one of the common themes when it comes to using technologies for addressing privacy problems: we have a potential solution, or support for a solution, or part of the solution, that basically helps us scale with the problem, so to speak. But we will get to that point hopefully.

Now it's my pleasure to give the floor to the European Data Protection Supervisor. And I guess there is one particular question, Wojciech, given that you are following Max's presentation: to what extent are these tools relevant for your agency, but also for your colleagues' agencies? And maybe also talk about the technology more generally.

>> WOJCIECH WIEWIOROWSKI: Thank you very much. Thank you for allowing me to talk to you even at such an early hour from Brussels. A good morning from Brussels.

The European Data Protection Supervisor, I guess most of you are familiar with it, but for those who hear for the first time about the complicated system of privacy governance in Europe: the European Data Protection Supervisor is the data protection authority for the EU institutions, bodies and agencies. I'm not the super data protection commissioner for all of Europe, but the commissioner for the EU bodies and EU institutions.

At the same time, we have 27 Member State jurisdictions and 27 national data protection authorities. Anyway, what is more important for today's discussion is not the supervisory role towards the EU institutions, but that we take part in the legislative process in the European Union, and we are the Secretariat for the European Data Protection Board.

I'm not speaking in the name of all of these authorities, but I can somehow convey the approach that we have among the Data Protection Authorities.

Well, it's a good idea to put me just after Max, because I can react to what he said about the work he does in the market. There are a lot of Data Protection Authorities who are interested in practical solutions similar to the ones that NOYB uses. For some data protection authorities, it's strange that an NGO, the civil society movement, can do the things which are called enforcement. Actually, this is enforcement; that is the way to make things run. And I would also say that what Max described is similar to what the data protection people did before. But coming back to the main point of discussion.

It's true that the various tools preferred by NOYB are things that should exist in most of the Data Protection Authorities, especially those that have a really independent IT structure from the other institutions.

As Data Protection Authorities, we rather try to deal with things by way of legal instruments and guidelines, but it's true that some of the Data Protection Authorities do have their laboratories and they do have their IT teams preparing tools. We try to do it because we still remember that there's a kind of limit to the legislative actions that we can take.

Making more law does not necessarily help. The point at which we are in the European Union is that we have the law, and the law is not bad. The thing is that we have to operationalize it, also by promoting the role of IT architects and promoting a comprehensive privacy engineering approach.

So that is something that lies at the roots of our strategy as the EDPS, both for the strategy of this mandate and for shaping the new strategy for the new decade. As one of the pillars we put the tools: we are going to use existing tools and we will develop new ones. It's not easy for all Data Protection Authorities to create a laboratory where these tools are really produced, but authorities like the ICO, like the CNIL, like the Canadian authority, like some of the German authorities, are ready to do it and are ready to prepare their own tools.

What we do as the EDPS, besides tools related to remote control and remote audits, is try to organize the community. We have IPEN, the Internet Privacy Engineering Network, which tries to prepare solutions, to discuss them, and to disseminate information about different solutions which are developed by different organisations.

We try to make use of the fact that the European Union, that's 70 institutions, which have their own achievements in this field.

And let me here just give two examples of such solutions, which are both coming from Eurostat, the statistical office, the agency which is delivering statistics in the European Union. They are both also given as examples in the current guide on privacy-enhancing technologies for official statistics, which has been produced by the United Nations.

The first one is longitudinal mobile network operator data, where Eurostat has developed a proof-of-concept solution with a technology provider. The main goal of this project is to explore the feasibility of privacy-preserving processing of mobile network operator data. The technology used for the project was a trusted execution environment, where processing happens in isolation; it has been delivered by the market. It's not only that Eurostat is deploying it, they are also, let's say, localizing it.

And once again, this is a situation in which Eurostat is trying to localize, on the IT infrastructure of the EU institutions, a solution which was prepared for the market. These are the things that we develop, these are the things that we try to promote, and this is the kind of culture which we try to deploy among the European Union administrations.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you very much, Wojciech. Just a question, because I think what I liked about the examples that you pointed out was that your remarks were essentially directing us towards what the solution is, how to promote the use of those different technologies.

And you gave also examples of, let's say Data Protection Authorities that were kind of leading the way.

I was wondering also if you could talk a little bit about the importance of guidance in that particular role, or maybe we can talk about that later on, when we talk about the solutions, because this is where the UK ICO comes in. I just realized that time is running and we need to move on. I'm sorry.

Our next speaker. And here I would say maybe we just start and give you the floor, Suchakra. As for how it relates to the discussion about technology and the role of technology for privacy protection, I think one of the key elements, at least from my understanding, is that what you are doing is helping us scale with the problem and address some of the issues related to, yeah, privacy. But I will let you talk and introduce yourself.

>> SUCHAKRA SHARMA: So I will just share my screen as well so everybody can see and then we can talk.

So I am Suchakra, I'm chief scientist at this nice little startup called Privado, and what we are trying to do is look at PETs from a different perspective. Usually, privacy itself is looked at from the perspective of the user.

But what we are thinking is that data is not just out there in the ether. Why not look at the software itself that's handling the data? It can give you an interesting perspective on what the developers intended when they were developing the software, and you can track what is happening.

Essentially, we are trying to catch privacy violations before they manifest inside the system. Even before you release software, you can understand how it's going to handle data, and you can do that at all the points in the chain where the software is handling the data. You know, as Max was pointing out, automating everything: the ticketing system is automated software. It captures some information and creates a ticket that goes to five or six systems behind it. Those are all points where data is flowing. How about we understand that whole system, the system itself, and then we can predict what will happen to the data.

That's the perspective. I'm Suchakra, I'm the chief scientist here. I have been working in cybersecurity for six years and almost two years in privacy, and I'm going to apply all the learnings that I have from the cybersecurity industry in this environment now. Take visiting a doctor; this is how you do it these days: I fill out a form with a lot of private information, health information, the doctor looks at it, keeps it, and then it gets shredded, hopefully. But now we have something new in this millennium: we have software, and the software is now handling your data. Things have not changed much, but with the advent of software, what has happened is that this data gets exchanged through multiple hands, goes through logs, and gets to an advertiser. You just trust it, but what is happening behind the scenes? And this is true because we have observed software, we have analyzed it; we know it's using a lot of technologies that proliferate this data. Essentially, what happens is that at development time of the software, you have no data; you just have the intention of what to do with the data. But as the software gets deployed, some of the data gets put into an analytics service, some goes to a third party, and to databases everywhere; the data expands. So it's nice if you try to look at the software itself, because that's where the intention of what to do with the data is.

What you do with your software is where you look, at the time it's getting developed and deployed. We can get a data inventory: doctor's name, patient's name, et cetera. We can get a map of the data: the intention of the software is to take the patient's name and put it into this analytics service, and it will be put in a database. You can get a location of where the data is, and again, there's no data that's been processed, just the intention of what to do with the data. The same goes for all third-party transfers. So if the doctor's software has some weird connection which goes to some other connection and on to another piece of software that is used for advertising, you can track it all the way. And this gives us something which I would like to call technically verifiable PIAs, privacy impact assessments.

There are documents that have to be filled in, and then you go back to the engineers and to the developers, and then the lawyers also get involved, and they want to see the document in a specific format. But what if you have all of this information very early on in the game? If you try to do it at that stage, it's easy, it's early, and it's proactive privacy. If you try to do it at later stages and try to understand where the data went using, you know, ten other technologies, it's a little bit late by that time.

So this is one kind of PET that we would like to see. It's an expansion of PETs by actually making the software itself secure, you know, making the software itself not leak your private information in many places.

One example is in Canada. You know, I'm in Toronto right now, it's pretty late. There is a directive released by the government, and all the organisations have to fill in PIAs and go through a process. Canada had this dental benefit last year, and they created a summary of the assessment.

It says individuals submit their personal information on the CRA, the Canada Revenue Agency website, and it's using HTTPS, et cetera, et cetera. And to produce this type of assessment, they would look at previous assessments and software. But software changes so rapidly; by the time you offer a new dental or vaccination plan, this is rapid, the software gets developed rapidly, and you never know what went inside it.

But you have all of this information, it's already there, because when the software was developed, we knew what is supposed to happen to the data. So imagine, before making that kind of a service public, what if you could find out whether it's collecting your PII and transferring it to some other weird service that you don't know about? These days it could be OpenAI. We built a tool which allows you to identify exactly that: if a developer decided to collect something new, it can say "new data found" at this exact place, and if it's a violation, you fix it very early on. You know, you don't have to wait for a big assessment and then go back. You can immediately know that, yeah, today this developer sat down and decided to collect address information, and you have this information right there. And you can then see the flow, where it went. Just as a human writes the software, our tool tries to analyze that software to see the intention of the human.

You can see that it will go to OpenAI, or to a MongoDB database somewhere, or it gets leaked to a console log, which people don't realize is a big privacy issue. You can get this deeper understanding just by looking at code, because the code carries the intention of what the developer wanted to do with the data.
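A much-simplified sketch of that kind of static check: walk a piece of source code and flag when a value read from a PII-looking source is later passed to a risky sink such as a logger or an outbound call. The source and sink names are assumptions for illustration; a real tool tracks flows across whole codebases and frameworks.

```python
import ast

PII_SOURCES = {"get_patient_name", "get_address"}
RISKY_SINKS = {"print", "log", "send_to_third_party"}

def find_pii_flows(source_code: str) -> list:
    tree = ast.parse(source_code)
    tainted, findings = set(), []
    for node in ast.walk(tree):
        # Mark variables assigned from a PII source as tainted.
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            func = node.value.func
            if isinstance(func, ast.Name) and func.id in PII_SOURCES:
                tainted |= {t.id for t in node.targets if isinstance(t, ast.Name)}
        # Flag calls to risky sinks that receive a tainted variable.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) and node.func.id in RISKY_SINKS:
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id in tainted:
                    findings.append(f"line {node.lineno}: {arg.id} flows to {node.func.id}()")
    return findings

print(find_pii_flows("name = get_patient_name()\nsend_to_third_party(name)\n"))
```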

So that is essentially what it is, and having these technically verifiable PIAs opens a door. You have a chain of trust, because you have a record of modifications right from the design to the development and to the deployment.

You have an opportunity to certify software now.  You can have privacy certified applications because you know that this application is handling private data in pay more secure manner.  They have not integrated these weird advertising things inside them.

You can try to translate privacy intentions from legal directives that we receive, big documents, into very fine-grained checks which are followed. This can open doors to actually understanding high-level laws, you know, GDPR, CCPA, and converting them into really fine checks that can be run on software and say, yes, it's compliant, and this is before it gets deployed. It's kind of automating what Max is trying to do, in a manner, but doing it very, very early, you know, even before the software gets deployed.

And then, you know, it opens up the paradigm of privacy engineers getting involved. It's a new role that should be there, it's very important, and they can help to build privacy-respecting apps. But from what we have observed, the tool cannot replace the person. Yeah, that's about it. Questions?

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you, Suchakra, and thank you for making the connection to Max. Is this something that theoretically NGOs or privacy advocates could use to kind of enforce privacy law, or obviously also Data Protection Authorities, for doing in-house screening or impact assessments and the like? But obviously we also have a set of professions operating within firms, and this is a good link to our next speaker, Nicole. If you could introduce yourself and relate your work and your experience to what the previous speakers have said. The floor is yours.

>> NICOLE STEPHENSEN: Thank you so much. Hello everyone, I feel honored and delighted to follow such a wonderful group of presenters. Thank you so much for having me today. My name is Nicole Stephensen and I'm a partner at IIS Partners, which is an Australian data protection consultancy. You will hear from my accent that I am Canadian; I'm a Canadian and Australian citizen, and I have been living here in Australia for 20 years.

I lead our privacy services function at IIS, where my specialism is in privacy program management and culture building. So you can sort of picture how I'm potentially going to wrap up what I am saying today.

And I would like to start with the essence of my intervention in mind: privacy-enhancing technologies should not replace good decision-making at the outset. Our governments and organisations still have a positive duty to ensure that their information practices are sound. Now, in my work there's a large focus on strategic privacy risk management, which, as is the nature of the work of a privacy consultancy, looks at risks already taken on by the organisation, its projects or programs and then, of course, technology deployments. And sometimes I find that our governments and organisations can be educated on what their risks are, but particularly where there are large volumes of personal data or complex vendor relationships involved, they might struggle to solve for these using conventional methods. As an example, where there's a risk of unauthorized disclosure of personal data in those vendor processing environments, such as through vendor APIs or single sign-on digital handshakes, it can be difficult for organisations to test whether a risk exists only in the realm of the possible, right, and we often see those types of risks borne out in privacy impact assessments: oh, you might have a risk of unauthorized disclosure here.

But is that only in the realm of possible or is it actually playing out in reality?

Now, unauthorized disclosures to vendors that are processing personal data on an organisation's behalf often happen without any real awareness on the part of the organisation. We often refer to this as data leakage, but it's highly likely to qualify as a personal data breach, depending on the jurisdiction that you are in. Although I'm a huge proponent of administrative controls like contracts, data leakage is not something that a contract with a vendor will eliminate properly. And as we all know, right, when we are remediating data breaches, this is a backward-looking exercise.

Now, in the context of controlling for data leakage, so let's use this as an example space, I will look at data accountability tools. This is more of a gray-area category of PETs as compared to some of the technologies already discussed here today, where technology can assist an organisation to enforce rules about what should or should not happen with personal data.

Those rules will be found in the data protection laws that are applicable to the organisation, and/or they may be set out as commitments to the community in the privacy policy, or they might be expressed as contractual obligations between the organisation and its vendors and service providers. All of this said, though, the implementation of privacy-enhancing technologies doesn't remove from governments or organisations those initial accountabilities associated with things like purpose specification, you know, why do we need the data in the first place, do we have a fit and proper purpose, and then collection minimization: are we only collecting the personal data that we need to fulfil that proper purpose?

These are vital building blocks for enforcing a climate or culture that limits the use of data to the greatest extent possible, with or without privacy-enhancing technologies. Now, all of that said, and in my experience, the business case for employing privacy-enhancing technologies, at least as seen here in Australia, can be complicated by a number of factors, including whether the PET supplier is a small business or start-up, right, because they themselves might lack the necessary backing; there's not the venture capital sitting behind the small business or start-up. Second is the geographical location of the PET supplier: there are many associated legal requirements or barriers that may impact an organisation or government's ability to engage that PET supplier, and there might be some sociopolitical biases depending on where that supplier is.

You know, if we look at privacy in the sort of western conceptualization of privacy, we might be looking at a PET supplier that doesn't have the same sociopolitical norms. And the other factor is budget: where privacy-enhancing technologies are dealing with large volumes of data, if they are priced based on units of data or volume of data, sometimes the budget can, you know, blow out and really remove from the government agency or organisation the ability to use that technology at all.

Now, I wanted to share with you that IIS Partners recently established a subsidiary company called Trust Works 360, and that's because we think privacy-enhancing technologies are a thing, and an important thing, in Australia and in the wider global market. And so Trust Works 360 is working to bring privacy-enhancing technologies and other security management systems to the ANZ and Asia-Pacific market. The feedback so far is that it's a real challenge.

I approached one of our privacy-enhancing technology partners when I was considering the comments that I would bring to the group today. They are called Q Privacy, and they deploy tools that both allow organisations to audit for data leakage and also establish and enforce rules that ensure only the personal data specified for a processing purpose is able to be pulled into those vendor environments.
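A simple sketch of the "enforce rules" side of such a data accountability tool: before a payload leaves for a vendor, strip any field that is not on the allow-list agreed for that processing purpose and surface what was blocked. The vendor name, purpose and fields are hypothetical, not Q Privacy's actual product behaviour.

```python
ALLOWED_FIELDS = {
    ("mail-vendor", "appointment-reminders"): {"first_name", "email", "appointment_date"},
}

def outbound_filter(vendor: str, purpose: str, payload: dict) -> tuple:
    allowed = ALLOWED_FIELDS.get((vendor, purpose), set())
    permitted = {k: v for k, v in payload.items() if k in allowed}
    blocked = [k for k in payload if k not in allowed]  # report this to the privacy team
    return permitted, blocked

record = {"first_name": "Ana", "email": "ana@example.org", "medicare_number": "1234"}
print(outbound_filter("mail-vendor", "appointment-reminders", record))
```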

Now, I think that this type of data accountability tool is exciting for the global marketplace, and I think it's great for organisations that deal with large amounts of data that can't possibly be monitored by a person. In these cases, and with my consulting hat on, I would say that automated solutions are much more ideal, right, than relying on the privacy officer or DPO in an organisation to try to get a handle on this. And yet there are barriers to uptake.

When I asked Q Privacy what the reasons are for the resistance to uptake: there seems to be a low priority for uptake of PETs in sort of your small to medium organisations or your smaller governments, because there is such a focus on Big Tech from a regulatory perspective. They are risk managing, possibly waiting for a data breach before taking action on anything.

There tends to be an avoidance of zero-trust approaches to personal information of the kind that Q Privacy is deploying. And low budgets. So there tends to be more of a focus on third-party risk assessment tools and on using standard legal contracts and treating those as sufficient. And finally, most decision-makers in the domain of privacy tend to be more in that legal space, so we tend to see legal teams or potentially corporate services teams dealing with privacy issues for governments or their organisations, and they have a less technical focus.

So, you know, the lack of privacy engineers, or folks that understand how privacy-enhancing technologies work, is a barrier to uptake.

And with that, because I know we want to have at least 15 minutes for questions, I will end my discussion here, and again, thank you to all of you and to the room for attending today.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you very much, Nicole. I think you pointed out a number of questions that I would like us to discuss. I just wanted to invite the audience in the room, as well as online, to feel free to raise questions. I have a couple of them, so I will take my privilege as a moderator to ask a few. One is the question of adoption. If we all agree that all of those technologies are great, why is it that not everyone is using them? Some of these technologies have been around for a long time. So how come this still seems to be something that needs to be discussed at the IGF? This would be my number one question, because that's actually the one that strikes me out of the discussion. Everyone thinks that automation is great and that needing to scale is a problem.

But at the same time, everyone seems to be saying, or at least I heard this multiple times, that humans should not be replaced; there should be a role for humans in the process.  Could you elaborate on that?  That's something that some people may, yeah, for different reasons try to forget or ignore.  I will let you intervene.  We will start maybe with Clara and keep the order of intervention.  If you could address some of these points and put the emphasis where you wish.  Clara, before you do, I wanted to acknowledge that you are joining from early in the morning, as is Suchakra, from Canada.

>> CLARA CLARK NEVOLA: It's starting to get light, as you can see in the background.  I think your first question, about why we do not see them ingrained yet, is something that we are working on with the ICO's privacy-enhancing technologies guidance.  And I think basically our answer is that the organisations who would most benefit from privacy-enhancing technologies do not yet know that they exist.  So there's a real interest in them in the community, and that's where the use cases are going.  Is my sound okay?  It's a bit glitchy.

>> CHRISTIAN REIMSBACH-KOUNATZE: The sound is okay but your video is freezing.

>> CLARA CLARK NEVOLA: The lower-tech organisations are not aware of these technologies, and one of the things that we are working on is how we can bring people who are more expert in PETs, and organisations who are technically minded, together with more traditional organisations, local government, health bodies, to really understand: why would you use a PET?  So that's my explanation for question one and I will hand over.

>> I think on question two, why humans are important, it's not just a question of automation.  There is a very real risk, which we also often discuss with Mozilla, that privacy-enhancing technologies may make it so that people just start collecting even more data than they already do, because it's so easy to collect it and a lot of the risks associated with it no longer exist.  And independent of the technology, whether you are using a tool to check code or to make sure data leaks don't take place, the questions remain: should this type of data be collected in the first place?  What will it be used for?  What is the risk if the data ends up leaking, however much you invest in tools?

For me, that's the primary reason that human beings are important.  The decisions about what to collect are made by human beings.  If you are collecting more information than you need, rather than investing in the tooling around preventing that from leaking, maybe you should reconsider whether it should be collected in the first place or not.

I think it's been a very enlightening conversation also.  There are two parts.  One is privacy-enhancing technologies applied once the data already exists in an organisation, but there is also the set of privacy-enhancing technologies that allow you to collect data without some of the things that make it privacy sensitive.  Both of the things I mentioned, Oblivious HTTP and DAP, allow you to collect information in a way that is aggregated, with almost zero consequence if that entire dataset ends up out in the real world, because it's collected in a way where it no longer corresponds to the people it came from.  That's an important thing to remember.
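To make the aggregation point concrete, here is a minimal sketch, in Python, of the additive secret-sharing idea behind DAP-style collection; the two-aggregator setup and the field size are illustrative assumptions, not a full implementation of the protocol:

```python
import secrets

MODULUS = 2**61 - 1  # illustrative field size; real deployments fix their own parameters


def split_measurement(value: int) -> tuple[int, int]:
    """Split one client's measurement into two additive shares.

    Each share alone is a uniformly random number, so neither
    aggregator learns the individual value.
    """
    share_a = secrets.randbelow(MODULUS)
    share_b = (value - share_a) % MODULUS
    return share_a, share_b


def aggregate(shares: list[int]) -> int:
    """Each aggregator sums the shares it received, never seeing raw values."""
    return sum(shares) % MODULUS


# Example: three clients report how many times they used a feature today.
measurements = [3, 0, 7]
shares = [split_measurement(m) for m in measurements]

sum_a = aggregate([a for a, _ in shares])  # held by aggregator A
sum_b = aggregate([b for _, b in shares])  # held by aggregator B

# Only the combined sums reveal the total, never any individual's value.
total = (sum_a + sum_b) % MODULUS
print(total)  # 10
```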

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you.  Max?  If you could address that point?

>> MAXIMILIAN SCHREMS: I would ask Stefan to go first.

>> STEFAN SCHAUER: From my professional experience, coming from other countries, it's just way easier and cheaper, and people just don't mind if they collect a lot of private data, because if you start thinking about PETs, first, the knowledge is often not there, or it's very old.  It's easier to store everything and not care about the possible impact of what you are doing there.  And therefore management usually goes by: do it the cheapest way, and the cheapest way is not to care about privacy.  It's more a risk-based approach, so we rather store everything and hope for the best that it won't leak.

>> MAXIMILIAN SCHREMS: First, I also see in practice that privacy-enhancing technologies are used as camouflage: they say, we did something absolutely legal, because we had a hash in between.  That's typically what we see in litigation, when it applies in the wild, so to say.  That's something that I would echo there.

The parts where you need a human: a computer could compare the colours of the buttons, but the question is whether the colour choice is deceptive.  Red and green would make sense for yes or no, but white and grey and green probably not.  And the computer is not really able to necessarily get the context there.  That's where you need the human.  If all the rest of what you are doing is done by the computer, the human can focus on that one question, say yes or no, and that's it; you basically let the human do what the human does well.  And one last thing that came from the presentation before, which I thought was interesting.  We know that for financial institutions, auditors and so on, the whole finances of a company are usually run through a big piece of software, so it shows where weird transactions are going.  It would be interesting, maybe for the speakers hereafter, whether that's a realistic approach for the DPAs: that you build your own software tool that can go through stuff.  That's a technical question for the speakers hereafter.  I like to look at other areas of law, like, typically, what the tax authorities do.  It's similar problems: very big volumes of information, but you have to get the one violation.  I was wondering if that could be an inspiration.  What are your thoughts?
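To make the analogy a little more concrete, here is a minimal sketch, in Python, of what such rule-based scanning over a large log might look like for a supervisory authority; the record fields, thresholds and rules are hypothetical illustrations, not an existing DPA tool:

```python
from dataclasses import dataclass

ALLOWED_REGIONS = {"EEA"}  # hypothetical allow-list for illustration


@dataclass
class Transfer:
    """One record in a large export, e.g. a data transfer log (fields are hypothetical)."""
    recipient_region: str
    record_count: int
    contains_special_categories: bool


def flag(transfer: Transfer) -> list[str]:
    """Illustrative rules a supervisory tool might apply to surface cases for human review."""
    reasons = []
    if transfer.record_count > 100_000:
        reasons.append("unusually large volume")
    if transfer.contains_special_categories and transfer.recipient_region not in ALLOWED_REGIONS:
        reasons.append("special-category data leaving the allowed regions")
    return reasons


transfers = [
    Transfer("EEA", 500, False),
    Transfer("US", 250_000, True),
]

# Only flagged items reach a human reviewer, mirroring how transaction monitoring
# narrows millions of records down to a handful of cases worth investigating.
for t in transfers:
    if reasons := flag(t):
        print(t, "->", "; ".join(reasons))
```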

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you, Max.  What I liked about your point about the example from the financial sector is that it actually inspired me to ask about certification, whether we actually need something like this, because obviously if there's such a process that would scan through software, maybe there's a role to play for certification.

>> WOJCIECH WIEWIOROWSKI: Yes, very shortly, on the questions: if we think about the problems of deploying privacy-enhancing technologies, I would say, from the public institution and public sector point of view, we have two other big problems that we are sometimes forgetting about.  First, the management of public institutions are not usually IT people.  They are usually not engineers even.  Very often these are people with a legal or humanistic education, and someone has to explain to them what the software does, what the solution does.  So that's definitely a place for Data Protection Authorities and the NGOs and civil society to deal with.  But there is a second problem which we sometimes forget about, which is the lack of proper procurement procedures.

So procurement does not allow us to give additional points for the fact that PETs are used.  That's something that is often forgotten.  For the second question, about automation and the human decision, well, of course it's a balance.  We like automation.  We like the fact that we go to the shop wanting to buy something without paying the whole price up front, we are checked, and we get the credit score.  I know that Max will joke, because there are few countries where it works well, but there are countries where we are happy to get the positive scoring.

We are not happy when we get the negative one.  That is where the possibility of human intervention is most desired; it's not that, every time we are trying to get something on credit, we would have to wait for a human decision in each and every case.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you for making the point.  Actually, listening to you, I was just wondering: will AI change things?  Once people have access to AI, is that when human decision-making is no longer needed in that process?  Maybe I give the floor to Suchakra; maybe you could address this.

>> SUCHAKRA SHARMA: I have been talking to a lot of organisations, big organisations whose tools you all use, everybody in the room is using them, and I have talked to folks there.  The push actually comes when there are regulations, standards and fines.  Without that, nobody wants to spend, you know, one penny more on this.  So you can go to these organisations and say, hey, you know, if you don't use a security tool, you will be hacked.  But they can write in some document that we are okay to collect this data, and you have signed this, so yes, it's fine, I gave permission to, you know, collect whatever data.

So they somehow have, like, coverage.  It doesn't sound too interesting for them.  The way they move, the ones doing massive data collection, looking at websites and apps and dissecting them, is that they will only move when there are regulations, when there are standards that are defined and there are fines.  Without that, they don't move.  That's what I have seen and we should all accept it.  When you get into a car, all the software that goes inside that car is verified and checked.  These standards are there for critical applications.  So privacy should be treated as critical, and there should be standards that are developed and evolved, and regulation.

About the humans: we have to make humans believe that this is important so that they will pay for it.  You know?  I think that's the role that humans should take; they should understand that this is important, and that's why we need humans.  That's all I would say.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you.

>> STEFAN SCHAUER: My Canadian colleague is singing my song right now.  In terms of the first part of the question, which is about barriers, I actually think that there are a couple of levers that we can always pull in government and within our organisations, and one is innovation.  You can pull those levers and you can get a bucket of cash.  With privacy, there's a view of privacy as a block on innovation, a block on progress, a show stopper, the office of no, whatever you want to call us in terms of the advocacy that we provide within our organisations.

If privacy teams who are aware of and deploying privacy-enhancing technologies within the organisation can access those levers of innovation and information security, get into the room with those people, be part of those steering committees and move up to the executive in a coordinated way, I think there's a real opportunity to break down some of those barriers and show how privacy is an essential part of the organisational ecosystem.

So that's my answer to the first question.  And the second, in terms of, I guess, I don't want to say human in the loop, because that's not the way the decisions are made in relation to privacy-enhancing technologies, but the involvement of humans is important because of accountability.  The way government is structured worldwide, and the way our organisations are structured, the buck still stops with the executive and the board, right, or however they are termed within the organisation.  So if a person hasn't been involved in making a decision about deployment, or in configuring the deployment of a privacy-enhancing technology to suit the organisation, it's really hard then to call anyone to the floor in terms of accountability, you know, in the event of a data breach or in the event of regulatory scrutiny, where an organisation is asked, you know, open the kimono, show us what you have been doing in relation to privacy.  You want a person there who has been part of that decision.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you.  We are now heading towards the end.  I have maybe one important question, because we are obviously using this event to inspire some of the work done at the OECD.  This is the opportunity, given that I don't see any questions from the floor: what would you wish the OECD to do to take this topic further?  If you could each give a short statement on that, so that we can close on where we want this whole topic to evolve.  I will start again with Clara, and then, yes.

>> CHAIR: I know PETs were the start of the story and the story is going on, but my pitch is that I don't think the work on PETs is finished yet.  I think most organisations are not aware that they exist, and there is so much more data sharing, or free flow of data with trust, that could be enabled by PETs.  I would ask the OECD to continue its efforts in this area.

>> I echo that and say, if there's one thing about PETs, apart from the organisations that collect data not being aware of them, it's also, I think, regulators not understanding the true scope of what PETs can do.  What they can't do is important, and what they can do is quite important as well.  Given that the OECD is the body whose primary functions include studying and analyzing, spreading awareness at the government level of how governments can adopt them, and how governments can make sure that the entities they regulate are aware of them and deploy them, would be, I would imagine, one of the biggest things the OECD could do.

>> MAXIMILIAN SCHREMS: I would echo that, and best practices would be interesting, and not necessarily only from the privacy sphere.  I think one of the biggest problems we have in this discussion is that we are only looking at what everybody else in the privacy bubble did, but this has existed for 200 years in other bubbles.  So it may oftentimes make sense to look, as I said, at financial regulators.  We thought of the speeding ticket as one way; there are probably 100 other ways of thinking about that, and it would be worth looking into those as well.

>> WOJCIECH WIEWIOROWSKI: Well, I actually think that your guidelines on emerging privacy-enhancing technologies and current regulatory and policy approaches, which you produced in March this year, are a very good example.  The thing which I would especially, well, maybe "expect" is not the right word, but which I think the OECD is very good at, is the kind of study which makes the comparison between the solutions that exist in different places and the proposals that are there.  And this is not only a matrix and a mapping of the initiatives; it is a proposal for how to find the convergence between them, and I want to underline this word, convergence.  This is not interoperability of the systems; it is making them better and bringing them to a good level.  So that's something the OECD, I think, is really very good at, and even not having all the countries of the world in the OECD does not harm these studies.

>> SUCHAKRA SHARMA: Yes, I would echo what Wojciech said.  I would add one more point to it, which is that there could be a way where, you know, when we go to a lot of these, some of you are customers, enterprises and all of these organisations who want some sort of privacy, and they are sometimes innovating in nascent states.  They ask us: what exact technology should I use?  How many vendors are there?  Who is building what?  They want this kind of information.  So if we can have, like, a technical package for these people: oh, you have this problem?  Here are the ten solutions that can be applied.  You have this other problem?  Here are the 20 solutions that can be applied.  If we had a nice paper around this, it would help, I would say.

>> CHRISTIAN REIMSBACH-KOUNATZE: If I may take the opportunity to note that some of this is in the ICO guidance.  But on to our next speaker, Nicole.

>> NICOLE STEPHENSEN: I think, building on the last comment, there is a tendency for topics like privacy-enhancing technologies to be quite impenetrable for organisations and governments, folks who are not technical, who are not engineers, who may not even be policy people, right, with an awareness of what privacy-enhancing technologies do.  Finding a way to capture what they are in plain language, almost like a sales pamphlet: these are the types of privacy-enhancing technologies that are out there, this is what they look like, and this is how they can be deployed within an organisation or government.  That type of stepped approach, I think, would be really, really useful, particularly in jurisdictions like this one.

>> CHRISTIAN REIMSBACH-KOUNATZE: Thank you.  Thank you very much to all of you for being here, in person and online, for two incredible hours.

I took note of the different suggestions.  And what is also great is that I think with this event we have been able to extend the understanding of what PETs, or the role of digital technologies more broadly, could be, beyond just those by now almost traditional technologies to something that is much broader.  And with that, thank you very much.  We definitely look forward to continuing the conversations.  Thank you.

Bye.

(Applause)