IGF 2021 – Day 1 – OF #4 Free expression and digitalisation: compatibility mode

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ELISKA PIRKOVA: Good evening.  It's exactly half past, which is the official start of today's session.  Welcome, everyone, it's a great pleasure to see you all here today and also to have the opportunity to be your moderator throughout the session, where we are going to address a number of interesting topics touching upon regulating big tech and its relation to the protection of fundamental rights and freedoms, with an emphasis on freedom of expression.  My name is Eliska Pirkova.  I'm a Europe policy analyst and I will be your moderator today.  Today's session is organized by the Council of Europe and has the format of an open forum.  Its title is Free Expression and Digitalisation: Compatibility Mode.  There are truly a lot of topics to unpack.

We will walk through quite a few today thanks to our distinguished speakers, who will soon be giving their substantial contributions.  The main background, and what really informed today's session, is the freshly published guidance note adopted by the Steering Committee for Media and Information Society of the Council of Europe, which gathers best practices towards effective legal and procedural frameworks for self-regulatory as well as co-regulatory models.  It touches upon a number of important issues, from how to design regulation of the content moderation exercised daily by large platforms and smaller players in a human rights compliant manner.  It touches upon several fundamental rights, from freedom of expression, the most obvious right impacted by those practices, to the right to privacy and freedom of assembly and association, which are fundamental rights less explored in relation to digital technologies.

I think that further details about the guidance note will be shared by our speakers.  So I will now introduce a very impressive lineup of experts who have dedicated a significant part of their professional focus to this topic.  First of all, Mr. Yoichi Iida, Director General for G7 and G20 Relations at the Japanese Ministry of Internal Affairs and Communications.  He chaired the G7 working meeting on ICT policy when Japan hosted the ministers' meeting and proposed international discussion on guidelines on AI in 2016.  Artificial intelligence and automated decision-making processes have a lot to do with content moderation.  A pleasure to have you with us today.

Next is Natali Helberger, who doesn't need to be introduced when we discuss regulation of online platforms.  She is the chair of the Council of Europe expert committee on freedom of expression and digital technologies, the committee that is actually the author of the guidance note, and a Professor of Law and Digital Technology at the University of Amsterdam.  Natali is very well known in the policy world of internet regulation and advises a number of international organizations and policy bodies.  Next is Dr. Matthias Kettemann, head of a research programme at the Leibniz Institute for Media Research, Hans-Bredow-Institut.  Matthias is the author of expert publications on governance, freedom of expression and the regulation of big tech and will definitely make a significant contribution to today's panel.

Finally, Ms. Kathleen Stewart, a Public Policy Manager and expert on content regulation at Facebook in the Europe, Middle East and Africa region.  Previously, as head of national broadcasting policy at the U.K. Department for Digital, Culture, Media and Sport, she was responsible for overseeing audiovisual media policy as well as the negotiation and further implementation of the Audiovisual Media Services Directive of the European Union.

Without any undue delay, I would like to introduce our first speaker, Professor Natali Helberger, to present today's topic, mainly the guidance note, and to open the session with her contribution.

>> NATALI HELBERGER: Thank you.  Platforms today are the ones laying down the rules through content moderation and recommendation systems.  And thank you also for the opportunity to report about the work we have been doing with amazing experts at this Council of Europe committee on freedom of expression and the impact of digital technologies, which I had the honor to chair and where these questions were central.  As you mentioned already, Eliska, the committee adopted a guidance note with very concrete recommendations for effective legal and procedural frameworks for content moderation, containing a range of very concrete and practical recommendations as well as characteristics of successful and failed approaches to content moderation, which is useful in developing strategies moving forward.

The committee also adopted a draft recommendation on the impact of digital technologies on freedom of expression, which formulates principles aimed at ensuring that these technologies serve rather than (?) Article 10.  And while it would go way beyond my five minutes to explain the entire scope of the recommendation and the guidance note, I would like to focus on and highlight three points.

I think an important approach for both the recommendation and the guidance note is a focus on procedures, on the procedures through which intermediaries rank, moderate and remove content, rather than banning undesirable speech such as disinformation, because the latter sits uneasily with freedom of expression, especially when couched in terms subject to interpretation.  That is an important recommendation in the light of the fact that we have seen, especially now in the wake of the pandemic, considerable regulatory activity at the government level to fight disinformation, and some of these laws were actually prioritizing speedy removal over proportional and graduated responses and procedural safeguards for freedom of expression.  I think this is a really important point of departure for the work of the committee.  The second point I would like to highlight is the focus on meaningful transparency.  I mean, we all talk about transparency, and we also know there is far too much information out there, burdening the consumer rather than truly empowering them.  Something that both the guidance note and the recommendation do is to highlight that for transparency to be really meaningful for users, it needs to be accompanied not only by media literacy initiatives but also by true choice that enables users to decide for or against being profiled and to exercise control not only over their data and the inferences drawn from it but also over which content they get to see.  Transparency is meaningful only if accompanied by true choice and agency.

The third point I would like to highlight is that both documents acknowledge the fact that upholding freedom of expression in the digital environment is not exclusively, or simply, a problem of responsible content moderation, but also a structural problem that requires structural solutions.  And that is very much related to the overall health of the public communications space and the existence of a flourishing ecosystem of diverse media players that can act as trusted sources of information, that can help people distinguish between disinformation and verified, fact-checked news, and that can prevent an overreliance on social media platforms as the main or even sole source of information.

I believe that with these recommendations, the committee provides very important guidance that goes beyond some of the approaches that we have seen and really adds to the approaches that are right now on the table.

If you like, I could expand in the discussion on where I believe these documents go beyond the current state of the art.

>> ELISKA PIRKOVA: Many thanks, Natali.  That was an excellent summary.  Putting the emphasis on the processes and systems that platforms deploy on a regular basis, instead of combating concrete categories of content, is definitely the right approach going forward.  It's also in line with the efforts currently happening at the EU level to develop a legally binding framework, the Digital Services Act, which follows a very similar approach.

I had a chance to contribute to that work, and the guidance note and the draft recommendation actually complement the previous work done by the group, including the earlier recommendations on media, which significantly informed not only our policy work but the policy work of many others and of the European Union as well.  Thank you very much for the excellent work.  I think it's time to give space to a representative of those at the core of the regulatory debate, the private actors, sometimes also referred to as very large online platforms, which have the specific attention of regulators not only in the EU but around the world.  I would like to give the floor to Kathleen Stewart to tell us more from the perspective of a privately owned platform: what it is like to be subject to those regulatory efforts and where you see the positives and negatives when reflecting on the guidance and the draft recommendation.  Over to you.

>> KATHLEEN STEWART: Thank you, I'll try and keep this short.  This is a topic I could talk about for a very long time.  I think the thing is, we totally agree with what was said: transparency needs to be meaningful, it's not an absolute good in itself; you have to consider the purpose and benefits of that transparency and whether or not it's answering any critical or relevant questions.  A report produced earlier this year, "Where does a wise man hide a leaf?  In the forest", highlights the importance of transparency being meaningful.  So we are always working to do more, but there is the question of how we do it better.  In terms of working with the various regulatory initiatives cropping up everywhere, I think there are kind of two camps.  There is the focus on systems and processes and those transparency efforts, which works, although it's hard to get all the information that regulators want at the speed they want it; but that is definitely an approach, and it aligns with what the Council of Europe paper is saying.  Versus that, there are regulatory efforts that focus on individual pieces of content rather than looking at the system as a whole.  Focusing on individual pieces of content results in overenforcement and inhibits freedom of expression, whereas focusing on processes gives us the space to work on where we can do better.  If you think of a concrete example, something like the European Code of Practice on Disinformation has both: the kind of qualitative reporting on policies, products, programmes and partnerships, as well as the quantitative reporting, which is outcome based.  And there are a number of those kinds of efforts cropping up: in Australia there's a similar code of practice, and New Zealand has a Netsafe code currently under consultation.

I would say these are all very positive developments.  I think the challenge for the platforms is when you have conflicting efforts within regions, wanting very different things, when actually it takes a lot of time, effort and resources to produce this kind of transparency reporting.  Without consensus about what should be measured, and about how to achieve that, it becomes very challenging.  I also think there needs to be more consensus that these efforts should be underpinned by international human rights law, and that can be missing in some regions.  I'll stop there for now.

>> ELISKA PIRKOVA: Thank you.  Indeed, the contextual dependency of individual regions when it comes to regulating user-generated content online is a serious challenge, not only for platforms but also for regulators and states, in coming up with responses that do not violate human rights but reinforce protection, especially of those who find themselves in vulnerable positions, whether these are human rights defenders, activists or historically oppressed groups.

Which is actually a good connection to now hear from a representative of a state, those who actually sit behind the legislative table, put those measures into practice and oversee the implementation of some of them.  I would like to invite Mr. Yoichi Iida to give us a state perspective, informed by his previous work on regulating AI.  Since content moderation relies to a large extent on automated decision-making processes, it's quite important to also address the learning systems and tools being regularly deployed by platforms.

Please, the floor is yours, now.

 

>> YOICHI IIDA: Thank you very much.  Good afternoon, good evening, it is my great pleasure to join this very important session from Tokyo.  For government, the balance between freedom and security on the internet is always a very difficult and sensitive issue, especially in Japan.  We place very significant importance on the protection of privacy and also on the secrecy of communications, based on the Constitution.  From historical experience, we have a very serious responsibility to protect communications between private people, and it is always a very difficult problem for telecommunications operators and internet providers how to handle information on the internet, because they have a very serious responsibility to protect information belonging to private parties.

That means they cannot see the content of the information, and they cannot judge what is right and what is wrong.  So this gives a kind of limitation to the options they can take when they want to respond to harmful information on the network.

In our legislation, we distinguish between illegal information and legal information.  When it comes to illegal information, it is not very difficult: we need to take illegal information off the network.  But when it comes to legal information, this is rather complicated.  Everybody can claim freedom of expression, and platforms cannot infringe the freedom of expression, even when the information looks harmful.

So there is a very complicated requirement on how to balance freedom of expression and the security of individual participants on the internet.

Having said this, our system is basically based on the independent decision-making of individual participants, including platforms and internet service providers.  So in the case of legal but harmful information, they have to judge on their own responsibility, and in order to reduce their burden, we have special legislation to exempt internet providers from liability when they take down apparently harmful information, even though it is legal.

Meaningful transparency is very complicated in our system, and as we see more and more harmful information flowing on the internet, the responsibility of providers is getting more and more important, the social atmosphere is getting more and more serious, and citizens are requiring more of private providers.  Now the government is considering further legislation to create more space for those private players to take actions on their own decision.

This is similar to the use of AI applications by private providers.  They want to use AI applications to judge whether information is legal or illegal, or whether it can be judged harmful or not harmful.  They can use a kind of application to know whether some specific information may be harmful or not.  But those private players are always required to guarantee some transparency, and they are a little reluctant to use AI applications on their own responsibility, so we need more discussion and more exploration of a common understanding among business players and citizens on what can be harmful and what can be admitted in internet society.

The suggestion from the Japanese government is that the situation is always fluctuating and moving, so we need to look at the requirements coming from civil society, we need to discuss the matter with multi-stakeholder participation, and we need to foster a common understanding among all participants in internet society.  Thank you very much.

>> ELISKA PIRKOVA: Thank you very much.  It's a widely observed challenge to establish a clear distinction between content that is illegal or manifestly illegal, which is a term developed by the Council of Europe as well in the work of the MSI, and content that is potentially harmful but legal.  Civil society is of course fighting against many vaguely defined categories of user-generated content being made subject to legally binding regulation, since we know from experience how such terminology can have a detrimental impact on human rights.  We hear a lot about meaningful transparency standards, and Matthias, you're the last, but not least, of the speakers today.

So I would like to touch on what meaningful transparency actually is, and we will address this issue further because we'll have follow-up questions for our speakers before we give the floor to the audience.  And maybe a gentle reminder before we go to Matthias: if you have any questions or comments you would like to raise, do not hesitate to post them in the Zoom chat and we will make sure they are properly addressed if there's enough time and space for that.

So Matthias, over to you.  There is a lot about meaningful transparency and the data access framework that should be granted to vetted researchers and civil society; that provision now also exists in the legislative proposal at the EU level.  If you could talk about that, and about how important it is, now and in the future, that this is legally mandated and that transparency does not remain just voluntary generosity that platforms show based on some voluntary commitments.  I will stop there and hand over to you.

>> MATTHIAS KETTEMANN: Thank you.  Thank you so much, Eliska, always a true pleasure to have you chair any meeting, especially one in which I'm involved.  Thanks so much for having me.

I do a lot of work on internet governance and platform research.  A lot of my fellow researchers deal in data, right, and they need access to good quality data from platforms to be able to see what problems can arise and are arising in the practice of content governance.  While in the past a lot of research has been conducted on the basis of data access that was freely given, that research had certain constraints, namely the ones that the platforms providing the data put onto that data access.

Now, the problem there is not that platforms are necessarily not to be trusted.  The problem is that researchers, once they are only given a very limited view, might not be able to ask the right questions.

Platform researchers have to be able to get as much access to as much data as possible, to be able to find the right questions to ask and then to provide policy-makers with substantial, correct and demonstrably true answers.  We see that when it comes to the area of bot research: it is a mess, right?  It's a mess because all the researchers that try to conduct bot research, unless they truly have access to internal data, have to reconstruct what they think a bot might look like, which leads to the problems we know about.

The same goes for disinformation research, and the same is true, for instance, for the impact of social media on children.  You know, we read a lot about those issues in the press, right, but as researchers we would so much rather be able to provide substantial and informed data.  We have seen that, for instance, when it comes to the question of the so-called filter bubbles, which don't exist like that, as has been demonstrated; we would have been able to say that much sooner and to dispel the myth of filter bubbles if data access had been there quicker.

So I think what we need to do is realign the interests of states, researchers and platforms, and I think this realignment is laid down both in the current normative proposals at the European Union level and, importantly, also contained in these more recent documents, which provide very important support for a more vigorous approach to data access for researchers.

Now, I'm the first one to realize there are data protection issues and privacy issues involved in that, and I know that platform people will tell me, everything is more difficult than you think.  I recognize that, right?  My data researchers also tell me things are much more difficult than I think, but I do believe that in weighing the difficulties we can come to a good solution; synthetic data, for instance, is an option that has been raised for some time.  We all know that good solutions exist, but you first have to be able to ask the right questions.  These approaches towards stronger access for scientists point in the right direction.

If I may say one last thing, I think it's so great how much progress we are making.  I was involved in the 2016-2018 Council of Europe committee and was rapporteur for the recommendation we wrote then; you feel it's still nice, but it's legal history, and it's so great how much has happened in the last two or three years.  It's so good to have been wrong a bit.  Now things are developing.  I think in two or three years, when we look back at the debates we have now on data access, we'll look back and say it really wasn't that difficult at all.  Let's make that work, thank you.

>> ELISKA PIRKOVA: Yes, absolutely, we would like to share your enthusiasm.  Let's keep our hopes high.  I very much liked your use of the term platform people.  Before I give the space to the audience, I think questions in that regard are going to be addressed.  Touching upon the point that Natali raised, not only researchers should have that access; there is a strong push for civil society organizations with relevant expertise to have similar access.  Let's see whether we will get that from the regulators in the future.

Going back to you, Kathleen, I think the ultimate key to every measure and provision that looks good on paper is an oversight mechanism.  That holds for state regulation, and one would argue also for the self-regulation deployed by platforms: to understand what platforms exactly do with content, what their response mechanisms are, how they enable effective remedies, and so on and so forth.

The guidance has some concrete ideas about such a form of oversight.  I would like to hear your reflection on what would be the ideal model in your opinion, since you are representing a platform today, and what would actually be effective.

>> KATHLEEN STEWART: Thank you.  This is something I have spent a lot of time thinking about.  I've come from a long history, a long career, in regulation, and what is missing is this kind of overarching framework of regulatory principles.  If you look at other regulated industries, finance or the World Health Organization for medicines, they have bodies that set up a framework that then gets implemented nationally, but they all follow the same principles, and that allows global organizations, and these are all global fields, to have certainty about what they're doing and not have to operate with conflicting models between one country and another.

So international oversight where you have, I'd say, a framework at a global level, and obviously you would have thinking globally, acting locally, I think would probably be the mechanism that would work, which is how medicines, finance and telecoms all tend to work.  For now, that is missing.

The closest we have seen to it is the Digital Trust and Safety Partnership, which is an industry-wide effort; they've set out five principles and 35 best practices.  At that high level, it is content, technology and platform agnostic.  That would be the level at which something like this would need to exist.

It's touched upon in the Council of Europe's paper.  The main disadvantage with any kind of international oversight like that is the pace at which it can react.  You know, the industry we are in is very fast-paced and changing; the challenge would be how you get international, global principles or oversight that still has the flexibility to deal with emerging issues, and for that I do not have the answer yet.

>> ELISKA PIRKOVA: Thank you.

>> KATHLEEN STEWART: It may not be for me to have the answer.  I think all of the efforts that have worked very well have been efforts that involved industry, government, civil society and academia; it's when you get that combination that you get really good results when you're looking at frameworks.

>> ELISKA PIRKOVA: Thank you.  I am aware that we are now receiving questions from the audience.  But before we address those, I would still like to go back to Natali for a moment.  We speak about the role of online platforms and very large online platforms, but of course one could argue that before we even started discussing the new models of platform governance and how to make them better, there was media regulation that significantly informed the efforts still ongoing.

So where do you see that legacy, as an expert who also works on the regulation of media, and how does it translate to today's debates on the regulatory approach to the online space and platform governance?  Many thanks.

>> NATALI HELBERGER: Thank you for the challenging question.

I think ‑‑ I had a very interesting conversation on Europe's new approach, the European Media Freedom Act, and I think something that became clear there is that a lot of the rules for the media space were written for traditional broadcasting media.  Personally, I'm very hesitant about translating these rules one to one to platforms.  Sometimes it is argued that we should just expand media concentration rules or add platforms to the mix; I think this is really complicated because the sources of, for example, opinion power, and the dynamics and processes, are very different.

I think what we should do, and what the recommendation also does, is to see the issue of the regulation of platforms not in isolation from the bigger context of the media system in which they are functioning.  That is something we see in some of the regulatory efforts: there is a lot of attention to the content moderation responsibilities of platforms, but what we need to look at, and this is a point that the recommendation and the guidance also highlight very much, is how platforms fit into the broader ecosystem of media organizations that are subject to other rules.

So I do think that is something important to consider.  Something else we need to think about is how to revisit some regulatory concepts, for example media concentration, in the light of very, yeah, fundamental changes in what media means and in the dynamics and processes of media making and access to media.  So that would be my short answer to your very complicated question.  Maybe, if I may, I will come back to Kathleen on the governance issues.  I think you're right, this is an enormously complex issue, and I also totally see the importance of some approaches to standardization.  You mentioned the speed of these processes as a challenge, and again I very much agree with you on that, though I would like to highlight that the recommendation was a rather swift process.

The second challenge may also be that many values are simply contextual.  It is easier to globalize, to have global standards, for some values than for others.  So I think that is another factor that will keep us busy for years to come.

>> ELISKA PIRKOVA: Thank you.  Now we can actually turn to the questions from the audience; there are actually two questions by the same author.  The first question actually consists of two parts and is relatively long, but very much to the point.  Before I move to that, I would like to ask a question that is specifically directed to our state representative today, and the question is: are there ongoing multi-stakeholder processes addressing the challenges of illegal and harmful content and the broader content moderation challenges you raised during your presentation, specifically in the context of Japan?

 

>> YOICHI IIDA: Thank you very much for the very challenging question.

Yes, the content ‑‑ I think the question is asking about the situation of content moderation.

>> ELISKA PIRKOVA: The question specifically asks whether there are any ongoing multi-stakeholder processes addressing these challenges, so challenges related to potentially harmful but legal content, and also perhaps the broader content moderation challenges you raised during your presentation.

 

>> YOICHI IIDA: So I think one example is now being promoted by the global initiative called GPAI, the Global Partnership on AI.  In this framework, experts from various countries and various communities, including those with legal and economic backgrounds, but also some experts from civil society and, of course, the tech community, are joining together, and they are discussing how to implement trustworthy AI in specific applications.

What they are talking about now is how to ensure transparency in the content moderation used by a social network provider, one of the global platforms: analyzing its content moderation rules and sharing a common understanding of how it is working and how it judges individual pieces of content, which is harmful, which is not harmful, which can be acceptable, which cannot be acceptable.

This is very important for sharing the understanding and fostering multi-stakeholder efforts, not only in content moderation but also in fostering a common understanding, in a broader sense, of what can be acceptable for society.  I believe this can depend on the individual country, community and society, because each country has its own history, culture, background and situation, and this can differ from one country to another, from one community to another.

So as a government, we are promoting a kind of common understanding and nonbinding principles, for example AI principles or data governance principles.  Based on that approach, I think an individual country or society can build up its own governance system.  This is our basic approach, and the example being promoted by GPAI is a very good example of this approach.  We are strongly supporting these multi-stakeholder efforts.

>> ELISKA PIRKOVA: Thank you.  Now going back to the question which is addressed to all of you, and which is very challenging to answer, at least in my view, because it concerns the challenges connected to the extraterritorial impact of content moderation decisions by platforms that decide about content removal based on their terms of service, which then has global impact, and how that can potentially conflict with different requirements across jurisdictions around the globe.

The question then also addresses whether extraterritorial takedowns by democratic states may empower more authoritarian states to police content.  Take the NetzDG, the first of its kind anti-hate speech law, or whichever other nicknames policymakers have given this law since its adoption in recent years.  We know, based on the transparency reports required by this regulatory framework, that many of these takedowns, or indeed the majority of them, are actually performed based on the company's terms of service and not necessarily based on the state's regulation, in this case the German penal code.  That means that the terms of service still take precedence and often determine what will be considered permissible and acceptable online and what, on the other hand, will be removed from the online space or platforms.  Is there a risk of extraterritorial impact of content removals?  It's an interesting question because we have representatives of a state and of platforms, independent academia and other experts.  Each of you can answer this question from your own angle, and perhaps, Natali, we can start with you this time.  I know you have spent significant time analyzing the NetzDG and its implications, and not only in the EU; we know the NetzDG especially had a very big international spillover and inspired very many legislative responses in the field of content governance.

>> MATTHIAS KETTEMANN: Fascinating, multi-layered question; if I may jump into the middle of it: if we talk about the one case which globally, in the last years, caused calls of worldwide censorship and fears like that to come up, that wasn't a German case, that was a case in front of the Court of Justice of the European Union coming out of Austria.  At the time, lots of tech media said this would be the end of the free internet.  If you look around, has anything happened at all?  Yes, there was this one case where the Court of Justice of the European Union said European law does not forbid national courts from extending a global reach to their judgments, nor does international law forbid that they be applied globally.  But the proof of the pudding is in the eating: namely, are those judgments then enforced locally in 192 states?  No, of course they're not.

So legally, the impact of those single decisions is rather limited.  However, I think the real heart of the question goes rather to another question, namely: do companies nonetheless, perhaps out of fear or perhaps because they find it easier, still take down content globally based on one national law?

You know, I don't see very strong evidence of that happening globally, because if we did, we would have a much bigger outcry.  I'm still seeing criticism of the Turkish president on German Facebook.  I'm still seeing criticism of Thai politics outside of Thailand.  We are still seeing all of those things.

We saw Holocaust denial on U.S. Facebook until August, although a lot of European states have had lots of judgments saying Holocaust denial is illegal.

So I get the fear, I get the problem, and the author of the question is an excellent researcher working for the Internet & Jurisdiction project, which is developing rules on exactly that question.  They are at the forefront of answering those questions.  I don't see the end of the Internet anywhere near, and it won't end because of those kinds of global jurisdiction questions.

>> ELISKA PIRKOVA: Thank you, Matthias, for bringing that decision back to life.  Natali, I would like to hear your view on the extraterritorial impact of content moderation.

>> NATALI HELBERGER: Thank you for bringing optimism into this debate; I fully share it.  I think it's very true and part of a much bigger question, not limited to content moderation: the fact that media laws are national while the technologies we are talking about, and some of the companies we are talking about, operate globally.  I think what that highlights is the importance of focusing on organizing processes instead of deciding at the level of content what to remove or not, and, in that context, also establishing processes of contestability of those decisions.  Values are contextual, depending on the national setting, so if a nation or state finds itself infringed in its rights, we need processes of contestability so that users can contest takedown decisions.  And not only users: I think that is another important point of the recommendation; the news media recommendation also touches upon the right of news media organizations to contest takedown decisions if they interfere with editorial freedom.  That is another important component we don't talk a lot about: it is not only users that are affected by these decisions, but sometimes also the media and their ability to exercise their freedom of expression rights.

>> ELISKA PIRKOVA: Thank you.  Very well said.  And there is actually a follow-up from the audience.  Needless to say, the author is Frances, who works for the Internet & Jurisdiction project.  I will read his follow-up for you, Matthias: "I agree with you; the states outside the West are catching up with legislation with extraterritorial scope."  So that's some remarks from the audience, and we are grateful for those.  You still have 7 minutes to ask any follow-up questions of our participants.  Please don't be shy and post them in the chat.

I would like to go back to one of the core elements mentioned by all speakers.  It seems to be a priority for international organizations and civil society players.  I will dig deeper into the topic of meaningful transparency, also because it's a very important topic and a prominently featured, essential element of the guidance that we discussed today, as well as of the draft recommendation.

We now also have legislative proposals, legally binding proposals, that mandate criteria for meaningful transparency, and I would like to understand better what you actually envision under that topic as leading experts on this issue: what such a model of meaningful transparency should look like in order to be truly meaningful.  I am aware we have 6 minutes left, so maybe I can start with Kathleen and let you weigh in on the topic and perhaps add your concluding remarks, unless the audience still shoots some interesting questions your way.

>> KATHLEEN STEWART: Sorry about earlier, I lost my connectivity for a few minutes, so I missed some of the last question.

In terms of meaningful transparency, I think there's a risk of numbers for numbers' sake.  When you get into those metrics, regulators look for improvement without considering the wider measures that take place on platforms to try and increase trust and safety, you know, and those are the tools like proactive detection of content for review, demotions for content that's likely to be violating, all of those sorts of tools which can impact the prevalence of content that is potentially violating.

I think for meaningful transparency there are those two aspects: not just the metrics produced but also the quality of information about the system and the processes that are in place around it.  Otherwise, we are just at risk of overregulating and overenforcing on freedom of expression.

>> ELISKA PIRKOVA: Thank you.  Since we opened this session with the introductory remarks by the chair of the Council of Europe committee that is the author of the guidance that brought us here today, I would like to give the space to Natali one more time, to describe in 3 minutes the guidance and recommendation's approach to meaningful transparency, and perhaps the most essential criteria that have to be met in order to make transparency truly meaningful.  You can use this as the final concluding remarks of this panel.

>> NATALI HELBERGER: That's a big shoe to put on.  Thank you, Eliska.

Let's start with meaningful transparency, as the experts and rapporteurs of the guidance note and the entire committee elaborated on it.  One important aspect to understand is that we promote transparency not for the sake of transparency.  So the point is not having more data or having more transparency as such, but transparency in the service of a goal: accountability, oversight, the ability to exercise choices.

Something that the MSI committee has tried to work out in its recommendations is that we need not only to look at making this information available, but also at helping and creating the conditions so that this information can then be translated into action.

To give you an example, the data access provisions Matthias talked about are a major step forward, now entering a more serious state as legal text.  MSI-DIG is a first step.  To make this transparency meaningful, we need to create the conditions so researchers can fulfill this role, in terms of funding, recognition and rewards, and in terms of making sure the insights from this research then do indeed reach regulators and platforms.  So, accordingly, the recommendation, for example, has not only developed elaborate guidelines on access to data, but also calls on Member States to fund and promote rigorous and independent research and to create the conditions so that transparency can be turned into meaningful oversight.  And I think that is an important takeaway.  Thank you.

>> ELISKA PIRKOVA: Many thanks, and we definitely circled back to the idea of a tiered approach to transparency in order to make it truly meaningful.

So I think we are very much at the end.  I would like to express my gratitude again to all our amazing panelists for the great viewpoints they shared with us today.  Perhaps, to use Matthias's positive framing from his initial contribution: three years ago, or a few years back, there were many sorts of goals and ideals we wanted to see within regulation or guidance or any other tools in order to establish some due diligence, safeguards and other responsibilities on platforms.  Today we actually have the real drafts, we have documents such as the guidance note and the draft recommendation, and the quest for that ideal model of platform governance continues.  And perhaps, instead of worrying whether this is the end of the internet or what kind of disasters are coming our way, let's reflect on where regulation will actually be in five years; perhaps indeed we can achieve a human rights centric regulatory response to the many issues we are currently battling, whether at the level of state regulation or self-regulation.  Thank you very much again to everyone for being here tonight, and we will definitely stay in touch.  Have a good evening.