IGF 2021 - Day 2 - Town Hall #38 The New Santa Clara Principles

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.  We all need it to be open and safe.  We all want to trust --

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> We are all united.

   >> JILLIAN YORK:  Hello, everyone.  Welcome.  This session is on the new Santa Clara Principles and we're going to be presenting the new principles, talking about our consultation process that we've undertaken over the past almost two years now, and we've got some remote participants that are going to join us to discuss various aspects of this process and of the principles themselves.  So, welcome if you're here for the session.

So, first, I just want to start by introducing the principles, a little bit of their history, and what the new principles represent.  In 2018, a group of people on the sidelines of a conference in California came together to create a set of principles that would seek to push companies to be more transparent in their content moderation and policymaking practices.  Those principles were launched three years ago now, and over time we received a lot of positive feedback.  We were effective in getting a number of companies, including some of the biggest ones, Facebook, or Meta, Twitter, Reddit, YouTube, as well as a few others, to endorse the principles, and we got Reddit to actually comply with the principles in full.

These principles focused on three primary areas: numbers, notice, and appeal.  So, transparency around the numbers of content removals and other aspects of moderation.  Notice, meaning notice to users, letting them know when they've violated a rule and how they can effectively appeal that.  And then, of course, appeals, ensuring that every system has a robust appeals process so that users can access remedy.

We also received a lot of feedback about the principles: that there were pieces we missed, that there were other elements our allies around the world hoped we would include, and as such we embarked, I believe in December 2019, so two years ago now, on a process to do a full consultation with various people around the world, and David will talk about that in a moment.  Without further ado, I'm going to introduce the new principles; you can find them at santaclaraprinciples.org if you want to follow along.  We'll start with the foundational principles, and we'll have someone talk about why we chose that particular mode a little later on.

Our first foundational principle focuses on human rights and due process, and it says that companies should ensure that human rights and due process considerations are integrated at all stages of the content moderation process, and of course they should be transparent about how that integration is made.

The second focus is on understandable rules and policies: that companies should publish clear and precise rules and policies relating to the actions they take with respect to users' content and accounts, and have those policies available in easily accessible and centralized locations.  Everyone has experienced certain platforms, which shall remain nameless, where it's really hard to find and follow the rules.

Third is cultural competence, and I think this will resonate with a lot of people; it was probably the number one thing we heard in our global consultation process: that companies need to ensure that the people making moderation and appeal decisions understand the language, the culture, and the political and social context of the content they are moderating, which unfortunately is still not the case at most of these platforms, and we've seen through the recent leaks how uneven a lot of this moderation is, particularly when it comes to language, so that's -- oh, no.  There we go.  That's another incredibly important element of this process, and of course there is more detail on that that you can see on the website.

Four, state involvement in content moderation: that companies should recognize the particular risks to users' rights that result from state involvement in content moderation processes.  Here again, I'd say we've seen an uptick in state demands for the removal of political speech and other speech, and some companies are all too happy to comply with those demands.

So, this also includes things like a state's involvement in the development and enforcement of a company's rules and policies, and the special concerns that are raised by demands and requests from state actors.

The fifth foundational principle is integrity and explainability.  We want companies to ensure that content moderation systems, including both automated and non-automated processes, work reliably and effectively.

And then finally, we also have operational principles, and the first two are the same -- or, I'm sorry, the first three are the same, although they're much expanded from the previous versions, so again, they focus on numbers, notice, and appeals.  I won't get into the details there for the sake of time, but you can read them in detail on the site, which we've got on the screen right now as well.

And then, finally, we've added two new components: principles for governments and other state actors.  First, we want to ensure that governments and other state actors are removing barriers to transparency by companies.  We know that often, and I think China is the most famous example of this, some governments place restrictions on what companies are allowed to publish about those governments' own requests to them.  And the second is the promotion of government transparency: that governments and other state actors should themselves report their involvement in content moderation decisions, including data on demands or requests for content to be actioned or accounts to be suspended, broken down by the legal basis for the request.
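To make that last idea concrete, here is a minimal sketch, in Python, of how state-demand records might be structured so that a transparency report can be aggregated by legal basis, as the principle asks.  Everything here, the schema, field names, and values, is an illustrative assumption rather than anything specified by the principles:

```python
# Hypothetical transparency-report records: one entry per state demand, so
# reports can later be broken down by the legal basis for each request.
from collections import Counter
from dataclasses import dataclass

@dataclass
class StateDemand:
    country: str        # the demanding state
    legal_basis: str    # e.g. "court order", "police request", "informal referral"
    action_sought: str  # e.g. "content_removal" or "account_suspension"
    complied: bool      # whether the platform actioned the demand

def breakdown_by_legal_basis(demands: list[StateDemand]) -> Counter:
    """Aggregate demands by legal basis, the breakdown the principle calls for."""
    return Counter(d.legal_basis for d in demands)

demands = [
    StateDemand("Exampleland", "court order", "content_removal", True),
    StateDemand("Exampleland", "informal referral", "content_removal", False),
]
print(breakdown_by_legal_basis(demands))
# Counter({'court order': 1, 'informal referral': 1})
```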

So that's just a basic summary of the principles themselves, and we'll get more into detail as we discuss how these can be implemented, so I'll turn it over to David now for more information about our consultation, our two-year consultation process.

   >> DAVID GREENE:  Thank you, Jillian.  I also want to point out that we do have several representatives of the organizations that were co-authors of the principles who are participating remotely, and I'm going to invite them to join in as well.  In the order they're appearing on my screen: Spandana Singh from the Open Technology Institute, Caitlin from the Center for Democracy and Technology, Richard from Global Partners Digital, Grecia Macias from R3D, and let's see, I know I saw some others as well, our colleague Virada, and I thought I saw Laura Hecht-Felella as well, and if I missed anybody, please raise your hand and speak up.

I think one of the most ambitious things about this reimagining of the Santa Clara Principles is the global open consultation process that led up to this revision.  It was really important to try to get as much feedback as possible about the existing principles and what was missing in them from as many concerned people in the world as possible.  When we started planning this, it was before the pandemic, and we had really envisioned using a lot of the international conferences, such as this one and RightsCon, as places to gather people and have, you know, in-person meetings where we would be taking notes and have spontaneous discussions.  Obviously the pandemic made that much more difficult, but even so we're really pleased with what happened.  Not only did we receive a lot of written submissions from around the world, but we also had partners around the world conduct regional sessions virtually, each really focused on what the specific concerns were in their region.  So, we had a consultation in Latin America for folks in Latin America, for American concerns we had an American conversation, we had an Africa consultation, we had a consultation in India, and one for folks in Europe as well, and these all proved to be tremendously valuable in providing input.  I'm going to ask Vladimir to talk about this process as well, since he was involved both in helping to plan the Latin American consultation and in reviewing a lot of the inputs from the consultations.  So, Vladimir?

>> VLADIMIR:  Hi, everyone.  Thank you very much, David and Jillian.  It's an honor to be here finally, after these two years of a process, finally presenting this very rich and very special debate and the processes that we reviewed during this consultation period.  We, from Article 19 Mexico and Central America, were supporting these big efforts from the Electronic Frontier Foundation and some other organizations to begin the dialogue, to look at the context, to look at what the needs were, not just in Mexico but in other parts of Latin America, and then to start reviewing and learning what was part of the reflections, discussions, and inputs from other parts of the world.  So, it was really interesting to know what the focus and the reflections from Taiwan and India were, and what the things were that other organizations and stakeholders in different parts of the world were raising.  It was a chance not just to look at what was happening in our region, but also to expand on these particular things that worry different stakeholders when we refer to content moderation.

And then after that, it was looking at the things that we agreed on, the things that we believed needed to be incorporated in the new Santa Clara Principles 2.0 version, the things on which we had to have a broader discussion, the things that we can request of social media companies, and the lines or limits on the kinds of information we can request, because it can create a risk in terms of, for example, privacy.  It's like, okay, we can request certain information, but we also have to think about other aspects and other elements when requesting certain types of data.

And of particular interest for us in Mexico, together with other organizations such as R3D, the Network in Defense of Digital Rights, which Grecia Macias is here from, was the relationship between states and social media companies.  We understand and we believe that there needs to be more transparency on what this particular relationship is: which state institutions are requesting that social media companies take down certain information, and whether those requests are accompanied by legal orders, or whether it's just someone from the state, mentioned in a critical journalistic piece, who wants to take down this information because it affects their image, and so on.  So there's a need to know more about this.

Just to mention, Mexico in the last years has sent social media companies like Facebook, Google, and Twitter around 30,000 takedown requests.  We need this to be more transparent, we need to have more information about it, and the new Santa Clara Principles introduce transparency elements that are relevant and important not just for social media companies, but also to insist on and strengthen the transparency work that states must comply with, in order to understand this relationship and in order to protect freedom of expression and access to information.

So, I will just leave it there, saying that it was a really incredible effort from different organizations to have the consultation, to have the dialogue, to continue demanding transparency and accountability from all the stakeholders, and to put users back at the center and protect their freedom of expression and access to information.  Thank you very much.

   >> JILLIAN YORK:  Thank you so much, Vladimir.  That was really helpful context, and I appreciate hearing your perspective on it.  I'm sorry, your regional perspective on it.  The next thing we'd like to do is talk a little bit about the implementation of the principles and what we're looking for from various stakeholders, from advocates to governments to the companies themselves.  To that end, we've created a set of toolkits.  These are available on the website under implementation guides, and we have a toolkit for advocates, one for companies, and a note for regulators.  So, I'd like to turn it over to Grecia, if you're there on Zoom, to talk about that process, which you were heavily involved in, and we're also going to share the website as we go along.

   >> GRECIA MACIAS:  Sure.  Thank you so much, Jillian.  I'm so happy to be here today and see the new Santa Clara Principles finally published after all the work that has been done for this.  So, yeah, we decided to make some toolkits for the implementation of these principles, to give some kind of orientation to the main actors that are or will be involved with the Santa Clara Principles.  A fun fact is that at first the note for regulators was supposed to be a toolkit, but when we were drafting the note, I realized it was less a toolkit and more a set of notes saying: don't do this, please do not use the Santa Clara Principles as legislation.  There are some caveats that you have to take into consideration, such as your own regional legislation and your own framework of international treaties, for example, and it's not the same to take the Santa Clara Principles in the context of the U.S. as here in Mexico City, in Mexico, or in other countries of Latin America especially, because we have a different human rights framework.  So, yeah, we tried to talk about that.

We also wanted to talk about scale, and to remind regulators that there are different kinds of companies; some of them will only meet one of the principles and others will meet all of them, and that's okay.  It depends on the size, the number of users, the capitalization, and the number of actors that they have to take into consideration.

Another thing that we talked about is the potential for exploitation.  As Vladimir was just saying, here in Mexico we have seen attempts to use regulation, kind of invoking the Santa Clara Principles, in ways that impact the free and competitive Internet that we all want to have.  And the landscape changes, and there is danger in that: you cannot freeze legislation and not expect that such legislation will have to change in a few years.

The other toolkit that we developed was for advocacy, especially because we know that it's hard to deal with some of the key actors or targets that are involved in the implementation of the principles.  For example, we just talked about state actors, and first off we make the point that state actors must abstain from passing legislation that hinders human rights, and should take into consideration the note for regulators, and that these principles are not intended to be a template for regulation, because we know that some legislators just want to cut and paste, and it's not as simple as that.

The other thing was how to engage with social media platforms and how to communicate with them, and also to encourage members of civil society, companies, and other stakeholders to work together to develop implementation plans in consultation with one another, especially to develop a roadmap to the various principles.  And also how we can use the Santa Clara Principles in advocacy: explaining that the principles exist, explaining their history and the relevance of taking them into consideration when talking about moderation, organizing face-to-face meetings with companies and face-to-face discussions with relevant state actors, facilitating actions with directors of the companies, and also holding press conferences to explain the Santa Clara Principles and show issues experienced on social media platforms, using all of these opportunities to engage in new conversations where we can, in each country, each legislature, and each context, develop more beneficial guidelines for freedom of speech and human rights in the Internet scope.  And the last toolkit that we made is the toolkit for companies.  It's meant to explain to the different companies how to implement these principles, to provide insight on how they should be implemented and the good practices that should be taken into consideration.

It also explains the operational principles, which set out specific practices for companies with respect to different stages and aspects of the content moderation process.  And we included a call to action for platforms to recognize the growing demand from users and civil society, and how these kinds of principles will help to nourish a freer, more human rights-centered Internet.  And yeah, that's basically the general idea of the toolkits.

   >> DAVID GREENE:  Thank you so much, Grecia.  I wanted to bring in Richard Wingfield at this point, because one of the big changes in the new version of the Santa Clara Principles is the addition of the foundational principles, and I would like Richard to talk about why that decision was made and to say more about the foundational principles and what their value is.

   >> RICHARD WINGFIELD:  Absolutely.  It's wonderful to be here, and it's been wonderful to be part of this effort.  As we received feedback from consultative events that took place in pretty much every region, from academia, the private sector, and all the other sectors, a lot of the themes we saw emerge really cut across the existing principles, the three principles that were found in the first iteration.  As we were trying to think through how to incorporate some of these pieces of feedback and some of these recommendations into the existing principles, we found it quite difficult at times to work out where precisely they fit, because some of the themes cut across all of them.

We tried to group together a number of recommendations into broader categories, I suppose you could say, which have ultimately become the foundational principles.  We drew inspiration for the idea of having foundational principles from another instrument, the UN Guiding Principles on Business and Human Rights, which sets out foundational principles and then operational principles when talking about the roles and responsibilities of governments and the private sector to respect human rights.

So, drawing from those other instruments, we then looked to see how best to develop foundational principles in a way that would be constructive and useful, and I think if you look at the new version of the SCPs, you will see a difference between the way the foundational principles are drafted and the operational principles, which build upon the three original ones.

One of the big differences between the two is that the operational principles set very clear and, quite rightly, demanding expectations of the very largest platforms and intermediaries when it comes to transparency, and we wanted to make sure that the SCPs weren't only relevant for the largest companies but would provide a source of guidance and inspiration to companies of all sizes that are looking to be more transparent.  The foundational principles, and this is one of the things I particularly like about them, are crafted in a way that any company can look to as it proceeds with its work, as opposed to the operational principles with their fuller expectations.

The other difference between the foundational principles and the others is that the foundational principles are drafted with a principle first and then some detail around implementation.  We drew inspiration for this approach from another document that many people will probably be familiar with, the Global Network Initiative Principles, which also contain implementation guidelines.  So there is the distinction between the principle, which is essentially the value that we believe should be embedded and recognized by the company, and then implementation, as in how you translate that value into a concrete mechanism internally, or some kind of process or structure to reflect it in practical terms.  That's a little as to why we decided to introduce the foundational principles and the structure that they take.

If it's okay, I'd like to just briefly go through them all.  I know that David did so at the start, but perhaps I can elaborate a little bit on why we ended up with these five.

By far and away, one of the most important pieces of feedback that we got was the importance of embedding human rights throughout all aspects of transparency, and of course, in many ways, the corporate responsibility to respect human rights in the context of online platforms really does necessitate transparency over what's going on.  A lot of people also felt that due process, which is a concept known in many legal traditions, was important so that decisions were fair.  So the first, and we deliberately made it the very first, foundational principle is that recognition that human rights really underpin everything that happens; a strong embedding of human rights and due process into all of a company's operations, but particularly content moderation decisions and transparency around them, was important in our eyes to be the first.

We then looked at understandable rules and policies.  A number of pieces of feedback from the consultation looked not just at the content decisions that were made by companies, the sort of principles around numbers and appeals, but at the policies in the first place: a lack of clarity, a lack of availability in relevant languages, ambiguity that made it difficult for users to know what was and wasn't allowed, or that gave too much discretion to content moderators themselves.  So we decided to introduce a foundational principle around the rules and policies themselves, which almost goes without saying but wasn't actually said in the original version of the principles.

For those of you who are looking at how the UN Guiding Principles should be understood by online platforms, particularly around freedom of expression, a lot of work by people like David Kaye emphasizes that just as states have a duty to ensure they have clear and precise laws around freedom of expression, so companies should have clear and precise rules and policies.  This is really a good way of translating some of that language and those expectations from the right to freedom of expression into the transparency process.

Cultural competence, as David said, was perhaps also one of the most widely provided pieces of feedback, and I think it goes without saying, and in fact we see reports of it all the time, that the lack of understanding of different languages, of different dialects, of different cultural factors in different parts of the world is a huge barrier to fair and human rights-based content moderation, not to mention the fact that transparency is often not available in multiple languages, but often only in English.

So, the cultural competence foundational principle really tries to emphasize this point: if you're going to operate your platform across the world and offer your services in these languages, you need to make sure that your content moderation process is as good in those languages as in the language where your company is based, which in many cases will be English.

We emphasized that there, and hope this can be a further nudge to the many companies that have reduced levels of investment in particular parts of the world, fewer moderators, and a lack of understanding of differences not only between languages but within languages, of different dialects and forms of languages, to really ensure that those parts of the world are not left behind as they move forward and improve their content moderation processes.

The fourth is state involvement, which has already been mentioned, and you'll see the relevance of governments come through in a number of ways in the new principles.  Not only is the role of governments listed here in the foundational principles, but you'll also see some of the operational principles make particular reference to the role of governments, and there are of course principles for governments and regulators as well.  It's impossible now to talk about complete transparency without thinking about governments involving themselves in corporate decision-making, or regulating companies to mandate particular removals or retentions of pieces of content, and so we really wanted to make clear that companies should be particularly transparent when it comes to what governments are doing and the demands that they're making of those companies.

And then, finally, integrity and explainability, and there was some back and forth during the consultation process as to the role of automation and automated decision-making which we know is a huge part of content moderation now, and how far we should go into detail as to how companies should be transparent about the use of automated processes and machine learning.

One of the ways we've done this is through this fifth foundational principle, which requires companies to make sure that all of their decisions are explainable, and this is particularly important when you've got the involvement of machine learning or non-human processes in content moderation.  So, we included that so that throughout the lifecycle of content moderation, from setting rules to appeals and beyond, there is an understanding and explainability of what is happening and why.
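As a rough illustration of what per-decision explainability could mean in practice, here is a minimal sketch of a moderation decision record that carries its own rule citation, provenance, and appeal route.  The field names are hypothetical; the principles describe the expectation, not any particular data format:

```python
# Hypothetical record for an explainable moderation decision, whether it was
# made by a human reviewer or an automated classifier.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationDecision:
    content_id: str
    rule_violated: str                 # the specific published rule relied on
    decided_by: str                    # "human" or "automated"
    model_confidence: Optional[float]  # set only for automated decisions
    explanation: str                   # user-facing reason, in the user's language
    appealable: bool = True            # due process: a route to remedy

decision = ModerationDecision(
    content_id="post-123",
    rule_violated="harassment-policy/3.2",
    decided_by="automated",
    model_confidence=0.97,
    explanation="Removed: matched harassment policy, section 3.2.",
)
print(decision.explanation)
```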

Very happy to answer any more questions later, but otherwise, hopefully that's given a bit of explanation of how we landed on the foundational principles and their wording.

   >> DAVID GREENE:  Thank you, Richard.  I now want to bring in Spandana Singh to talk about one of the other products that's being launched today which is a report that summarizes the whole global open consultation, and I'm going to ask Spandana to talk about the report for a bit.

   >> SPANDANA SINGH:  Thanks, David.  Like David mentioned, the report summarizes the feedback that we received from the sort of live, virtual consultations that we held as well as the written submissions that we received, and I would really encourage folks to take a look at the report as you read through the principles and the toolkits as well because I think that the report has some really rich insights, which reflect how perceptions on transparency and accountability have evolved since 2018 and how they're evolving in different regions and communities.  It's definitely not homogenous, and I think I definitely learned a lot about how different stakeholders are viewing transparency and accountability efforts, and I think going forward, the report can also provide very interesting insights to companies who are trying to think through how can they improve their transparency efforts, advocates looking through how they can launch, you know, more refined advocacy efforts, and policymakers trying to understand how the space is evolving over time.

   >> DAVID GREENE:  Great.  Thanks, Spandana.  I think all of us involved in the process will say that we learned a lot.  You know, all of us involved in the process, I think, also consider ourselves to be experts in this area, but we all said that we learned a lot from the feedback that we received.  It was all very high level, all very well informed, all really specific to the cultural concerns of the participants, and I think it was way more beneficial than we had ever even hoped it would be.  Again, we do recommend that you read the report to see that, and importantly, the report includes feedback that ultimately did not make it into the principles; we thought it was important to include that as well.  A lot of people's concerns were very different and all over the place, and we think the report really captured that.

One of the issues that we had to confront -- and here at IGF I've been seeing a lot of principles introduced, and one of the challenges to all of them is the problem of scale -- is that one of the big changes from the original Santa Clara Principles to this version is that the originals were minimum standards, and the new Santa Clara Principles no longer are.  They are simply standards.

What we found, what our suspicion was and what was confirmed by the feedback we received, was that by setting minimum standards, we really were setting minimum standards for very few companies, and not standards that were widely applicable to most online services.  So while it was really valuable to be able to judge whether some very large and well-capitalized companies were complying, their relevance as benchmarks for other services was less clear.  So the decision was made, and again everyone confronts this problem of scale, that these should just be standards.  They're benchmarks, and many companies should actually be doing more; for others, this should be what they aspire to, but maybe not what they're ready to do yet.  And as Richard said, the foundational principles are really things that can be implemented from the very start, and we do think it's important that there be due process by design from the beginning; even when a company is brand new and has very few resources or few users, it's important to be thinking about the due process concerns from the beginning.

But we did realize that, on the one hand, in adding so much detail to the operational principles, it would be unrealistic to expect many companies to comply with these right away.  So -- yes, Jillian?  Do you want to take that?

   >> JILLIAN YORK:  Yeah.  Hybrid models are fun.  The other element that's new this time around, as watchers of these principles may have noticed, is that this is the first time that we've included principles directed at states.  This is in recognition of the very real role that states play both in restricting transparency initiatives by companies and in restricting content as such, let's say.  So as such, we found it vital to ensure that we're not just targeting companies in this, but that we're also looking at the problem holistically.  And it is our hope that the toolkit, the note to regulators -- I'm sorry, the toolkit for advocates -- helps in finding paths to advocate around the principles, both toward companies and toward states.

   >> DAVID GREENE:  And I think we'll bring in Caitlin at this point, from CDT, to just give her impressions, and CDT's impressions, of the new principles.

>> CAITLIN:  I'm standing in for my colleague Emma, who wishes she could be here but unfortunately had another engagement; she represented CDT on the revised Santa Clara Principles.  I want to emphasize a point that Grecia made earlier about the note to regulators and how the principles are not intended to stand in for model legislation or be adopted wholesale into regulation.  I think, you know, one of the things about transparency is that there can often be tensions and tradeoffs around it, whether that's things like David was just mentioning, issues of scale and burdening smaller companies, or overwhelming users with a lot of information they may not find useful.  Those types of tensions and tradeoffs are all things that the groups working on the revised principles had to think about and discuss, and also things lawmakers have to think about and discuss when they try to regulate for transparency.

But those conversations are happening in two very different contexts, and it's very different when civil society organizations are negotiating voluntary recommendations and when lawmakers or policymakers are negotiating laws and regulations.  So I think the emphasis, for policymakers, in the note to regulators that this is not model legislation, that this is not intended to be adopted as law, is very important, because we have seen attempts to do that with the first Santa Clara Principles around the world, and also in the U.S. in some state laws that are now coming forward with transparency requirements.

So, that was one thing very important to CDT, and definitely a learning experience as we were working with the other groups on this process.

   >> DAVID GREENE:  Among the other decisions: from the global consultation, in terms of transparency reporting, there were a lot of requests for very, very highly specific information, and I think we realized that while we understood why these requests were being made, in terms of reporting things like racial information about users, they also ran counter to user privacy interests, and we did not want to be in a situation where the principles were being used as justification for companies to collect more information about users than they should collect or otherwise would collect.  So I'm going to ask Spandana if she can talk a little bit more about this, with the caveat that I did not warn her I was going to ask her to talk about this ahead of time.  But Spandana, I know this was something of special concern to you, so I wonder if you can speak about it.

   >> SPANDANA SINGH:  Sure.  Yeah.  I mean, I think it was interesting to see how granular some of the requests for transparency reporting were, and I think they really reflect concerns from advocates around the world that content moderation practices and tools are discriminatory and are harmful to very specific communities.

You know, we received a range of requests, and many are outlined in the report, including breakdowns by users' racial background, gender, and other affiliations, and I think the intent behind wanting that data is really very clear, especially in today's world.  But like David mentioned, asking companies to collect this data also opens up a whole other can of worms, you know, without appropriate safeguards to ensure that they're not using this data for other purposes like targeting or other algorithmic uses.  I think this is definitely an area of platform transparency where we're seeing more happening now, and perhaps companies can't collect or report on these kinds of issues in a transparency report, but perhaps there are other ways they can share more sensitive information, or information on the impact of content moderation processes, with smaller groups like researchers.  This is an area that the principles don't really touch on, but the principles can act as a foundation for these kinds of conversations in the future.

   >> DAVID GREENE:  I'm going to bring Vladimir back in as well.  I'm giving him time to take off his mask to just comment on that also.

>> VLADIMIR:  Yeah, definitely, when we were seeing all the inputs around discrimination, when we were trying to get a better understanding of how social media companies moderate with regard to LGBT communities, minorities, indigenous communities, and other communities that exercise their different rights and use social media, and the things those communities are facing in terms of violence, in terms of takedowns, in terms of what they are experiencing, there was for sure this need for a better understanding and better knowledge of how companies are taking these decisions, and I think in that regard the principle on explainability is particularly relevant.  But then we also had inputs and discussions about what the limit is: do we really need to know how social media companies are acting with regard to a specific group?  At that level of granularity, there was also the consideration that this might affect not just those who are impacted by the moderation, but also, for example, the moderators: do we really need to know the background of the people who are conducting the human moderation?  So, I think there was a series of questions that were relevant to that part of the conversation, but also, perhaps, thinking about how we move towards meaningful transparency, which does not necessarily mean having more data or more numbers or that type of information.

So, I think it was an important discussion, but in the end it was also about thinking how this is relevant in terms of privacy; in terms of not collecting, and not requesting, that information or data from users, whether by social media companies or by states demanding it.  So, I think it was an important part of the discussion, and I just want to point out that it's relevant to pay attention to the explainability principles, and for sure to granularity and meaningful transparency, but also to how we protect users and the information and data they are providing; that is maybe part of another discussion, but, yeah, just to point those things out.  Thank you.

   >> DAVID GREENE:  And one of the other -- oh, big camera spin there.  One of the other things that I think is important to realize, and this is evident when you read the report, is that much of the feedback was also very concerned with online harms and with making sure that there was integrity in the whole process, so that when users expressed concerns, when they wanted things to be taken down because those things were harmful within their community, there would be integrity in that process as well, so that the right decisions were made there.  And I do think that one thing that's noticeable about the Santa Clara Principles is that in some ways they're content moderation neutral; they don't come out and say whether it should happen, and they recognize that there are situations where companies may well choose, and have good reasons, to remove content or limit accounts, but that they should do so within a human rights framework when they do.  That was something, again, that was very evident in the comments we received.

We do have a question.  We have OFCOM in the room so we have a question from the audience.

>> OFCOM:  Colin Curry from OFCOM, the communications regulator, and soon to be the online safety regulator when the Online Safety Bill arrives at some point in the future.  I want to pick up on what everyone has said about the potential pitfalls of transparency, or adverse impacts; that's something that we're thinking about, not least because we're subject to the provision on general monitoring that comes from the e-Commerce Directive.  I just wanted to ask the panelists, and forgive me that I haven't had a chance to review the report, but I'm very much looking forward to it, whether you had come across any kind of suggestions or concrete recommendations around what I'm increasingly thinking of as more qualitative transparency, as opposed to sheer metrics, or even metrics that might be privacy protecting.  So, you know, Frances, when she came and spoke to the UK Joint Parliamentary Committee, made certain recommendations around aspects of transparency that could shed light on the content or the processes of platforms without necessarily putting people's user data in jeopardy.  One of the things that she had put up, or that is flying around, is this P95: how much hate is the 95th-percentile user seeing?  Or things like just the URLs that are being shared most often.

So, I wondered whether in the course of the consultations you had come across any nuggets like that, because I think that in addition to the principles and points of high-level consensus, those types of recommendations would also be particularly useful at this point in time.  Thank you.
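For readers unfamiliar with the P95 idea Colin mentions, the metric can be computed from per-user exposure counts alone, without publishing any individual's data.  A minimal sketch with made-up numbers, assuming the platform already counts how many harmful items each user saw:

```python
# "P95 exposure": how much harmful content the 95th-percentile user sees.
def p95_exposure(per_user_harm_counts: list[int]) -> int:
    ordered = sorted(per_user_harm_counts)
    # index of the 95th-percentile user, clamped to the last element
    index = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[index]

# e.g. counts of hateful items each (anonymous) user saw this week
print(p95_exposure([0, 0, 1, 0, 2, 0, 0, 14, 0, 3]))  # -> 14
```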

   >> DAVID GREENE:  Yeah.  Thanks for the question.  I'm not sure off the top of my head.  In some ways, one of the things that is a little different from how OFCOM and other regulators would be focused, and one of the reasons I think there was such a strong sentiment to say that the Santa Clara Principles are not a template for regulation, is that they're very much user focused, and the transparency really serves the point of letting users understand, and have trust in, and believe that their rights are being respected online.  So the recommendations really were very much about what information we as individual users would find helpful.  It's just different from what I think regulators would want.  And I think that was pretty consistent running through the comments that we received.

   >> JILLIAN YORK:  I think when it comes to qualitative metrics around content removal in certain categories, that's an area that we found to be very difficult, and I'm not sure that I can, you know, in good faith agree with anything that Frances Haugen has said there.  And I think there are also questions as to how much those benefit users versus regulators, and I think that we've seen over the years that users benefit greatly from seeing numbers, because it helps them to make better decisions about platforms.

On the other hand, we did get some qualitative recommendations around certain other categories, and one of the most interesting ones, of course, is really understanding the role of different humans in the content moderation process.  Obviously, so much of the content moderation we see right now is algorithmic, but it was interesting that around the world, from different places, and I think in the report we see this from Lawyers Hub in Kenya and from the Montreal AI Ethics Institute, people wanted to see not just percentages but also the people's backgrounds, their roles in the content moderation process, their professional experience: are they lawyers, or are they low-wage workers with minimal training, which is mostly the case at the moment?  As well as what the policies are for their protection, the incentives afforded to them, how their performance is measured, and other workplace initiatives.  I think that's an element often left out of this conversation, because, you're right, so much of it does focus on quantitative metrics.

And so these were certainly some of the most interesting recommendations that we saw, and that's in the section under Principle 2, due process throughout the content decision-making system.

   >> DAVID GREENE:  And one of the other ones you might want to look to: there was a very detailed submission from Lapin in Brazil that talked about the qualitative goal of explainability and proposed an explainability principle, and that's in the report as well.  Much of that was pulled out into what became the integrity and explainability foundational principle, and I think that would be a place to look as well.

And if any of the other co-authors who are on the line have an answer, feel free to jump in as well.

But barring that, we have a little time left and I think it might be worth spending a little more time talking about the specific concerns for AI and automated decision-making, and so I'm going to ask Richard, again, to come back on and talk about those.

   >> RICHARD WINGFIELD:  Thanks, David.  It may in part help respond a little bit to Colin's question, I think, because regulators are perhaps looking more at these systems and the processes that companies use, which inform content moderation, as opposed to other metrics.  That question around algorithmic transparency is really one of the hot topics of the moment: how do we make sure, whether as users or regulators, that we understand how companies are using automated processes and machine learning when it comes to content moderation, and how can we have confidence that those systems are being used in ways which are effective but also human rights respecting?

And I think there is going to have to be a bit of an iterative process, and trial and error, in terms of what good transparency in this field looks like.  I don't think we want just to have code published.  I don't think we want to leave transparency only to a small, select group of invitees; there has to be something for users as well, but it must be information that is genuinely useful and helpful to them in understanding why a company has made a particular decision.

So, throughout the Santa Clara Principles 2.0, you'll see a number of references to automated processes, and almost always the transparency being sought is qualitative rather than quantitative.  So it is both an understanding of the circumstances in which a platform uses automated processes at all, and occasionally a look at numbers, but in the sense of what proportion of different kinds of content flagged or removed comes from automated detection rather than user flagging or some other kind of process.

And then explainability around, for example, the quality control for how accurate those AI systems are.  In fact, we say very early on in the second iteration of the Santa Clara Principles that automated processes to identify or remove content or suspend accounts should only be used when there is high confidence in the quality and accuracy of those processes.  So: is automation being used for types of content where accuracy is actually very low, and how is that being improved over time?  Do you have different success rates for different languages, for example?  Those are, I think, some of the examples that we've tried to draw into the Santa Clara Principles to provide some guidance, at this early stage, on what good transparency could look like when it comes to the use of algorithms.  It's a bit of a cliché to say now that COVID-19 has created lots of new challenges in this space, but we do know it has accelerated the shift to automation of content moderation, already a significant shift, and transparency in this space is critical.  I hope the new iteration of the SCPs can be part of that, and maybe of some help to the regulators, with the caveat that this isn't a template to be copied and pasted.
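As a sketch of the "high confidence" expectation Richard quotes, automated enforcement can be gated on per-language accuracy thresholds, with everything below the bar routed to human review.  The thresholds, language codes, and function names here are assumptions for illustration, not anything the principles prescribe:

```python
# Gate automated action on classifier confidence, per language. Where accuracy
# is unproven (the default), the threshold is unreachable and a human decides.
CONFIDENCE_THRESHOLDS = {
    "en": 0.98,       # well-resourced language with measured classifier accuracy
    "default": 1.01,  # effectively disables automation elsewhere
}

def route(classifier_confidence: float, language: str) -> str:
    threshold = CONFIDENCE_THRESHOLDS.get(language, CONFIDENCE_THRESHOLDS["default"])
    if classifier_confidence >= threshold:
        return "automated_action"  # still logged, explained, and appealable
    return "human_review"          # low confidence: a person decides

print(route(0.99, "en"))  # automated_action
print(route(0.99, "sw"))  # human_review: no measured accuracy for this language
```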

   >> JILLIAN YORK:  Thank you so much, Richard, for the extra context there, and I think that's a really, really positive note to close on.  We've got just three minutes left, and we wanted to share briefly the list of authors, so we can give full credit to the folks who participated in the consultation process and the writing of the new principles, as well as the report and toolkits.  There are also some acknowledgments of some of the people, not all, who provided input to the consultation process or participated in one of the online consultations.  Many thanks to all of those organizations listed there.  I will not read them off, because I think you can all find them.

But thank you to all of our partners.  This was very, very much a collaborative effort, and we're very happy to see these out in the world.  Thank you all for coming.  We'll stick around a little bit outside, I'm sure, if you have questions for us.  I hope that you can find a way to use the principles in your advocacy.  Thanks.