IGF 2021 - Day 2 - Launch / Award Event #63 Fighting disinformation on the Internet beyond censoring: a study on public officials responsibility

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.

>> We all need it to be open and safe.  We all want trust --

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> We are all united.

   >> AGUSTINA DEL CAMPO:  Okay.  That's our cue, the opening video for all sessions at IGF.  Welcome, everyone.  Thanks for joining us.  Special thanks to our panelists and presenters this afternoon.  My name is Agustina Del Campo and I direct the Center for Studies on Freedom of Expression and Access to Information (CELE).  It's a pleasure to be here and to have such great panelists joining me today.  What we're discussing today is a paper that we prepared at CELE with the Inter-American Institute of Human Rights on the issue of disinformation by public officials.  As you have probably seen from the IGF schedule, there are several panels dealing with disinformation.  The issue has gained renewed relevance and attention since 2016, and there is grave concern among states, civil society, and academia about how to deal with the circulation and dissemination of fake news and disinformation.  What we planned here was to present research that we conducted this year on the role of public officials and their discourse in the dissemination of disinformation, and on how at least some of the legal systems within Latin America deal with this concern.

A recent study published in the UK showed that public officials were responsible for around 20% of the total disinformation circulating online, but that 20% receives nearly 75% of all the interactions that people have with fake news and disinformation.  So it's a huge impact from only 20% of the content being generated.

To present the paper, we have Eduardo Bertoni.  Eduardo is the representative of the Regional Office for South America of the Inter-American Institute of Human Rights, former director of the national access to public information agency, and former director of the national data protection authority in Argentina.  Previously, Bertoni was the Special Rapporteur for Freedom of Expression of the Organization of American States, within the Inter-American Commission on Human Rights.

Thereafter, we're going to give the floor to Professor Fernand de Varennes, Adjunct Professor at the National University of Ireland and visiting professor at the Faculty of Law of the University of Hong Kong.  He was appointed United Nations Special Rapporteur on Minority Issues by the Human Rights Council and assumed his functions on August 1, 2017.

Last but certainly not least, we have Mariana Valente, Director of InternetLab, a leading Brazilian think tank dealing with human rights and technology.  She holds a doctorate, a master's degree, and a bachelor's degree from the University of São Paulo Law School, is a professor at (?) University, and has been a visiting researcher at the University of California, Berkeley.  Their CVs go on and on, but since we have a short time, I'll give the floor first to Eduardo and then to our two commentators.  Eduardo?

   >> EDUARDO BERTONI:  Thank you.  Thank you, Agustina.  Thank you for your kind presentation, and thank you very much for the invitation to present the paper that we contributed to developing with CELE.  I'm the representative of the Regional Office for South America of the Inter-American Institute of Human Rights, and this kind of research is typical of what an academic institution like the Inter-American Institute usually does.  The Inter-American Institute is almost 41 years old, and the regional office was established in 2009 in Uruguay, where I am right now.

I would say that the main rationale for the paper was that little has been done to identify the origin of misinformation and to evaluate the phenomenon in light of the specific obligations of certain sectors.  In the paper we mention that responses to misinformation on social media have mainly aimed at penalizing its authors or establishing liability for those who facilitated its dissemination.

For example, Internet companies, especially the large platforms, have numerous techniques and measures to address the phenomenon.  In the paper we concluded, first, that it would be wrong to attribute to social media an exclusive, and I highlight exclusive, role in the new disinformation crisis that impacts the information ecosystem.  I like to highlight exclusive because we're not denying the role of the platforms, but the paper is not focused on their role.

Second, we also concluded that disinformation has different impacts depending on who promotes it, and that public officials have special responsibilities regarding their speech, what they say.  This last point is actually the central object of the study: our research looks into some of the obligations of public officials, and also of other public persons such as candidates for public office, obligations to tell the truth and/or to take measures to avoid mistakes in the information they disseminate in the exercise of their duties.

Finally, and I don't want to go on too long before opening the discussion with my colleagues on the panel and with the people attending this session, we also concluded that public officials have a duty to tell the truth in their speech and expressions.  And it is not only an ethical duty; they also have legal duties, and this is what we demonstrated in our research.  For that reason, following the logic of this obligation, we suggested at the end of the paper some new possible lines of investigation in the search for solutions to the dissemination of disinformation and the harm to public debate.

This was a very, very short summary of what we did in the paper, and we are very open to discussion, comments, and conversation on the main issues that we included in the paper.  Thank you very much.

   >> AGUSTINA DEL CAMPO:  Thanks, Eduardo, for that introduction to the paper.  I give the floor to Fernand de Varennes.

   >> FERNAND DE VARENNES:  Distinguished speakers, ladies and gentlemen, let me first thank the organizers, and in particular the Centro de Estudios en Libertad de Expresión y Acceso a la Información (CELE), for the privilege and honor of being part of a distinguished group of experts dealing with some of the great challenges of recent decades, arising not only from misinformation but also from disinformation, hate speech, and even incitement to violence, including, most worryingly, calls to genocide, mainly targeting minorities.

I think we all know that this is dividing and even breaking up societies in many parts of the world, and my comments will be more from a global perspective rather than limited to Latin America.  And I think that conceptually, to narrowly present this as involving misinformation and censorship, kind of glosses over the closely related issues that we're facing.

We need to appreciate that what we're dealing with is not only misinformation, not only censorship, and not only disinformation on the Internet and human rights.  That is to say, the complexity we're dealing with goes beyond regulation and self-regulation, digital security, or fact-checking.  It requires all of those.  It's a kind of holistic approach that contains all of these dimensions and more.

First, I need to share very briefly some much-needed contextualization to more fully appreciate what we're facing.  Platform owners such as Facebook, Google, and others are among the world's new hyper(?); their business models and many of the algorithms they use are premised, as you all know, on creating rabbit holes and amplifiers of prejudice, racism, and disinformation, and they profit from it.  I think that beyond talking about the responsibilities of public officials, we need to understand what we're dealing with.

Social media platforms have had a very profitable free run for a long time.  Public officials, governments, and the world can no longer afford the wild west of misinformation, disinformation, and hate, largely without consequences or liabilities for the major platform owners.

And let me be very blunt here.  Social media literally profit from hate, and they profit from disinformation, essentially as a result of the algorithms they use in most of their business models.  They are also among the most profitable private enterprises in the world today, and they have little or no financial liability or responsibility.  This, I've suggested, needs to be addressed head on by public officials and others.  No private business should be immune from the harm and violence it directly contributes to and can unleash.  Currently, that is what they have.  They largely have immunity through the effects, among others, of the United States Communications Decency Act, legislation that generally provides immunity for website platforms with respect to third-party content, with a few exceptions dealing with copyright, of all things, or sex trafficking issues.

In other words, public officials are treating the major social media platforms as being allowed, without any serious financial consequences, to spread misinformation leading to violence and real harm in the real world, including atrocities and even calls to genocide, and they can do that with little or no liability, except for copyright issues.  Come on, something smells rotten, and it's not only in Denmark in this kind of context.

So, to be absolutely clear, while social media platforms offer people the opportunity to connect, share, and engage, an unfortunate and unhealthy side effect is that harmful and misleading content can go viral in a matter of minutes and spread to thousands and even millions before platform owners can catch it and even try, sometimes, to mitigate the effects.

I mention this because it means that governments and public officials need to go beyond self-regulation by the private platforms themselves, which is the preferred approach in many countries.  Self-regulation is necessary, but it's clearly not enough.  We saw it even just last week: some of you may have heard that Twitter announced it would start penalizing users who tweet private media or images of other users shared without their consent.  But because the policy is vague, recent reports in just the last few days have shown that Twitter's reporting and appeals process is unreliable, automated, and not able to judge when the policy is in fact being misused by racist or xenophobic groups.  To be blunt and clear, it's reported that far-right movements like the Proud Boys and QAnon have actually called on their followers to weaponize the new rules to target human rights activists who had posted about them.  And Twitter in fact went on, just last week, to suspend the accounts of groups that are actually defending human rights and trying to fight against racist and anti-Semitic groups, among others.  Once again, self-regulation here is clearly not enough.  Therefore, there need to be clearer steps taken on the government side, on the side of public officials, to address even calls to violence against minorities, and even calls to genocide, among others.

And I think it's important to remember that the genocide that we know as the Shoah, or the Holocaust, did not begin with gas chambers; it actually started with misinformation and disinformation against a minority.  The Nazis effectively used propaganda, misinformation, and disinformation to win the support of millions of Germans and to facilitate persecution and ultimately genocide, mainly of the Jewish and Roma minorities.

So social media platforms have become, through misinformation and disinformation, propaganda megaphones.  They now amplify intolerance and prejudice and spew propaganda of hate and racism, reaching almost immediately huge numbers of people and causing real harm, literally leading to individuals around the world being vilified, lined up, lynched, and in some parts of the world massacred because they belong, usually, to minorities.  The data we have in some countries suggests that three-quarters of hate speech on social media, let me repeat that, three-quarters of all hate speech, is in the end a form of misinformation and disinformation aimed at minorities.

I know the time is limited.  The use of social media played a large role in the persecution of minorities such as the Rohingya in Myanmar, and it led to massacres and the destruction of property for other minorities in Sri Lanka and, more recently, India.  Two points I would like to make, perhaps in closing: neither governments nor social media platforms, and when I say governments I also mean public officials, are doing nearly enough, despite some recognition and a few steps in the right direction.

Two dimensions are still missing: admission and liability.  Admission in the sense that almost none admit that the main victims of misinformation and disinformation, and also of hate speech, are overwhelmingly minorities, and that freedom of expression does not protect, or does not exclude responsibility for, certain forms of speech, such as hate speech that can constitute incitement to violence or discrimination.

There are forms of misinformation and disinformation which must be prohibited under international law and under treaties such as the International Covenant on Civil and Political Rights, whose Article 20 clearly states that any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence must be prohibited by law.  It is a legal obligation on governments, and it really has to be recognized as such and applied by public servants, which is not always the case.

And most media platforms seem to content themselves with bland statements about protecting everyone who is vulnerable, but doing that refuses to name who the main targets are.  You must act on misinformation and disinformation where it is most applied, and it is mostly applied in ways that target vulnerable groups, especially minorities.  Unless you name the evil for what it is, the evil propaganda that we're dealing with, you're not going to be able to do it.

Yes, it's complicated, but that's why I suggested in my last report to the UN General Assembly that the time has come for a global approach, and for leadership, to regulate hate speech, misinformation, and disinformation on social media, one which actually addresses some of the obligations that we have in international law and deals directly with the issue of the immunity of social media platforms for most of the harm that they cause.  A global regulatory approach, specifically a legally binding treaty, would make it possible to protect freedom of expression and fully recognize its dimensions, while providing guidelines on the forms of incitement which need to be regulated and limited, once again while protecting freedom of expression, but also tackling the anomaly of social media currently not being liable for any of the harm or damage they cause, contrary to what occurs with other types of media in the world.

My time is up.  There is much more that could be said, but we'll have opportunities to dwell more deeply on some of the issues which I've raised rather superficially in a few minutes.

   >> AGUSTINA DEL CAMPO:  Thanks for the comments.  One of the things that comes out from what you were saying is the complexity of misinformation and disinformation, particularly when they relate to minorities and hate speech, which in a lot of cases are, unfortunately, related, as you were saying.

I think one of the key intentions of the paper was precisely to look at this issue from a perspective that could bring to light complexities that we thought were not in the spotlight, and I agree with you that there are a number of concerns around how social media platforms deal with this kind of speech.  In my personal opinion, at least, the amount of energy being put toward finding solutions to the social media dilemma exceeds by far the amount of energy being put into what you mentioned, admission and liability, which probably should be part of this conversation as well.  Mariana, can I give the floor to you?

   >> MARIANA VALENTE:  Sure.  Thanks, Agustina, thanks Eduardo, Fernand.  Thank you for presenting and setting the stage for this discussion.  I wanted to start by briefly discussing what happened in Brazil in the past couple of years, because I think it's a good example of information disorders and the role of public agents.  And this is not just me saying so: in the past months in Brazil we had a congressional investigation that subpoenaed many people from the private sector and the public sector and conducted a very large documentary investigation to understand the role of public authorities in the immense number of deaths due to the pandemic in Brazil.  As many of you know, the pandemic hit us really hard.  We had more than 600,000 deaths in the country, and we had high-ranking officials, particularly the President, saying right from the beginning things like "this is just a little flu," and openly promoting medicines such as hydroxychloroquine and others shown to be ineffective by science.  The congressional investigation went into all of these episodes, which are all recorded, many of them on social media too, and one of its conclusions was that the Federal government as a whole contributed significantly to the disaster that took more than 600,000 lives, and that the authorities allowed for the deaths of Brazilians in the pandemic.

It's a really interesting document to look at.  The whole report has 1,200 pages, it's huge, and it identified many different crimes, including crimes against humanity, but it also covered conduct related to spreading disinformation and false information about the pandemic and established the responsibility of public officials.  Now the document has to go to the authorities responsible for prosecuting these people, and that is what's happening in Brazil right now.

I'm telling you that because I think it's a good example of how, of course, we're speaking of disinformation spreading through social media, and when we speak of disinformation online we're also speaking of the challenge of communications that are decentralized and coming from everywhere, but this episode shows the centrality that public authorities can have, especially in an emergency like the one happening now, and how that can really take people's lives, because in that situation people are looking for information, right.  They were looking to the authorities to just provide any information.

That was part of my presentation, but I also wanted to say that in the report Eduardo Bertoni mentioned there are some examples of how we do not actually have a very good framework in Brazil for dealing with this problem of public authorities spreading misinformation, or with their role in disinformation.  As many of you know, since last year we have been discussing in the country a bill that has been called the Fake News Bill, and it has now become something much broader, addressing not just the problem of disinformation but accountability on social media in general.  The way it stands right now, as it is being discussed in Congress, it has a whole chapter on what have been called public interest accounts, and there are two approaches to these accounts, looking into how public authorities express themselves on the Internet, because in many of these cases we're not speaking just of the Internet but we are looking at the Internet.  On one side there is the concern about social media blocking or restraining the discourse of these public authorities; the other concern relates to the behavior of the public authorities themselves, and there is a definition of which authorities we're speaking of and which accounts are going to be considered public interest accounts.  I'm referring to Article 22, in case you end up looking at the bill.  What the bill states is that because these are public interest accounts, first, there is the problem of blocking users and journalists, something that's happening everywhere; they're not allowed to block users because it's understood they're performing a public role.  But also, and this is one of the interesting things, they're subject to the constitutional principles governing public administration, and these are five constitutional principles in Brazil: legality, impersonality, morality, publicity, and efficiency.

This particular part of the bill has been seen as a way of stating that these public authorities have the duty to act according to all of those principles online, and to make sure they're not spreading information that in any way violates these principles.  In many cases, the examples I was giving could be considered examples of speech that violates these principles, so there is this idea that these are specific accounts, not accounts like all the others, and that they should be subject to these principles.

Another interesting thing is Article 23 of the bill, which states that those who hold elected office, as well as judges and members of the public prosecution office, are forbidden from receiving remuneration from advertising on their accounts.  That comes from the diagnosis that these public officers are many times receiving ad money from their activities online, and that could be an incentive for sensationalist speech, for trying to make a hit out of a certain post or a certain piece of information, instead of speaking according to the principles of public discourse.  I find that really interesting because I think it's an article that brings together both discussions, the responsibility and accountability of public officials and the architecture of the networks, and how all of those things play out together, and I think it's quite original in that sense.

There are a couple of other things I'd like to say, but I guess we could move to the debate.  There is one final remark I'd like to make.  I really appreciate that Fernand de Varennes mentioned the connection with hate speech and discrimination, because I think we're not aware of this enough.  And I think we have quite a few examples of that happening when we speak of public authorities.  We've been seeing many cases of online harassment fueled by the speech of public authorities, and that has also been the case during the pandemic and during the whole information crisis that Brazil is going through.  We're seeing high-ranking officials speaking of female journalists, for example, in very discriminatory manners, but also things that are more subtle.  We developed research this year about gendered disinformation in Brazil during the pandemic, and one of the things we saw was one of the discourses being spread: the President himself was saying for a while that he was immune because of his history as an athlete, and that started to be repeated many times by many people, that if you have an athletic background, if you're strong or healthy, you wouldn't get that "little flu," as he called it.  We were analyzing the discourses around it, and this had a very gendered perspective to it, one that really meant masculinity and strength.  That was also really interesting to see, and I just wanted to bring up those connections because I think they're really, really interesting too.  I give the floor back to Agustina.

   >> AGUSTINA DEL CAMPO:  Thanks, Mariana.  Picking up where you left off: we know that disinformation, misinformation, and fake news tend to circulate more and to have a more radical impact, particularly on minorities and vulnerable populations, when they come from public officials.  And what you were saying about the accounts, which accounts should be guided by which standards, which should be considered public interest accounts, and what rules we should apply to them, shows an existing tension between access to information, the need to know what public officials think, what they say, and how they argue for the policies they make, and the potential damage they can do when the information they spread is hate speech, misinformation, or disinformation that specifically impacts minorities or vulnerable populations.

And before I put my question to you and open it up to everyone else: by the way, please include your questions in the chat or raise your hand, and we'll definitely give you the floor to ask them.

My question to you is this: we all know what the principles are, and we know the standards that are out there.  There are international standards within the ICCPR and within the Inter-American Convention, and there are specific provisions, as Professor de Varennes was saying, against hate speech that constitutes incitement to violence and to discrimination.  And, as we've seen in the research we've done, there are legal rules within our domestic legal systems that prescribe the limits or the obligations that public officials should respect or abide by in their speech.  So what could be done, what can be done, to better implement these duties?  Should companies implement them for the state?  Should the state implement them by prosecuting public officials under the laws it already has?  Is this an ethical matter?  A lot of the regulation pertaining to public officials' speech is stated as ethical obligations, for example.  How do we move the conversation forward, paying particular attention to the impact that these discourses may have on vulnerable populations?  Eduardo, do you want to start?

   >> EDUARDO BERTONI:  Okay.  What you just said is the core of our research and of the paper.  But having said that, I have to mention that we did not include a definitive answer there on what should be done.  Let me give you some ideas that come to mind after hearing my colleagues on the panel, and you as well.

First of all, if someone asks me whether I'm in favor of regulating the platforms in a way that diminishes misinformation and disinformation, I would say yes, why not, because they are part of the problem and we need that.  Actually, I just published another paper on the responsibility of non-state actors for human rights violations, and also on the responsibility of states when they do nothing about it.  But that is not what I discussed in the paper that I presented today.

What I presented today is an attempt to find other ways to cope with the problem of misinformation, without denying the importance of paying attention to what the intermediaries, the Internet companies or other social media platforms, are doing, not denying that at all, but that is not the core of the paper.  The core of the paper is trying to find other ways to cope with the problem.

And it is very interesting, because look at what is going on these days in Colombia, just as an example: the government is promoting legislation to penalize people who say things that could damage the honor or reputation of public officials.  So there is a lot of effort coming from the government to penalize common citizens who say something that is, or could be, a mistake in reference to a public official.  I am not seeing, in any part of the world, a similar effort to penalize, as you say Agustina, public officials who know that they are lying publicly through social media platforms, creating all of the problems that Fernand and Mariana just described.  So what we worked on in the paper is saying: hey, pay attention to that.  Public officials have a duty, they have a responsibility; they cannot, just because they have some sort of coverage or immunity, say whatever they want, hate speech, lies, whatever, that affects a concrete community.  That is the core of the paper.  The core of the paper is trying to find something else, not only what Fernand presented in his report, with which I agree, namely to start thinking about some sort of regulation of platforms.

Actually, regarding what Fernand said about the immunity of the intermediaries, the Internet companies, particularly Section 230(c) of the Communications Decency Act in the United States: that kind of immunity was created in the U.S. and, as far as I understand it, it applies just to the U.S.  I mean, all the companies try to take cover behind Section 230, but in the U.S. today there is a very intense discussion around reforming Section 230.

My final words are that I am not, you know, an advocate for just any kind of regulation.  Of course, I'm advocating for good regulations, and I don't want to see more problems than solutions in the future because we started regulating these platforms, but that is for another debate.  Thank you.

   >> AGUSTINA DEL CAMPO:  Professor de Varennes, four minutes.

   >> FERNAND DE VARENNES:  We are seeing other governments persecuting human rights defenders and critics of government policies, and also critics of public officials, and you have the ironic situation where it's the human rights defenders and advocates who are being persecuted under legislation that is supposed to protect, one would think, freedom of expression and human rights in those countries.  So it's actually a phenomenon occurring right now in a number of countries.

I would also like to add that the devil is in the details.  Contrary to what we might believe, what freedom of expression is, the obligation to legally prohibit incitement to violence and discrimination, and also the other types of permissible restrictions necessary to protect public health or public order, are actually not yet very clear.  That's why, to avoid the kind of situations that Eduardo described, and to protect human rights defenders and critics of government, I think we do need a regulatory framework at the global level to set out more clearly what these human rights are in these very complex areas, because we haven't done it yet.  I've even seen social media platforms allege and claim that they are enshrining and incorporating human rights obligations in their approach and various mechanisms, when in fact they don't do it.  Very often, for example, they don't have a full recognition of what the prohibition of discrimination is in international human rights law.  So there is a lot that needs to be done to clarify this area.  Thank you very much.

   >> AGUSTINA DEL CAMPO:  Thank you.  Mariana?

   >> MARIANA VALENTE:  From my side, Agustina, one of the things I was thinking while preparing this is precisely how difficult it is to enforce such a measure.  For example, establishing that these are public interest accounts and that they should follow these principles: that's great, but what does it mean?  Does it mean that when platforms realize that constitutional principles are not being followed by public officials' accounts, they have the duty, the responsibility, to act?  That doesn't seem to be the case, but it could be one of the discussions, whether this is something social media should control.  Does it mean that someone can bring it to a court and have it determined that these principles haven't been followed?  If that's the case, is that going to have any effect?  Because we know that disinformation travels super fast, so is it about holding someone accountable after they said it?  We have seen that this is precisely the direction taken by politicians who deal in chaos, creating chaos; even if the true information comes out just a few days later, it doesn't matter.  It is the chaos that's the strategy.  Our time is almost up, and I just want to say that I don't have a good answer, but this is what we should be discussing, I think.

   >> AGUSTINA DEL CAMPO:  Well, our time is indeed almost up, and I want to thank you all.  I think we've at least tackled a few of the very complex issues that make up this whole disinformation dilemma, which in a number of conversations seems oversimplified at times, and I think in our time together we've been able to give the conversation a little more shade and a little more light, so as to make the debate a little deeper and a little stronger.  So, thank you very much, and I invite you to read the paper if you haven't; it's available in Portuguese, Spanish, and English.  Thanks, everyone, for joining.