IGF 2023 Lightning Talk #160 A New Process To Tackle Misinformation On Social Media

Time
Thursday, 12th October, 2023 (03:20 UTC) - Thursday, 12th October, 2023 (03:50 UTC)
Room
SC – Room H
Subtheme

Cybersecurity, Cybercrime & Online Safety
Disinformation
Misinformation

Organizer

Independent Researcher

Speakers

Kamesh Shekar, 2023 Youth Ambassador, Internet Society, Civil Society

Kazim Rizvi, Founding Director, The Dialogue, Civil Society

Onsite Moderator

Pranav Tiwari, Empowerment Advisor, Internet Society, Civil Society

Online Moderator

Shruti Shreya, Programme Manager, The Dialogue, Civil Society

Rapporteur

Jameela Sahiba, Programme Manager, The Dialogue, Civil Society

SDGs

9.4

Targets: The suggested solution enhances trustworthiness.

Format

Presentation + Moderated discussion

Duration (minutes)
30
Language

English

Description

While misinformation and disinformation are not new threats, they are accelerated by social media platforms. High-stakes information, such as election- and health-related information, has critical real-life consequences for individuals and communities, yet it is increasingly muddled with mis/disinformation. At present, the responsibility for establishing policies to tackle dis/misinformation is distributed across many actors, including social media platforms, governments, users (particularly on decentralised platforms), multistakeholder groups, intergovernmental organisations, etc. However, the role of social media platforms themselves in tackling the information disorder needs greater attention, as their interventions raise questions of competence and carry unintended consequences, such as infringing freedom of expression and discretion over dissent.

Many centralised and decentralised platforms currently rely on content-level 'hard' moderation, defined by Gorwa et al. as "only systems that make decisions about content and accounts", to reduce the spread of misleading or fake information. Platforms use various technological measures such as word filters, automated hash-matching, geo-blocking, content IDs, and other predictive machine-learning tools to detect unlawful content like child sexual abuse imagery, pornography, dis/misinformation, etc. Platforms then make decisions about the detected content and the accounts involved, using either human moderators or algorithms. These measures have their merits, especially where platforms can act faster and at scale. For instance, Facebook's transparency report claims that it actioned 95.60% of hate speech before users flagged it (though this figure is contested). At the same time, we increasingly see content falling through the cracks due to false negatives and being taken down due to false positives. For instance, despite efforts by social media platforms to flag false information about election integrity, the U.S. Capitol was attacked on 6 January 2021 in part due to the spread of dis/misinformation. Conversely, false positives can hamper freedom of expression and opinion: Facebook famously took down the award-winning photograph of a naked girl fleeing napalm bombs during the Vietnam War.

One critical reason posts fall through the cracks is that platforms are presently confined to content-level intervention, without process-level clarity or intervention within the content moderation pipeline. This lack of process-level intervention causes platforms to use resources and time inefficiently. The status quo therefore calls for a proactive, not merely reactive, content moderation process and the means to implement it efficiently. Against this backdrop, in this talk I propose one such novel process-level intervention that would refine the content moderation pipeline and enable the efficient use of tools and resources across its entirety. While the process-level intervention discussed during the talk can alleviate false negatives and false positives through more efficient use of resources, it may not perform any better at eradicating borderline content, such as highly contextual mis/disinformation.

What is the process-level intervention discussed during the talk?
With mounting pressure from governments and individuals to tackle narrative harms, platforms resort to hard content moderation yet face the problem of scale. Scale is less of a problem with soft moderation ("recommender systems, norms, design decisions, architectures"), in part because platforms already use data points on individuals to recommend prevalent content that aligns with their preferences. Here, prevalence is not just popularity but the ranking of content within individuals' preferences. Since platforms can already use such prevalence-based systems for real-time ranking and recommendation as a form of soft moderation, in my talk I propose a "prevalence-based gradation process" (PBG): a system that uses prevalence as an integral element of hard moderation to tackle mis/disinformation. The talk will show how the PBG process would let social media platforms evaluate content using ex-ante measures and exercise optimal corrective action in a calibrated format, adjusted according to the exposure level of the information. Here too, the exposure and prevalence level of information is not just about popularity but about calibrated bucketing of information. Moreover, the PBG process would progressively streamline the hard moderation pipeline platforms presently follow, as discussed below.

Bucketing content based on prevalence - Currently, platforms follow plain hard content moderation without much granular gradation, which prevents them from efficiently directing their limited resources to the most serious concerns. While platforms categorise election-related information, health information, etc., as high-stakes information needing additional restrictions and scrutiny, within that high-stakes information they typically do not prioritise content according to its reach and prevalence. This is suboptimal: high-stakes information at two different prevalence levels should not be treated the same way. Platforms therefore need to use the data they collect on individuals, such as follower counts, likes, shares, and comments on their content, to bucket information within a gradation matrix. During my talk, I will showcase a simple prevalence-based gradation matrix that uses data points such as the number of followers, likes, and shares to bucket high-stakes information by its level of prevalence, and its chances of becoming prevalent, from high to low (a minimal illustrative sketch follows below). While the matrix discussed during the talk is a simple depiction, platforms can create a more nuanced and complex matrix given the amount of data they collect.

Evaluating high-prevalence information: ex-ante scrutiny - For many social media platforms, ex-ante scrutiny means acting on inappropriate content before individuals flag it, or screening content before it is made public or shared. Pre-screening almost every piece of information on the platform is neither pragmatic nor proportionate, and infringes on individuals' freedom of expression. However, as the prevalence of shared or public information increases, so does its importance, and so does the responsibility to scrutinise it proportionately.
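To make the gradation idea concrete, the following is a minimal sketch, in Python, of how a prevalence-based gradation matrix and an ex-ante review queue might be wired together. The bucket names, score weights, thresholds, and field names are all illustrative assumptions, not the actual matrix presented in the talk or any platform's real pipeline.

```python
# Minimal, hypothetical sketch of a prevalence-based gradation matrix.
# All weights, thresholds, and bucket names are illustrative assumptions.
from dataclasses import dataclass
from enum import IntEnum


class PrevalenceBucket(IntEnum):
    LOW = 0
    MODERATE = 1
    HIGH = 2
    VERY_HIGH = 3
    SEVERELY_HIGH = 4


@dataclass
class Post:
    author_followers: int
    likes: int
    shares: int
    is_high_stakes: bool  # e.g. election- or health-related content


def prevalence_score(post: Post) -> float:
    """Combine engagement signals into a single prevalence score.
    The weights are placeholders; a platform would tune them on its own data."""
    return 0.5 * post.author_followers + 2.0 * post.likes + 5.0 * post.shares


def bucket(post: Post) -> PrevalenceBucket:
    """Map a prevalence score onto the gradation matrix (hypothetical cut-offs)."""
    score = prevalence_score(post)
    if score >= 1_000_000:
        return PrevalenceBucket.SEVERELY_HIGH
    if score >= 100_000:
        return PrevalenceBucket.VERY_HIGH
    if score >= 10_000:
        return PrevalenceBucket.HIGH
    if score >= 1_000:
        return PrevalenceBucket.MODERATE
    return PrevalenceBucket.LOW


def needs_ex_ante_review(post: Post) -> bool:
    """High-stakes content in the upper buckets is queued for ex-ante scrutiny;
    lower buckets stay on watch rather than being pre-screened."""
    return post.is_high_stakes and bucket(post) >= PrevalenceBucket.HIGH
```

In practice, a platform would replace the linear score with signals from its own ranking and recommender systems and tune the cut-offs empirically; the sketch only illustrates the design point that high-stakes content is routed to heavier scrutiny as its actual or likely reach grows.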
Therefore, instead of traditional pre-screening, the talk discusses how platforms should ideally act before harm is caused, in a calibrated way that depends on the level of information prevalence (the proposed model treats this as 'ex-ante'). For instance, information in buckets ranging from severely high to high prevalence within the simple gradation matrix discussed during the talk should be subjected to ex-ante scrutiny, i.e., evaluating that prevalent high-stakes information to find disinformation and misinformation, while information in buckets with chances of severely high to high prevalence stays on high alert. While it is admittedly difficult to determine whether information is misleading or fake, this process could give platforms more resources and time to determine the veracity of prevalent high-stakes information, which is the greater concern.

Responsive corrective action: calibration of enforcement - Suppose the ex-ante scrutiny proves the prevalent high-stakes information misleading or false. In that case, platforms must take proportionate corrective actions to control its spread and harm. Currently, however, platforms exercise a limited range of corrective actions that do not necessarily align with their hard moderation enforcement goals, and sometimes turn out excessive. For instance, platforms rely heavily on content takedowns, user account blocking, and content flagging to achieve almost every hard moderation enforcement goal. During the talk, I therefore emphasise that platforms must have a variety of corrective actions aligned with the different enforcement goals of hard moderation, such as a moratorium for deterrence, flagging for misleading content, etc. The harms recognised for corrective action should be both tangible, like financial and health repercussions, and intangible, like emotional, psychological, and reputational harms. Moreover, the talk also argues that the causation of harm should not be the only trigger for a platform to act. Corrective actions must be ex-ante, as discussed, i.e., acting before harm is caused by the prevalent misleading/fake information, and, to an extent, speculative, i.e., addressing concerns that could arise in the long term, for example by keeping information in the "chances of high prevalence" bucket on high alert. The talk also shows that corrective actions must be calibrated so that information at two different prevalence levels is subjected to appropriately different corrective action. For instance, information in the severely high prevalence bucket should be treated differently from information in the very high prevalence bucket and subsequent buckets: information in the highly prevalent bucket can start with the platform flagging and masking the information as misleading/fake and stopping people from sharing it further, and as the same information moves into the very high prevalence bucket, more severe actions can be taken (a sketch of such calibrated escalation appears after the description below).

Conclusion: Though the intention behind social media platforms' technological measures is to increase accuracy and agility, we increasingly see the opposite. My talk therefore emphasises that it is critical for social media platforms to adopt a prevalence-based gradation process. The talk will discuss how this process helps tackle false negatives, cutting off positive reinforcement for negative behaviour, i.e., posting and sharing misinformation and disinformation.
It will also cover how the process creates a more transparent environment where users understand the measures platforms enforce according to prevalence. I will also discuss how this process indirectly aids platforms in reducing false positives, since it gives them time and resources to better understand the context of the information. A gradation matrix can also help platforms use their resources efficiently when monitoring content for narrative harms, spending the most time and resources on the most serious issues. As a way forward, the talk will conclude by emphasising the need for platforms to adopt this process, or any other process-level intervention, to strengthen the foundation that underpins the Internet's success, i.e., trustworthiness, by intervening in the application layer of the Internet. Link: https://techpolicy.press/a-new-process-to-tackle-misinformation-on-soci…
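As a companion to the sketch above, the following hypothetical snippet illustrates the calibrated escalation of corrective actions by prevalence bucket discussed in the talk. The specific actions, their ordering, and the bucket names are assumptions for illustration only, not a prescribed enforcement policy.

```python
# Hypothetical mapping from prevalence bucket to calibrated corrective actions.
# Action names and ordering are illustrative assumptions, not a platform policy.
from enum import IntEnum


class PrevalenceBucket(IntEnum):  # repeated here so this sketch is self-contained
    LOW = 0
    MODERATE = 1
    HIGH = 2
    VERY_HIGH = 3
    SEVERELY_HIGH = 4


CORRECTIVE_ACTIONS: dict[PrevalenceBucket, list[str]] = {
    PrevalenceBucket.LOW: ["monitor"],
    PrevalenceBucket.MODERATE: ["monitor", "queue for fact-check"],
    PrevalenceBucket.HIGH: ["flag as misleading", "mask preview", "disable sharing"],
    PrevalenceBucket.VERY_HIGH: ["flag as misleading", "disable sharing", "downrank"],
    PrevalenceBucket.SEVERELY_HIGH: ["remove content", "temporary account moratorium"],
}


def corrective_actions(bucket: PrevalenceBucket, confirmed_false: bool) -> list[str]:
    """Escalate enforcement with prevalence: the same confirmed-false item attracts
    progressively stronger actions as it moves into higher buckets, while items
    not yet confirmed false stay on watch instead of being pre-emptively removed."""
    if not confirmed_false:
        return ["keep on high alert"] if bucket >= PrevalenceBucket.HIGH else ["monitor"]
    return CORRECTIVE_ACTIONS[bucket]
```

The mapping is intentionally simple; the design point is only that enforcement strength grows with prevalence, rather than a single blunt takedown response being applied to every case.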

The session format is structured with both onsite and online attendees in mind, so that everyone, irrespective of the medium, is treated equally and can gain maximum insight from the session. We will make the session interactive by allotting sufficient time for attendees' discussion and contributions on the topics. Following the presentation, the floor will open for a moderated discussion where attendees can put their comments, interventions, research ideas, etc., to the forum. Attendees will also be encouraged to pose questions to the speakers and authors of the report. The onsite moderator will encourage both online and onsite attendees to contribute to the discussion, giving each an equal chance, and the online moderator will keep the Zoom chat live and active by stimulating conversation.