IGF 2020 WS #287 Robots against disinformation - Automated trust building?

    Time
    Thursday, 12th November, 2020 (12:20 UTC) - Thursday, 12th November, 2020 (13:20 UTC)
    Room
    Room 3
    About this Session
    This workshop will use the format of a roundtable discussion to explore initiatives and tools currently being used to automate the countering of online disinformation while also highlighting the main challenges and opportunities of using helpful bots to fight harmful bots in the context of online disinformation.

    Organizer 1: Christopher Tuckwood, The Sentinel Project
    Organizer 2: Debora Albu, Institute for Technology and Society
    Organizer 3: Christian Perrone, ITS Rio

    Speaker 1: Christopher Tuckwood, Civil Society, Western European and Others Group (WEOG)
    Speaker 2: Debora Albu, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 3: Christian Perrone, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 4: Jan Gerlach, Civil Society, Western European and Others Group (WEOG)

    Additional Speakers

    Name: Jenna Fung 

    Affiliation: NetMission

    Region: Asia, Asia Pacific

    Stakeholder group: Civil Society


    We were missing an Asian perspective on the issue and therefore included a new speaker who is from the region and works on this issue there.

    Moderator

    Christian Perrone, Civil Society, Latin American and Caribbean Group (GRULAC)

    Online Moderator

    Debora Albu, Civil Society, Latin American and Caribbean Group (GRULAC)

    Rapporteur

    Debora Albu, Civil Society, Latin American and Caribbean Group (GRULAC)

    Format

    Break-out Group Discussions - Round Tables - 90 Min

    Online duration reset to 60 minutes.
    Policy Question(s)

    1) How are different stakeholders - governments, civil society, online platforms, media - involved in the issue of automated disinformation through the use of social bots?
    2) How can they address the challenge of countering the public debate imbalances caused by this phenomenon?

    In terms of issues, this session will focus on the problem of online disinformation campaigns, especially those implemented by sophisticated actors who use networks of automated social media accounts, commonly known as “bots,” to manipulate target populations. This is a significant challenge for many societies around the world, since disinformation erodes trust both between citizens as individuals and between citizens and institutions, whether of government or of society as a whole, such as the media. This threatens to undermine democracy even in countries where it is well established and can also pose a threat to peace and stability worldwide. The challenge is all the greater because the people and organizations spreading disinformation for political purposes often have resources and advantages that are difficult for their opponents to match. Governments and civil society organizations, for example, often lack access to the funding or technological tools needed to implement impactful counter-disinformation campaigns. Fortunately, there is also an opportunity here, since the same technology that can be used for the harmful purpose of disseminating disinformation is becoming increasingly accessible to other actors. If the right understanding of this challenge can be established, then anti-disinformation actors can more effectively develop the initiatives and policies needed to improve digital literacy and access to reliable information among beneficiary populations.

    SDGs

    GOAL 4: Quality Education
    GOAL 9: Industry, Innovation and Infrastructure
    GOAL 16: Peace, Justice and Strong Institutions

    Description:

    It has become almost common knowledge in some parts of the world that automation plays an influential role in the spread of disinformation globally, especially during electoral periods. The 2019 “Troops, Trolls, and Troublemakers” report identified organised manipulation campaigns in at least 48 countries, in which at least 30 political parties of different ideological alignments played central roles, using both social media channels and instant messengers. In light of this situation, it has become almost a default reaction for many stakeholders - including policymakers - to blame bots for the spread of disinformation.

    However, automation is not necessarily a negative factor in the dynamics of disinformation. The intelligent use of automated tools - drawing on algorithms, machine learning, and big data - can be a compelling and innovative way to monitor and counter disinformation campaigns more effectively. These technologies can help civil society organizations, academic researchers, journalists, and even members of the private sector to identify harmful content, analyse its effects, and create narratives that expose and bring transparency to the use of bots. Ultimately, such efforts can contribute to improved media literacy and access to reliable information.

    This workshop will use the format of a roundtable discussion to explore initiatives and tools currently being used to automate the countering of online disinformation while also highlighting the main challenges and opportunities of using helpful bots to fight harmful bots. As an interactive space, the session will promote an active dialogue with the participants beyond the brief introductory remarks by the speakers. A multi-stakeholder perspective will bring a diversity of views and insights from civil society, academia, journalists, and social media platforms, in order to understand disinformation as a complex problem that needs to be tackled by a multiplicity of actors, with the Global South as a starting point. This will ideally lead to consensus-based recommendations on the way forward.
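
    To make the kind of automation described above concrete, the following is a minimal, hypothetical sketch in Python of a rule-based “bot-likeness” score for social media accounts. All field names, weights, and thresholds are illustrative assumptions rather than any platform's real API or any specific tool discussed in this session; real detectors typically learn such signals from labelled data.

        from dataclasses import dataclass

        @dataclass
        class Account:
            # Illustrative fields only; real platforms expose different metadata.
            posts_per_day: float
            account_age_days: int
            duplicate_post_ratio: float  # share of posts copied from other accounts
            followers: int
            following: int

        def bot_likeness(acc: Account) -> float:
            """Return a 0..1 heuristic score; higher means more bot-like.

            Weights and thresholds are illustrative assumptions. Real detectors
            typically learn such signals from labelled data instead.
            """
            score = 0.0
            if acc.posts_per_day > 50:           # sustained high-volume posting
                score += 0.35
            if acc.account_age_days < 30:        # very young account
                score += 0.20
            if acc.duplicate_post_ratio > 0.5:   # mostly copy-pasted content
                score += 0.30
            if acc.following > 0 and acc.followers / acc.following < 0.01:
                score += 0.15                    # follows many, followed by few
            return min(score, 1.0)

        if __name__ == "__main__":
            suspect = Account(posts_per_day=120, account_age_days=10,
                              duplicate_post_ratio=0.8, followers=3, following=900)
            print(f"bot-likeness: {bot_likeness(suspect):.2f}")  # prints 1.00

    Even a sketch like this illustrates the session's framing: the automation identifies and surfaces suspicious accounts for human analysis rather than acting on them autonomously.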

    Expected Outcomes

    Disinformation is a global phenomenon which affects all sectors of society across a great array of actors, ranging from governments to activists, from NGOs to academia, and from journalists to everyday citizens. The use of automated tools (bots) most commonly marks an escalation, as malicious actors use this technology to disseminate disinformation at scale. The session proposed here is based on the understanding that bots can also have a positive effect. It therefore aims to host a high-level, multi-stakeholder discussion of the possible applications of such tools, the main risks involved in deploying them, and how they can help to advance media literacy. The session will thereby help to establish a consensus-based foundation and recommendations for how to proceed with both policies and active campaigns that use positive automation to counter harmful automation in the context of disinformation. Session participants will exchange their experiences with using such tools and will have the opportunity to build an international network of like-minded people and institutions working in this field, helping to establish continuous discussion and sharing of best practices moving forward.

    Interaction and participation are critical for the success of this session since it is intended to be a collaborative sharing of perspectives on the problem of online disinformation and the use of automated tools to address it. While the speakers and moderator have been selected for their relevant expertise, they will constrain their comments to relatively brief introductory remarks which set the context before a series of guiding questions are used to encourage other participants to share their thoughts. This will help to ensure that the session goes beyond a one-way flow of information and is truly able to incorporate multiple perspectives in order to move towards consensus-based recommendations on the use of automation for countering online disinformation.

    Relevance to Internet Governance: Disinformation (like other forms of misinformation) is a major threat to many societies precisely because increasing global digital connectedness makes it easier for such harmful content to spread. As an online phenomenon, it involves multiple actors with shared responsibilities: governments can act through regulation and public policies to reduce the harmful consequences of these dynamics; online platforms can change their internal policies - and even their design - to lessen the impact of disinformation; civil society can promote media literacy as a long-term strategy for critical information consumption; and, ultimately, individuals can act, for example, by reporting disinformation when confronted with it. In that sense, exploring disinformation as a socio-technical issue means investigating the challenges and opportunities it presents for shaping internet governance, on a topic that is likely to remain a critical agenda item for all stakeholders with an interest in good internet governance.

    Relevance to Theme: One of the main risks presented by online disinformation is that it erodes societal trust as citizens lose confidence that online content is being created and shared by authentic actors. This calls into question the trustworthiness of conventional media content and announcements from other institutions, with negative impacts on public discourse and citizen decision-making. It is therefore important to focus on how disinformation affects online trust, what can be done to address this situation, and how automation fits into both sides of this contest. Ultimately, the internet's original promise as a tool for empowerment and free communication has been threatened by disinformation, especially around political events such as elections. Many democracies globally have been affected by the artificial manipulation of public discourse, with the online arena as its locus. It is crucial to restore this original essence of the internet as a place of collaboration and freedom.

    Online Participation


    Usage of IGF Official Tool.


    1. Key Policy Questions and related issues
    (i) how the different stakeholders perceive bots, and whether they see them as potentially having a positive influence on the imbalances caused by the phenomenon;
    (ii) which approaches are best for applying these tools, and whether they should differ depending on whether the misinformation originated from a malicious campaign or had a less intentional origin;
    (iii) whether there are social risks associated with deploying bots to counter misinformation, and whether doing so restricts speech - in essence, whether people have a right to be wrong or to say something that might be wrong.
    2. Summary of Issues Discussed

    The members of the workshop discussed the soundness of using automated tools and bots for countering disinformation. They highlighted the potential for positive uses in enabling and empowering the work of individuals dealing with disinformation campaigns. The positive social impact of these tools in raising awareness and serving as media literacy aids was also noted.

    The discussion also addressed the risks involved in deploying them. It was mentioned that these tools may limit speech and interfere with other individual rights.

    The debate then moved on to whether different technical approaches should be used to deal with the spread of misinformation (less intentional) and disinformation (spread with malicious intent). The participants seemed to agree that it was less a matter of approach or technical tools and more a matter of tactics: a coordinated campaign to spread disinformation would require a higher degree of coordination from the actors trying to stop its spread or counter its deleterious effects.

    The discussion then turned to the legitimacy of deploying such tools, and participants suggested that transparency and a human-centered approach were at the heart of the matter. Finally, there was a discussion of whether using these tools might run counter to other rights, such as a “right to be wrong” and to share views that may be considered wrong.

    3. Key Takeaways

    The key takeaways from the workshop concerned the soundness of deploying bots to counter disinformation, the instances in which these tools can be deployed, the policies needed to mitigate risk, and the basis and criteria on which to assess their efficacy and legitimacy.

    The first takeaway is that bots and automated tools can play a role in fighting disinformation and can be innovative and compelling ways to address this multifaceted phenomenon. Using them to identify and monitor instances of disinformation tends to be the most effective application and the one least prone to risk, and it presents an important opportunity to concentrate resources on the instances where human oversight is most crucial. When used directly to moderate speech, they carry a higher risk of limiting rights such as freedom of expression and access to information.
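
    As a minimal sketch of what this takeaway could look like in practice - assuming a hypothetical classifier score, since no specific tool was named in the session - automation here only flags and ranks content for human review and never removes it on its own:

        REVIEW_THRESHOLD = 0.7  # illustrative value; in practice tuned empirically

        def triage(posts, score_fn):
            """Queue suspect posts for human review instead of auto-moderating.

            `score_fn` stands in for any disinformation classifier; this sketch
            deliberately never deletes or downranks content on its own, so the
            final decision always rests with a human reviewer.
            """
            flagged = [(score_fn(post), post) for post in posts]
            flagged = [item for item in flagged if item[0] >= REVIEW_THRESHOLD]
            # Highest-risk items first, so reviewer time goes where it matters most.
            flagged.sort(key=lambda item: item[0], reverse=True)
            return flagged

    Keeping the automated step advisory, as above, is what distinguishes the low-risk identification-and-monitoring use from the higher-risk direct moderation of speech.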

    The deployment of any such tool should be accompanied by transparency efforts: explaining the inner workings of the tools, the criteria they follow, and the effects they produce is of significant importance.
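
    One lightweight way to operationalize this transparency, sketched here as an assumption rather than an agreed mechanism, is for every automated action to emit a machine-readable record of the criteria that triggered it and the action taken, so that the tool's behaviour can be audited after the fact:

        import json
        from dataclasses import dataclass, asdict
        from datetime import datetime, timezone

        @dataclass
        class TransparencyRecord:
            post_id: str
            criteria_triggered: list  # e.g. ["high posting volume", "duplicate content"]
            score: float
            action_taken: str         # e.g. "queued for human review"
            timestamp: str

        def log_action(post_id, criteria, score, action):
            """Emit an auditable record for every automated intervention."""
            record = TransparencyRecord(
                post_id=post_id,
                criteria_triggered=criteria,
                score=score,
                action_taken=action,
                timestamp=datetime.now(timezone.utc).isoformat(),
            )
            # A published (or regulator-accessible) log is what makes the
            # tool's criteria and effects explainable after the fact.
            print(json.dumps(asdict(record)))
            return record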

    The legitimate use of bots may depend not only on how they are used and on their objectives but also on the actors deploying them. Public administrations should be held to a higher standard, deploying them only in instances where their use can be justified. Social media platforms should also be held to account when implementing such tools and processes. The imbalance of power is a significant factor and raises the social risks associated with their application.

    4. Final Speakers

    Speaker 1: Christopher Tuckwood, Civil Society, Western European and Others Group (WEOG)
    Speaker 2: Debora Albu, Civil Society, Latin American and Caribbean Group (GRULAC)
    Speaker 3: Jenna Fung, Civil Society, Asia and Asia Pacific Group, Affiliation: NetMission
    Speaker 4: Jan Gerlach, Civil Society, Western European and Others Group (WEOG)
    Moderator: Christian Perrone, Civil Society, Latin American and Caribbean Group (GRULAC)

    5. Reflection on Gender Issues

    Gender issues were only marginally addressed, through the consideration that disinformation and hate speech particularly affect women and gender-diverse groups. These groups suffer from coordinated inauthentic behaviour campaigns, especially during election periods, regardless of the region or country concerned.
    Beyond this, there are numerous examples of how artificial intelligence tools and processes discriminate on the basis of gender and race, which can pose a challenge and a threat to the deployment of such initiatives to counter disinformation.

    6. Session Outputs

    There is agreement among the speakers that the use of automation, bots, and artificial intelligence tools to counter disinformation has to be human-centered and supervised. It should also be noted that this usage concentrates on the phases of identification and filtering, not on the phase of responding to disinformation - or misinformation - campaigns.