1. Key Policy Questions and related issues:
(i) how the different stakeholders perceive bots and whether they see them as potentially having a positive influence on the imbalances caused by misinformation;
(ii) which approaches are best for deploying these tools, and whether the approach should differ depending on whether the misinformation originated from a malicious campaign or had a less intentional origin;
(iii) whether there are social risks associated with deploying bots to counter misinformation, in particular whether doing so restricts speech; in essence, whether people have a right to be wrong or to say something that might be wrong.
2. Summary of Issues Discussed:
The members of the workshop discussed the soundness of using automated tools and bots to counter disinformation. They highlighted the potential for positive uses in enabling and empowering the work of individuals dealing with disinformation campaigns. Their positive social impact in raising awareness and serving as media literacy tools was also noted.
The discussion also addressed the risks involved in deploying them: it was noted that these tools may limit speech and interfere with other individual rights.
The debate moved on to whether different technical approaches should be used to deal with the spread of misinformation (which is less intentional) and disinformation (which carries malicious intent). The participants seemed to agree that it was less a matter of approach or technical tools and more a matter of tactics: a coordinated campaign to spread disinformation would require a higher degree of coordination from the actors trying to stop its spread or to counter its deleterious effects.
The discussion then turned to the legitimacy of deploying such tools, and participants suggested that transparency and a human-centered approach were at the heart of the matter. Finally, there was a discussion of whether using these tools might run counter to other rights, such as a “right to be wrong” and to share views that may be considered wrong.
7. Reflection to Gender Issues:
Gender issues were only marginally addressed, through the consideration that disinformation and hate speech particularly affect women and gender-diverse groups. These groups suffer from coordinated inauthentic behaviour campaigns, especially during election periods, regardless of the region or country concerned.
Beyond this, there are numerous examples of how the use of artificial intelligence tools and processes discriminates on the basis of gender and race, which can pose a challenge and a threat to the deployment of such initiatives to counter disinformation.