IGF 2019 WS #85 & WS #268: Misinformation, Responsibilities & Trust

Organizer 1: Amrita Choudhury, CCAOI
Organizer 2: Nadira Al-Araj, Internet Society
Organizer 3: Anju Mangal, Pacific Community
Organizer 4: Yik Chan Chin, Xi'an Jiaotong-Liverpool University
Organizer 5: Arthur Gwagwa, Centre for Intellectual Property and Information Technology Law, Strathmore University Kenya

Speaker 1: Minna Horowitz, Civil Society, Western European and Others Group (WEOG)
Speaker 2: Arthur Gwagwa, Civil Society, African Group
Speaker 3: Ansgar Koene, Technical Community, Western European and Others Group (WEOG)
Speaker 4: Shu Wang, Private Sector, Asia-Pacific Group
Speaker 5: Yongjiang Xie, Civil Society, Asia-Pacific Group
Speaker 6: Amrita Choudhury, CCAOI, Civil Society, Asia-Pacific Group
Speaker 7: Walid Al-Saqaf, Technical Community, Middle East
Speaker 8: Michael Ilishebo, Government Sector, African Group

Moderator

Yik Chan Chin, Civil Society, Asia-Pacific Group

Online Moderator

Nadira Al-Araj, Technical Community, Asia-Pacific Group

Rapporteur

Yik Chan Chin, Civil Society, Asia-Pacific Group

Format

Panel - Auditorium - 90 Min

Policy Question(s)

1. What are the reasons for the proliferation of misinformation and fake news in different countries and regions? Are the current challenges of misinformation and fake news, including their manifestations, effects and the reactions to disinformation, similar across different nations and regions?

2. Are the initiatives (policy, technical, capacity building, other) taken so far by different stakeholders, especially Internet platforms and governments, to curb the spread of misinformation and fake news globally, regionally and within nations adequate?

3. Is it possible to moderate misinformation and fake news through government and private actors while ensuring freedom of expression and the privacy of users? How can trust and accountability in Internet platforms and government interventions be upheld or restored?

4. Are there any best practices and approaches that may be adopted to counter misinformation spread through private messaging and social media platforms, in light of freedom of speech and the legal certainty those platforms need? How can such practices and approaches be upheld?

5. Beyond governments and intermediaries, is there a role for the multistakeholder process and for technology (e.g. algorithms and bots) in mitigating disinformation and fake news?

SDGs

GOAL 8: Decent Work and Economic Growth
GOAL 9: Industry, Innovation and Infrastructure
GOAL 10: Reduced Inequalities
GOAL 16: Peace, Justice and Strong Institutions

Description: 

The creation, dissemination and accumulation of information is one dimension of structural power. The vast majority of conflicts today are not fought by nation states and their armies; increasingly, they are fought not with conventional weapons but with words. A specific sort of weaponry, "fake news" and viral disinformation online, has been at the center of policy discussions, public debates and academic analyses in recent years (Horowitz, 2019). Technologies, including digital platforms, that enable connection and participation can also be used for misinformation and fake news. In addition, what has been called the "emerging information arms race" (Posetti & Matthews, 2018, July 23) is plaguing mature and emerging democracies alike (Horowitz, 2019). A variety of approaches have been adopted in different regions and nations to fight misinformation and fake news: from content interventions (fact-checking and filtering) and technical interventions (dedicated anti-rumor platforms, algorithms) to economic interventions (undermining advertising sources) and legal interventions (civil and criminal penalties). Different stakeholders, including state actors, NGOs, platforms and news media, are involved. However, it is important to determine:

How effective are those approaches? What are the shared policy principles, norms and mechanisms across regions and nations?
What are the responsibilities of actors such as Internet platforms and government regulators?
What roles do technologies (e.g. algorithms and bots) and other factors play in the process?
What are the best practices in light of freedom of speech and the necessary neutrality and legal certainty for platforms?
How can we restore and uphold public trust in Internet platforms, news media and politics? How can we hold actors accountable for their interventions?

In this session, speakers from different regions and nations, including the UK, Finland, China, India, Africa, the Middle East and Latin America, will discuss the above questions from diverse geographic and stakeholder perspectives.

Government Sector

Mr. Michael Ilishebo, Law Enforcement Officer, Zambia Police Service, Africa

Private Sector

Mr. Shu Wang, Deputy Editor, Sina Weibo, China

Civil Society Sector

Professor Yongjiang Xie, Beijing University of Posts and Telecommunications

Dr. Minna Horowitz, Docent, University of Helsinki; Expert in Digital Rights and Advocacy, Center for Media, Data, and Society, Central European University

Mr. Arthur Gwagwa, Centre for Intellectual Property and ICT Law: Strathmore Law School, Kenya, Africa

Ms. Amrita Choudhury, CCAOI, India

Technical Sector 

Dr. Ansgar Koene, Chair of the IEEE P7003 Standard for Algorithmic Bias Considerations Working Group; Senior Research Fellow, Horizon Digital Economy Research, University of Nottingham

Dr. Walid Al-Saqaf, Senior Lecturer at Södertörn University, Stockholm, and member of the Internet Society Board of Trustees

Expected Outcomes: 

1) Facilitate debate and help shape the evolution of norms, principles and best practices for refuting online disinformation and fake news, and models of Internet governance.
2) Identify differing viewpoints on Internet governance approaches to AI, to help create an environment in which all stakeholders can prosper and thrive.
3) Produce policy recommendations and a key messages report for the IGF community.
4) Foster collaboration amongst speakers from different stakeholder sectors on refuting fake news and disinformation and on related research.

The session will be opened by the onsite moderator, who will give participants an overview of the policy questions to be discussed, the professional backgrounds of the speakers, and the format of interaction. In the second part, the session will move to discussion and debate. The moderator will invite each speaker to express their views on a set of questions and will guide the debate amongst speakers and the audience to foreground their common ground and differences. The workshop organizers and moderators will discuss the questions with the speakers in advance to ensure the quality and flow of the discussion. The moderator will ensure that both the onsite and online audience can ask questions of the speakers immediately following their contributions, to encourage active participation. In the third part, the moderators will invite questions from the audience and online participants; this question time will last about 30 minutes to allow sufficient interaction amongst speakers, the audience and online participants. Online participants will be given priority to speak, and their participation will be encouraged by the moderators. The onsite moderator will summarise the panel's findings, recommendations and future actions.

 

Relevance to Theme:

Research by various organisations, including the Internet Society, indicates that many Internet users are losing trust in the Internet. Misinformation, fake news, hate speech, concerns about users' data privacy and the role of intermediaries are among the greatest contributors to this growing trust deficit. The panel aims to discuss misinformation and its impact on nations and individuals, review the countermeasures taken so far, and look beyond the problems of misinformation in the digital age towards ideas and solutions for countering the issue. The session also seeks to give participants an opportunity to share and explore their current concerns, discuss the adequacy of the regulations being introduced by governments and the steps taken by intermediaries, and think of new models and solutions that will help create new ways of sharing information that is authentic, does not cause widespread harm, and helps rebuild trust and accountability.

Relevance to Internet Governance:

The session will discuss the timely issues of fake news and misinformation, information security and online safety, the responsibility and accountability of digital platforms, and the role of government regulation and of trust in platforms and government in Internet governance. It will involve stakeholders from the private sector, civil society and the technical community, from both developed regions (the EU) and developing regions (China and Africa), who will share their professional knowledge, experiences, best practices and policy frameworks for regulating misinformation and fake news. The proposed session will facilitate global debate and help shape the evolution of norms, principles and best practices for mitigating online disinformation and fake news, and models of Internet governance.

Online Participation

The online moderator will take part in the online training course for the Official Online Participation Platform provided by the IGF Secretariat's technical team to ensure the online participation tool is used properly and smoothly during the proposed session.

 

Agenda

Setting the scene: Onsite moderator, Dr. Chin, 3 minutes

Policy Question Discussion:

The moderator will invite speakers to answer five policy questions in turn. Each policy question will be discussed amongst the speakers for 10 minutes, with 1 minute for immediate audience response.

Free discussion and comments amongst speakers for 10 minutes, moderated by Dr. Chin

Speakers include:

Government Sector

Mr. Michael Ilishebo, Law Enforcement Officer, Zambia Police Service, Africa

Private Sector

Mr. Shu Wang, Deputy Editor, Sina Weibo, China

Civil Society Sector

Professor Yongjiang Xie, Beijing University of Posts and Telecommunications

Dr. Minna Horowitz, Docent, University of Helsinki; Expert in Digital Rights and Advocacy, Center for Media, Data, and Society, Central European University

Mr. Arthur Gwagwa, Centre for Intellectual Property and ICT Law: Strathmore Law School, Kenya, Africa

Ms. Amrita Choudhury, CCAOI, India

Technical Sector

Dr. Ansgar Koene, Chair of the IEEE P7003 Standard for Algorithmic Bias Considerations Working Group; Senior Research Fellow, Horizon Digital Economy Research, University of Nottingham

Dr. Walid Al-Saqaf, Senior Lecturer at Södertörn University, Stockholm, and member of the Internet Society Board of Trustees

Interactive question and answer session: 30 minutes, moderated by Dr. Chin and Ms. Al-Araj

Wrap-up: Onsite moderator, Dr. Chin, 2 minutes

1. Key Policy Questions and Expectations

Policy questions:

  1. Reasons behind the proliferation of misinformation and fake news; similarities and differences in its manifestation across countries and regions.
  2. Adequacy of the initiatives taken so far.
  3. Balancing moderation by government and private actors to curb misinformation while ensuring freedom of expression and user privacy; building trust and accountability in Internet platforms and government interventions.
  4. Best practices and approaches to counter misinformation while ensuring freedom of speech, neutrality and legal certainty.
  5. Role of the multistakeholder process in mitigating disinformation and fake news.

Expectations:

  1. Facilitate a debate to shape the evolution of norms, principles and best practices for refuting online disinformation and fake news, and models of Internet governance.
  2. Identify differing viewpoints and approaches on using AI to curb misinformation.
  3. Produce policy recommendations and a key messages report for the IGF community.
  4. Foster collaboration amongst speakers and participants on refuting fake news and disinformation and on related research.

 

2. Summary of Issues Discussed

The panel discussed four policy questions:

1) the reasons for the proliferation of misinformation and fake news in different countries and regions;

2) the initiatives (policy, law, technical, capacity building) and best practices adopted so far by different stakeholders to curb the spread of misinformation and fake news globally, regionally and within nations;

3) the role of government in moderating misinformation and fake news while ensuring freedom of expression and the privacy of users;

4) policy recommendations and suggestions for better approaches and solutions.

3. Policy Recommendations or Suggestions for the Way Forward

Understand the nuances of the misinformation phenomenon. Consider how different regional, national and local contexts, demographics and platforms interact with information.

A three-level analysis is needed to understand and address the issues: macro (states, political and legal systems), meso (national media, civil society) and micro (individuals).

Multistakeholder and multidisciplinary approaches are needed. While technology (e.g. blockchain) can help refute misinformation, it cannot solve complex societal issues.

Media literacy and fact-checking need to be promoted. Governments and industry should promote media literacy as part of regulating and refuting the dissemination of misinformation and fake news.

Governance measures such as collaboration, self-governance, developing quality journalism and fact-checking are more important than regulatory tools. Law should be the last resort.

 

 

4. Other Initiatives Addressing the Session Issues

Fact-checking project: A panelist created a blockchain-based global registry of fact checks produced by fact checkers: https://faktaassistenten.sh.se/?p=240
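The report and the linked page do not describe the registry's internal design. As a purely illustrative sketch, the core idea of a tamper-evident, append-only log of fact checks can be shown in a few lines; the record fields, class names and hashing scheme below are assumptions for illustration, not the project's actual implementation.

```python
# Illustrative sketch only: a minimal append-only, hash-chained registry of
# fact-check records. Field names and structure are hypothetical and do not
# reflect the panelist's actual system.
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class FactCheckRecord:
    claim: str            # the claim being checked
    verdict: str          # e.g. "false", "misleading", "true"
    source_url: str       # where the fact check was published
    checker: str          # organisation that produced the check
    timestamp: float = field(default_factory=time.time)
    prev_hash: str = ""   # hash of the previous record in the chain

    def digest(self) -> str:
        """Hash the record contents together with the previous hash."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()


class FactCheckRegistry:
    """Append-only registry: each record commits to its predecessor's hash,
    so tampering with an earlier entry invalidates every later one."""

    def __init__(self) -> None:
        self.records: list[FactCheckRecord] = []
        self.hashes: list[str] = []

    def append(self, record: FactCheckRecord) -> str:
        record.prev_hash = self.hashes[-1] if self.hashes else "genesis"
        h = record.digest()
        self.records.append(record)
        self.hashes.append(h)
        return h

    def verify(self) -> bool:
        """Recompute the chain and check that no record was altered."""
        prev = "genesis"
        for record, stored in zip(self.records, self.hashes):
            if record.prev_hash != prev or record.digest() != stored:
                return False
            prev = stored
        return True


if __name__ == "__main__":
    registry = FactCheckRegistry()
    registry.append(FactCheckRecord(
        claim="Example viral claim",
        verdict="false",
        source_url="https://example.org/fact-check/123",
        checker="Example Fact Checkers",
    ))
    print("Chain valid:", registry.verify())
```

Because each record commits to the hash of its predecessor, altering or removing an earlier fact check invalidates every later hash, which is the tamper-evidence property a blockchain-backed registry relies on.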

Trust-building project: A panelist is conducting a multi-method study of people's trust in the media, including social media and news.

Platform rumour-refuting project: Sina Weibo (China) launched a rumour-refuting project that collects and publicises daily rumours, and introduced a user credit system under which spreading rumours incurs score deductions at different levels of severity.
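The report does not specify how these deductions are calculated. The following is a hypothetical sketch only: the severity tiers, point values and restriction threshold are invented for illustration and are not Sina Weibo's actual rules.

```python
# Hypothetical illustration of a tiered user-credit penalty scheme.
# Severity levels, point values and the threshold are invented for this
# sketch and are not Sina Weibo's actual credit rules.
from enum import Enum


class RumourSeverity(Enum):
    MINOR = 1      # low-reach, quickly corrected
    MODERATE = 2   # wider spread, required official refutation
    SEVERE = 3     # large-scale spread or repeated offence


# Assumed point deductions per severity level (illustrative values).
PENALTY_POINTS = {
    RumourSeverity.MINOR: 5,
    RumourSeverity.MODERATE: 20,
    RumourSeverity.SEVERE: 50,
}

STARTING_CREDIT = 100
SUSPENSION_THRESHOLD = 40  # assumed cut-off below which posting is restricted


def apply_penalty(credit: int, severity: RumourSeverity) -> tuple[int, bool]:
    """Deduct points for a confirmed rumour and report whether the
    account falls below the (assumed) suspension threshold."""
    new_credit = max(0, credit - PENALTY_POINTS[severity])
    return new_credit, new_credit < SUSPENSION_THRESHOLD


if __name__ == "__main__":
    credit = STARTING_CREDIT
    for severity in (RumourSeverity.MODERATE, RumourSeverity.SEVERE):
        credit, restricted = apply_penalty(credit, severity)
        print(f"{severity.name}: credit={credit}, restricted={restricted}")
```

In a scheme of this kind, repeated or severe offences push an account below the threshold faster, mirroring the report's point that penalties are applied at different levels of severity.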

Media literacy: A panelist shared a media literacy campaign among youth in India that builds their capacity and awareness for critical thinking, highlighting the need for more initiatives across different age groups.

 

5. Making Progress for Tackled Issues

Understanding the multidimensional nature of trust.

The need for robust, diverse national media ecosystems and a multistakeholder, multidisciplinary collaborative approach.

Promoting self-governance, developing quality journalism, fact-checking and media literacy.

Governments need to take regulatory measures while being transparent and accountable and while subjecting their actions and processes to scrutiny.

Policy discussion should grasp the nuances of the misinformation phenomenon and consider how different regional, national and local contexts, age groups and platforms interact with authentic information and disinformation.

All of these ideas and suggestions fall within the IGF's ecosystem.

6. Estimated Participation

70 onsite and 6 online participants; the number of YouTube viewers is unknown.

35 of the onsite participants were women.

7. Reflection to Gender Issues

The session discussed how age and gender affect people's trust in information and their levels of media literacy.

8. Session Outputs

Facilitated a debate to shape the evolution of best practices and principles for refuting online disinformation and fake news, and models of Internet governance.

Identified a multi-level and multi-cultural approach to understanding the sources and proliferation of misinformation and fake news.

Identified a multi-level, multistakeholder, multi-cultural and multi-disciplinary approach to resolving the problem of misinformation and fake news.

Recognised the importance of capacity building and fact-checking in refuting misinformation, especially among younger and older generations.

Recognised the advantages as well as the limitations of new technologies such as AI for reducing misinformation.

Produced policy recommendations and a key messages report for the IGF community.

Policy recommendations and key messages were reported and circulated to the IGF community via the Digital Watch Observatory's report "Misinformation, Responsibilities and Trust" (https://dig.watch/sessions/misinformation-responsibilities-and-trust?from=timeline).

Policy recommendations and key messages were reported and circulated to the academic community via Xi'an Jiaotong-Liverpool University's newsletter and website.