IGF 2023 Lightning Talk #116: Canada’s Approach to Regulating Online Safety

    Time
    Wednesday, 11th October, 2023 (07:30 UTC) - Wednesday, 11th October, 2023 (08:00 UTC)
    Room
    SC – Room H
    Subtheme

    Cybersecurity, Cybercrime & Online Safety
    Child Online Safety
    Disinformation
    New Technologies and Risks to Online Security
    Online Hate Speech and Rights of Vulnerable People

    The Dais at Toronto Metropolitan University

    Speakers

    Viet Vu, Manager, Economic Research, The Dais, Toronto Metropolitan University, Western European and Others Group (WEOG)

    SDGs

    5.b
    10.2
    16.10
    16.2
    16.6
    16.a

    Targets: Online platform governance has the potential to enhance accountability and action on online discrimination and harassment directed at women (5.b) and along disability, racial, ethnic and religious grounds (10.2), as well as to address the online abuse of children (16.2). Proposals to strengthen laws and develop new regulators advance the SDGs of developing effective, accountable and transparent institutions and strengthening national institutions to prevent violence and crime (16). Doing so must also advance and balance ongoing access to information and the protection of fundamental freedoms, including freedom of speech and expression (16.10).

    Format

    Presentation and Q&A

    Duration (minutes)
    30
    Language

    English

    Description

    Canada is in the midst of a major overhaul of its Internet governance approach. It has introduced bills to rewrite its privacy law, regulate ‘high-impact’ AI systems, require digital ad registries, and compel more Canadian content on major streaming platforms. Another law has been proposed for a new regulator to tackle illegal content on online platforms, such as hate speech, violent content and sexual exploitation -- and some are already advocating to expand its scope to include disinformation and election manipulation. How are Canadians reacting to these moves to regulate the internet in the name of online safety? What can be learned from similar efforts around the globe? Hear and discuss results from a representative survey, conducted annually over the last four years, which shows Canadians remain worried about their online safety, have low trust in online platforms to fix things, and want additional action and accountability.

    Background Paper: Survey of Online Harms in Canada https://dais.ca/reports/survey-of-online-harms-in-canada/

    Key Takeaways

    In Canada, there is significant interest in regulating serious harms that result from online interactions, with many recognizing the need for a systems approach, as opposed to one focused only on individual-level content.

    The direction of legislative design seen across governments is, in many cases, a reflection of the legislative context (existing legislation, constitutional provisions) that constrains what can be enacted, rather than of differences in fundamental opinions.

    Call to Action

    For regulators creating online harms regulation to be clear about the legislative intent, and to focus on fulfilling that specific intent rather than pursuing other, potentially unachievable, goals.

    For conversations to be clearly centered on harm as experienced by people living within the jurisdiction in question.

    Session Report

    In the session on online harms in Canada, we began by discussing the Canadian definitions surrounding online harm, reminding participants that the talk was centered on how these terms are used in Canada, which may differ from how the same terms are used in other jurisdictions, and inviting participants to stop the presenter and ask questions if any points were unclear. We then defined online harms as financial, physical, psychological, and emotional harm resulting from interactions that take place through the internet, whether or not they respect local, regional, or national borders. We then listed a number of examples of online harm, making clear that some (such as child sexual exploitation material) were illegal under existing legal frameworks, while others (such as misinformation) were harmful but legal.

    We then moved to a discussion of the results of the survey of Canadians’ experiences of online harm, which show that a significant number of Canadians are frequently exposed to harmful content. In particular, we noted that while many Canadians saw individuals as largely responsible for generating harmful content, they did not see individuals as primarily responsible for reducing the amount of harmful content online, instead seeing a larger role for online platforms and government. This finding was discussed in detail, particularly as it informs the public policy conversation on the topic.

    We then moved to a discussion of the legislative process currently under way in Canada to tackle online harms, situating the potential legislation within the broader wave of legislative activity over the past three years concerning internet governance and the digital economy, and stressing that efforts to tackle online harms in Canada cannot be understood in isolation. From there, a deeper exploration of the regulatory tensions surrounding online harms legislation followed, focusing in particular on how such legislation interacts with public sentiment in Canada, its potential impacts on the preferred economic system, and how other existing legislation (including constitutional law, which in Canada takes the form of the Charter of Rights and Freedoms) shapes the direction the legislation might take. The formal presentation finished by situating the Canadian conversation in a global context, stressing that while there is no unified approach to tackling online harm, many of the deviations seen globally likely do not reflect irreconcilable fundamental differences in definitions of online harm, but are much more likely to reflect the legislative constraints different countries face and the regulatory actions (both legal and political) available to them.

    After the talk, a number of questions were asked by participants. One concerned how legislative action could incorporate the idea of “benign exposure” to less harmful content as training to inoculate users against exposure to more harmful content. The presenter discussed at length current thinking on this topic in policy approaches to tackling mis- and disinformation, including approaches to increasing digital media literacy among different groups.