IGF 2022 Day 2 Town Hall #5 Global regulation to counter terrorist use of the internet

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> Hello, everyone.  Let's start by checking whether those who are online can hear us.  Team on Zoom, do you hear us?

     >> GRACE ROLLISON:  Hi, I can hear you perfectly. 

     >> Okay, ladies and gentlemen ‑‑ sorry, sorry.  We have a session in the room.  Okay.  I'd like to introduce this session on global regulation to counter terrorist use of the internet. 

     Let me find my ‑‑ we have a team on Zoom who will make the presentation, and then we will have a quick Q&A if time allows. 

     So, let's start by giving the floor to the team online; we have two people.  Please go ahead.  I don't know exactly who will make the presentation, but you can introduce yourselves and go ahead with the presentation. 

     >> GRACE ROLLISON:  Thank you so much.  So distinguished colleagues, good afternoon.  It's an honor to be here with you today at the Internet Governance Forum.  Please allow me to introduce myself.  I'm Grace Rollison and I'm a policy analyst at Tech Against Terrorism.  Thank you for joining my presentation today.

     To briefly provide some background, for those who are not familiar with our work, Tech Against Terrorism is a public-private partnership focused on knowledge sharing and providing support to smaller platforms in tackling terrorist exploitation of the internet and upholding human rights.

     We work with a broad range of stakeholders, from industry-led initiatives to governments and academia, as well as Civil Society organizations, and work closely with the Global Internet Forum to Counter Terrorism to provide mentorship to tech companies.

     One of our core undertakings at Tech Against Terrorism is analysis and reflection on legal responses to terrorist and violent extremist use of the internet.

     What forthcoming acts and proposals are of note?  What trends can be recorded across different legislation?  And what's the current state of play of legal responses to terrorist use of the internet? 

     So, in its present condition, terrorist use of the internet is regulated by three different types of legislation.  Firstly, there are counterterrorism acts themselves.

     These pertain to the sharing of terrorist propaganda or recruitment, as well as material support to terrorist groups. 

     So, an example of this would be a since-repealed provision in France's penal code which outlawed habitually consulting online public communication services that make available messages, images or representations either directly provoking the commission of acts of terrorism or glorifying such acts. 

     There's also Pakistan's 2016 Prevention of Electronic Crimes Act which similarly outlaws preparing or disseminating information with the intent to glorify terrorists or terrorism. 

     Another type of legislation is cybercrime acts.  These pertain to cyberterrorism and use of the internet to commit a terrorist offense. 

     Examples of cybercrime acts include Nigeria's 2015 Cybercrime Act, Kenya's 2018 Computer Misuse and Cybercrimes Act and Sierra Leone's 2020 Cybercrime Act. 

These outlaw cyberterrorism, defined as accessing a computer, system or network for the purpose of terrorism.

Finally, and arguably the most complex legislation addressing online content, there are laws aimed at countering the spread of terrorist content and laws on illegal or harmful content. 

     An example of this type of legislation is the EU's 2021 regulation on addressing the dissemination of Terrorist Content Online, the TCO Regulation.  This legislation introduced a one-hour removal deadline for terrorist content after receiving a removal order from a competent authority.  It also introduced the preservation of removed terrorist content for six months, as well as transparency reporting, points of contact and legal representatives, among other measures. 

     Another key piece of legislation is Australia's 2019 Sharing of Abhorrent Violent Material Act, passed in the wake of the Christchurch terror attack.  It requires social media platforms and other websites to quickly remove abhorrent violent material and refer it to the federal police or risk facing fines.

     A final example would be India's 2021 Information Technology Rules, which formalized what online content is prohibited in the country and allow Indian authorities to request content removal. 

     So, it's a fairly fragmented regulatory landscape.  Looking at key regulations passed in the last few years, examples would be the Australian and EU acts that I just mentioned, but also the 2017 Network Enforcement Act, which made Germany one of the first countries to require tech platforms to remove terrorist content within a short timeframe, 24 hours to be exact.

     Since then, the EU has adopted its regulation mandating a one-hour removal deadline, Pakistan passed its 2020 citizens protection rules, and India's 2021 guidelines introduced a 36-hour removal deadline after notification by the Indian authorities. 

     And looking forward to key proposed legislation, the UK Online Safety Bill's outcome and practical applications for tech platforms are still uncertain.  In its current form, the bill emphasizes protecting children and preventing terrorist propaganda. 

     On terrorist content specifically, the Online Safety Bill may require platforms to specify in their terms and conditions how they're protecting users from illegal content, addressing terrorism specifically. 

     Canada's proposal on addressing harmful online content aims to protect Canadians from this kind of content by requiring online communication platforms to block access to this sort of content in Canada.

     New Zealand is developing a regulatory framework for content moderation, aimed to tackle harmful content across all media channels, print, broadcast and digital forms of media.

     So, having considered legal responses to terrorist use of the internet from recent years and looking forward to proposed legislation, there are a number of key trends and takeaways that Tech Against Terrorism has identified. 

     Generally, there's a lack of legislation specifically addressing terrorist use of the internet.  It's addressed by legislation on cybercrime, counterterrorism acts and laws on illegal or harmful content.  This creates a fragmented regulatory landscape.

     There's a general trend with online regulation that one country will form a new type of regulation, such as the Online Safety Bill's legal-but-harmful framework, and other countries will replicate rules based on this legislative framework.

     Countries with a less democratic record have aimed to justify their own regulation by pointing to western countries' regulation as an example.

     When online regulation such as legal-but-harmful frameworks is replicated in non-democratic countries, there's a risk that these frameworks will be abused, with a critical impact on fundamental rights. 

     So, democratic countries must consider the impact that their regulation can have across the globe.  We also notice that existing legislation seeking to address terrorist use of the internet is largely focused on countering terrorist content and preventing dissemination by mandating tech platforms to prevent and rapidly remove terrorist content.

     This neglects other important aspects of terrorist use of the internet, such as terrorist‑operated websites and also terrorist use of the internet for operational purposes.  When legislation covers terrorist use of the internet for operational purposes, it often calls for mandated back‑door access to encrypted messages and data.

     It can also require platforms to remove information within a short timeframe.  Regulation also increasingly outsources legal adjudication to tech platforms, with tech companies required by law to assess whether content is illegal following a report from the authorities or, in certain instances, reports from users.

     And this is predominantly a trend in Europe, but also present in Australia's Online Safety Act. 

     And lastly, regulation increasingly mandates transparency and accountability measures from platforms.  This is being brought forward in Europe, but also in India and Australia, through requirements to produce transparency reports on compliance with new regulations and the removal of illegal content. 

     So, this brings us to the Online Regulation Series, a centralized resource collating Tech Against Terrorism's analysis of regulatory developments.

     Since 2017 and the passing of Germany's Network Enforcement Act, the first regulation mandating platforms to remove certain illegal content within a set timeframe, there've been many developments in the regulation of online speech and content and, in particular to our scope at Tech Against Terrorism, in how to counter the spread of terrorist content online.

     Regulation of online speech is often complex, and with many laws pioneering and ambitious in scope, it can be difficult for tech companies to comprehend.

     In light of this rapidly changing and complex landscape, Tech Against Terrorism provides a resource for understanding the evolving regulatory landscape.  Tech Against Terrorism was the first to provide this kind of resource on legal responses to terrorist use of the internet and online regulation.

     So, together, this forms the Online Regulation Series.  We reviewed over 100 pieces of legislation, proposals and guidelines that aim to regulate the online sphere in order to better understand the state of online regulation and the implications for tech platforms. 

     So, we focus on three key questions in the Online Regulation Series.  One, what is the global state of play with regard to online regulation?  Two, what are the regulatory initiatives that aim to regulate online content?  And three, what are the implications for tech platforms? 

     For the first edition, we covered a broad range of regions, from France, the EU, the UK, Turkey, Canada, the U.S., Kenya, Jordan and Brazil, to name a few, and with the second Online Regulation Series, we made the decision to focus on Sub-Saharan Africa, with a general intention to make the Online Regulation Series less western-focused going forward. 

     We broadened our scope in the second edition to include eleven new countries, such as Poland, Nigeria, Uganda and others.

     On the bottom left, there are statistics for the first and second editions of the Online Regulation Series: we covered over 60 regulations and legislative proposals and analyzed 18 global jurisdictions. 

     The third Online Regulation Series will be published in January 2023 and will mostly provide updates on countries previously covered, which is reflective of the regulatory landscape. 

     So, here we have a visualization of the countries we have covered and our expansion in the second and third editions of the Online Regulation Series.  With regards to methodology, sometimes high‑profile legislation that's been proposed or coming into play can make it obvious when deciding what to include in each edition of the Online Regulation Series.

     This was the case, for example, with the United Kingdom and the European Union.  We also try to take a look at each region and see where the key developments are.  We are still a small team and the regulatory landscape is constantly changing.  As part of our selection process, we opted to focus on legislation that can have a direct impact on online counterterrorism or countering violent extremism efforts for now, due to our focus on online content governance.

     So, we do not assess legislation or proposals that cover, for example, just data privacy.  Though these are important in their own right, we specifically focus on online counterterrorism, but aim to keep going and keep expanding with every year. 

     All of our analysis on online regulation, as well as our handbooks, can be found on our knowledge sharing platform, which is free and easily accessible for tech platforms, policymakers, Civil Society and so on.

     Our knowledge sharing platform also notes key takeaways directly relevant for tech platforms. 

     So, having assessed some of the general trends in the online regulatory landscape, Tech Against Terrorism has identified a number of key concerns as well as recommendations for governments and policymakers when countering terrorist use of the internet through online regulation. 

     One of our major concerns is the focus on big tech and the lack of consideration for smaller tech companies.  This is particularly problematic with regards to terrorist content, since most terrorist groups exploit smaller platforms, which face practical constraints on their capacity to be proactive due to their lack of resources or less sophisticated content moderation systems.

     By drafting legislation with larger platforms in mind, and sanctioning smaller platforms instead of offering them the support they need to counter the threat, there's a concern that public policy will have a counterproductive impact on the problem. 

     This also risks disproportionately affecting smaller platforms financially and risks harming tech sector competition and innovation. 

     Only larger tech companies have the resources necessary to ensure legal compliance or to pay fines.  Another key concern that we have is the effectiveness of online regulation. 

     Terrorist actors are sophisticated in their use of technology and have shown resilience in exploiting online platforms, whether that be to disseminate content, recruit, raise funds or organize. 

     For instance, terrorists often rely on a multiplatform approach to content dissemination, using services to share content across multiple platforms.

     However, the complex and fast‑changing strategies deployed by terrorists are all too often absent from the debate and the legislative drafting of policy. 

     In practice, this means that the focus of regulation is often misplaced and it fails to rise to the challenge posed by terrorist use of the internet.

     Consider, for instance, the removal of safe harbor protection for platforms in order to hold providers or their employees liable in law for user-generated content.  Such measures misplace the burden of responsibility for terrorist use of the internet.

     This exposure to liability penalizes those acting to counter terrorist content and ignores those actually producing propaganda and exploiting the internet. 

     Another key concern, predominantly a trend in Europe but also present in Australia's Online Safety Act, is requiring platforms to assess whether content is illegal and to act accordingly.

     International human rights standards require that the acceptability of limits to freedom of expression be decided by independent judicial bodies. 

     By mandating tech companies to remove content at scale, many regulations shift the responsibility of deciding what is harmful or illegal onto private entities, which risks undermining due process and the rule of law.

     We also have concerns around short removal timeframes and the impact these can have on freedom of expression.  Notably, platforms compelled to remove content within a short timeframe are at risk of penalties and further liability.

     This artificial choice between rapid content removal and hefty fines means that platforms will lack the time to properly adjudicate on the legality or harmfulness of content.  They're likely to err on the side of over-removal to avoid financial risk.

     This can also incentivize increased reliance on automated content moderation. 

     Our last concern is that impractically broad definitions of harmful content and circular definitions of terrorist content rarely indicate how tech platforms should operationalize these definitions in practice.

     This presents serious risks for freedom of expression.  With vague definitions of legal-but-harmful content, countries are introducing mechanisms that risk undermining the rule of law. 

     Tech Against Terrorism has compiled key recommendations for governments and policymakers to consider when formulating legal responses to terrorist use of the internet. 

     Firstly, to safeguard the rule of law.  This can be done by avoiding introducing regulation that depends on subjective interpretations of harm, which can be difficult for tech companies to implement at scale without negatively impacting freedom of expression.

     Also, to refrain from criminalizing online what is legal offline.

     Governments should provide a clear legal basis for requesting platforms to remove content including through counterterrorism laws and counterterrorism designation lists. 

     Also, to protect the freedom of expression in line with international human rights standards, by reserving adjudication on its lawful limits to independent judiciary.

     And to provide legal certainty to tech platforms by clarifying how regulatory compliance will be assessed and by providing guidance on the specific steps companies should take to comply with legal requirements. 

     Secondly, we urge governments to honor commitments to due process when implementing online regulations.  So, to provide transparent accounts of the steps taken by regulatory bodies in the exercise of their authority.

     This allows for public assessment of the extent to which these bodies are fully aware of the risks to human rights and freedom of expression associated with content moderation measures and information requests. 

     It also allows the public to assess that they're consistent in their application of the law and free of political bias when making removal orders; that they're consistent and accurate in issuing penalties, free of incentive to be overzealous in moderating content; and, importantly, that they are accountable for their operational assessments and judgements.  Governments should also clarify safeguard and redress mechanisms for users, stating what safeguards are in place and how to seek remedy when content is removed at the request of a country's judicial or governmental authority. 

     We also urge governments to encourage transparency and accountability.  Commendably, the majority of online regulation introduced between 2019 and 2021 includes provisions that seek to increase transparency and accountability from the tech sector.

     This is done through detailed terms of service and guidelines that explain what is and is not allowed on the platform and how violating content or behavior will be actioned, which is crucial to ensuring accountability.

     This informs users of the ground rules and acts as a reference to understand why content was removed or how to contest a removal decision if they believe it was removed in error. 

     Mandating platforms to have clear and detailed terms of service has become increasingly common in regulation aimed at countering illegal or harmful content.

     Governments can mandate transparency reporting on counterterrorism efforts.  Whilst we recommend transparency from the tech sector, we caution against mandating transparency reporting to a uniform standard across platforms.  This would disregard the diversity of services offered and differences in resources and capacity.

     We recommend that governments support our guidelines for transparency reporting on online counterterrorism efforts, which focus on a small number of core metrics to facilitate the evaluation of performance over time and which fully account for the importance of platform diversity.

     And we also recommend that governments publish their own transparency reports on their online counterterrorism efforts.

     We urge governments to consider the capacity and resources of smaller platforms and uphold principles of proportionality in regulation and equality before the law. 

     Most regulations proposed or passed in the last four years apply indiscriminately to platforms of all sizes, without consideration for differences in resources and capacity.

     Instead, provisions should set realistic expectations for all platforms affected by the regulation, and they should not be applied disproportionately and punish smaller platforms. 

     Governments should acknowledge the differences in platforms' resources and capacities and should draft legislation accordingly.

     For example, by allowing more time for smaller tech platforms to adapt their processes and systems to new legislation.  Or by providing them with the support they need to comply. 

     And this could be facilitated through public/private partnership endeavors and digital literacy programs.

     Some online regulations do include specific compliance requirements for larger platforms, such as in India, Turkey and France, but the definition of what constitutes a large platform is not always clear. 

     Policymakers should clarify in the regulatory framework the categorization of platform size, and they should consider not only the user base, but also platforms' resources, that is, human, financial and technical resources, in the categorization process.

     We encourage governments to safeguard human rights by excluding measures that pose a risk to freedom of expression.  Particularly, the requirement for platforms to remove terrorist and other harmful content by a set deadline can encourage overzealous removal of content.  We caution against provisions that don't allow sufficient time for platforms to adequately assess the legality of content, nor provide the necessary practical support for platforms to make these assessments correctly.

     We also urge governments and policymakers to incorporate risk assessments into regulatory procedures.  Understanding the threat is a crucial first step to effectively countering terrorist use of the internet and the diffusion of terrorist content online.

     For tech platforms, this means properly comprehending the threat that they face and the strategies employed by terrorist actors to exploit their services and evade moderation.

     Despite the obvious need for this proper understanding of the threat, we note that most tech platforms and, in particular, those with fewer resources, are unable to achieve this clear understanding. 

     So, we do welcome regulatory provisions highlighting the importance of conducting risk assessments.  Governments should conduct assessments that account for the probability of threat actor adaptation, shift or displacement, as opposed to the intended disruption of terrorist activity online, and should publish a summary of responses and any potential regulatory changes made as a result. 

     We also recommend that governments balance process and outcome targets. 

     So, consistently accurate moderation of online content at scale is not possible.  Regulations should therefore place adequate focus on process as well as outcomes.  Governments should consider and encourage solutions that look beyond just the removal of content. 

     They should be realistic about outcome targets and avoid introducing regulation that assumes that removing all terrorist content without any impact on legal or otherwise legitimate content is possible. 

     And we also call on governments to take a holistic approach to countering terrorism and violent extremism. 

     In addition to regulating terrorist and harmful material, governments should ensure that regulatory frameworks address the root causes of radicalization and hold accountable those individuals who engage in such activities. 

     So, in sum, the online regulatory landscape is fragmented and rapidly changing.  Tracking regulation on terrorist and violent extremist content is particularly difficult for tech companies, as it's scattered across various types of regulation. 

     We created the Online Regulation Series with this in mind to help keep tech platforms up to date and to support tech companies by providing key takeaways directly applicable for them.

     Overall, online regulation should be more sensitive to tech platform size, and it should consider how regulation can affect small companies and negatively impact tech sector diversity. 

     Instead of penalizing small tech platforms for their lack of capacity and resources, governments should provide them with the necessary support in countering terrorist and violent extremist exploitation of their services. 

     We urge governments and policymakers to consider our concerns and recommendations, in particular, regarding safeguarding the rule of law, human rights and transparency.

     If you do have any questions, I think I might have run out of time, but please reach out to the e-mail address on the screen.  Thank you again so much for your time, and thank you to the Internet Governance Forum for having me today. 

     >> KARIM MOHAMED:  Thank you, I have a small comment.  During previous IGFs, we had sessions on internet fragmentation, the new era of rights in global diplomacy, and what we see regarding the (?) of the internet.

     Do you think we should regulate such categorization?  Because today, the concept of terrorism can change depending on the society you are in. 

     So, how can we regulate?  How can you set standards for such a concept given the internet regulation we are facing?  We have one hand raised in the room.  Let's give them the floor, and then you can answer the comments.  Thank you. 

     >> ALEX:  I work in the UK's department for digital.  I think this is a really interesting presentation, and obviously, there's a multiplicity of legislation going on across the world.  The first thing I want to draw attention to is the changes to the Online Safety Bill, including, particularly, the removal of the legal-but-harmful measures you referred to in your presentation.

     There've been a few changes there.  We'll be focusing on the illegal content, such as terrorism, and on child protection as well. 

     I want to make one point.  Obviously, there's been quite a journey in terms of legislation to tackle terrorism over the years.  I think systems and processes really are key, the systems and processes which form the heart of the Online Safety Bill, making sure that companies have those processes in place so that terrorist content doesn't appear in the first place.

     Since the NetzDG law pioneered in 2017, I think there's been a recognition that we need to stop it in the first place.  I think you mentioned that more countries should legislate on terrorism in particular, rather than tackling it as part of a wider set of online issues.

     I wanted to ask why you think that.  I think it makes more sense to tackle terrorist content online alongside other linked issues, such as disinformation and other kinds of illegal content as well.

     So, I was wondering if you might be able to expand on that a little.  Thank you.

     >> GRACE ROLLISON:  Thank you so much for your question.  I definitely understand that they are linked issues that can have an impact on each other.  I think one of the key points is that specifically mandating terrorism measures can result in defining terrorism more precisely.  It should result in a better understanding of what terrorism means by referring specifically to designation lists, and it also helps stop this fragmentation of terrorism being mandated under different acts, which can cause confusion, especially for smaller tech companies who, as I mentioned, don't have the resources to look through all those different sorts of legislation.

     Yeah, that's definitely a great question, and they definitely do have an impact on each other in that way. 

     >> ALEX:  Thank you, and congratulations on your great work over the years as well.

     >> GRACE ROLLISON:  Thank you very much.

     >> KARIM MOHAMED:  Do we have any questions online?  On Zoom?  May I invite you to turn on your camera so we can capture the session?  Because I think it was the last one of the day.  And we have more people online than on-site.  So, if you can turn on your camera, we can take a small capture to...

     >> GRACE ROLLISON:  I can also drop my e-mail in the chat if anyone wants to e-mail questions.  I can follow up with written feedback if that would be easier.

     >> KARIM MOHAMED:  Okay, thank you all.  I think we can stop at this stage and, as you said, we'll have the report and contact details on the IGF website.  Thank you for the presentation online, and also for accepting me as local moderator.  I learned a lot of things, thank you very much.

     >> GRACE ROLLISON:  Thank you very much for having us.