IGF 2023 Reports

IGF 2023 #24 IRPC Human Rights Law and the Global Digital Compact

Human Rights & Freedoms
Session Report

The Dynamic Coalition (DC) session of the Internet Rights and Principles Coalition (IRPC) took place in two parts: a first half of prepared remarks by speakers and a second half of interactive group workshops. Between in-person and online attendance, up to 50 people participated in the session.

Panel Phase

Helani Galpaya

The Global Digital Compact (GDC) is praised for its focus on human rights but lacks clear guidelines for implementing business and human rights principles within its framework. The voluntary nature of these principles and ineffective stakeholder integration slow progress: businesses and civil society hold separate discussions instead of interacting meaningfully. The draft document does not support effective collaboration among stakeholders, and the GDC struggles to hold nations accountable for human rights violations by private companies and states. Although the GDC acknowledges the link between human rights and socio-economic rights in addressing inequality, a significant gap remains between its vision and real-world application. Proposed regulations often restrict rights, and the haste with which they are enacted compromises the GDC’s credibility. The GDC also underscores the importance of teaching online civic responsibility, drawing parallels with environmental protection efforts to highlight the role of individual actions in creating a safe digital space. Despite efforts to make its consultation process inclusive, the process remains imperfect and dominated by privileged voices. Effective ground-level action is essential after the GDC’s establishment to ensure national policies reflect its principles and objectives. In sum, while the GDC commits to human rights and socio-economic issues, challenges in clarity, stakeholder engagement, and enforcement undermine its impact; emphasizing individual online responsibilities and improving the inclusivity of consultations are crucial for meaningful outcomes.

Raashi Saxena

The GDC is a joint effort by the UN, governments, and civil society to integrate technology with the Sustainable Development Goals (SDGs) through a multi-stakeholder approach. The Internet Rights and Principles Coalition plays a key role in the GDC by promoting digital inclusion and connectivity for marginalized groups like women, migrants, and refugees, supporting SDG 10 – Reduced Inequalities. Ensuring that human rights in digital spaces match those offline, particularly in areas like freedom of expression and net neutrality, is vital for SDG 16 – Peace, Justice, and Strong Institutions. The influence of Artificial Intelligence (AI) in sectors like finance and health, and its challenges, including privacy and the rise of deep fakes, impact SDG 3 – Good Health and Wellbeing, and SDG 8 – Decent Work and Economic Growth. The IRPC emphasizes partnerships and youth involvement, aligning with SDG 17 – Partnerships for the Goals and SDG 9 – Industry, Innovation, and Infrastructure. During IRPC events, interactive group activities foster engagement and idea exchange, promoting a collaborative digital future and advancing the SDGs.

Santosh Sigdel

The Dynamic Coalition on Internet Rights and Principles is committed to upholding human rights online, creating the multilingual Charter of Human Rights and Principles for the Internet, now available in 28 languages, to facilitate global and regional stakeholder engagement. The IRPC promotes a rights-based approach to Internet frameworks and has participated in various global forums, including EuroDIG and UNESCO conferences, raising awareness about digital rights and fostering collaborations. Highlighting the importance of accurate and culturally relevant translations, the IRPC engages local stakeholders in the translation process to ensure integrity and build local understanding of the charter's principles. This not only aids in language translation but also inculcates a deep understanding of human rights among local communities, empowering them to advocate for and enforce these principles. However, balancing the regulation of online misinformation with freedom of speech presents challenges, particularly as governments may use regulatory measures to suppress free expression. This issue is prevalent in South Asia, for example, where Internet regulations often threaten freedom of speech. Moreover, awareness of the United Nations' Global Digital Compact remains low in South Asia and other developing regions, impacting its effectiveness. Ensuring broad stakeholder involvement from diverse regions is crucial for the GDC’s success. In summary, the IRPC's work in translating and promoting human rights online, while building local capacities and raising awareness, is vital. Nonetheless, ongoing efforts to maintain freedom of speech in the face of regulatory challenges and to enhance global engagement with initiatives like the GDC are essential for a rights-centric digital world.

Wolfgang Benedek

Wolfgang Benedek criticizes the Global Digital Compact, as far as its contents are currently known, for its limited progress in advancing human rights. He points out two main flaws: the compact's lack of enforcement mechanisms and the difficulty in achieving consensus among stakeholders. These weaknesses, Benedek argues, hinder the GDC's ability to effectively promote and protect digital human rights. He emphasizes the need for stronger enforcement and more collaborative decision-making to enhance the GDC’s impact on digital human rights.

Dennis Redeker

Regarding the work of the Coalition, key developments included the translation of the 10 Principles document into Japanese to engage more local stakeholders, and there are concrete plans to translate the entire Charter into Japanese by IGF 2024. To support this, a task force is seeking experts in Internet governance and international law. Dennis also pointed out that the Platform Governance Survey, conducted at the University of Bremen, highlighted a discrepancy between expected and actual influence in shaping the GDC: technical experts are viewed as the ideal leaders, while businesses are seen as overly dominant. In addition, the general population across 41 surveyed countries does not appear to be aware of the important role of governments in negotiating the GDC. The results underscore the need for broader public consultation involving citizens, NGOs, and academics to create a more inclusive digital governance framework. The Internet Rights and Principles Coalition is advancing this inclusivity by collaborating on translations with universities and student groups, enriching students' understanding of digital rights.

Group Phase

Dennis Redeker led a group discussion as part of the workshop, encouraging participants to analyze and discuss future challenges related to specific IRPC Charter articles. This activity aimed to deepen understanding and disseminate knowledge about the charter's relevance.

Audience

The audience discussed and emphasized the following topics:

1. Youth and diversity in Internet governance: the importance of involving young people in updating and translating governance documents to reflect diverse perspectives.

2. Freedom of expression: the need to balance regulation and protection of free speech, emphasizing principles like legitimacy, necessity, and proportionality to prevent government overreach.

3. Responsibilities in the digital space: the importance of clearly defining roles for states, businesses, and stakeholders in upholding human rights when regulating online content and protecting against harmful information.

4. Inclusivity and accessibility: advancements and ongoing challenges in making technology accessible for individuals with disabilities, including variations in regional sign languages and Internet accessibility.

5. Protection of children and their rights: the need for careful regulation to protect children online, the potential impacts of digital certificates on human rights, and the necessity of strategic litigation to safeguard digital rights against overreaching government actions.

Vint Cerf (in the audience) emphasized the need for users and providers in the digital space to understand and fulfill their responsibilities alongside their rights. He linked this to Rousseau's concept of the social contract, which balances individual freedoms with societal obligations. This approach, including the role of social norms, aims to foster a responsible, secure online environment. Ultimately, recognizing and upholding our duties can enhance harmony both online and in broader society.

IGF 2023 Open Forum #161 Exploring Emerging PE³Ts for Data Governance with Trust

Data Governance & Trust
Session Report

Introduction

As digital technologies increasingly intersect with privacy concerns, the OECD's insights on Privacy-Enhancing Technologies (PETs) demonstrate that these technologies can strengthen privacy and data protection and foster trust. This workshop built on that foundational work, expanding the understanding and application of PETs and exploring the multifaceted role of privacy-enhancing, -empowering, and -enforcing technologies (PE³Ts) in fostering privacy and data protection while enabling the trustworthy use of personal data for growth and well-being.

Distinguished panelists discussed not only how the combination of PETs such as synthetic data, homomorphic encryption and federated learning can enable the trustworthy collection, processing, analysis, and sharing of data, but also how digital technologies can be leveraged to enforce privacy laws, enhance transparency, improve accountability, and empower individuals to take more control over their own data. In so doing, the session provided a platform for multistakeholder dialogue, aiming to generate insights into the opportunities and challenges of PE³Ts, exploring how these technologies can foster greater trust in the digital landscape, critically contributing to a safer and more inclusive Internet for all.
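To make concrete how one of the PETs named above keeps raw data local, here is a minimal, self-contained sketch of federated averaging (FedAvg), the aggregation step at the heart of federated learning. The toy linear model and client datasets are illustrative, not drawn from the session.

```python
# Minimal sketch of federated averaging (FedAvg): clients train locally
# and share only model parameters, never their raw data.

def local_update(weights, data, lr=0.1):
    """One pass of gradient descent on a client's private data."""
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

def federated_average(client_models):
    """Server aggregates by averaging parameters across clients."""
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]

# Two clients hold disjoint private datasets for the target y = 2*x.
client_data = [
    [([1.0], 2.0), ([2.0], 4.0)],
    [([3.0], 6.0), ([4.0], 8.0)],
]

global_weights = [0.0]
for _ in range(50):  # communication rounds
    updates = [local_update(list(global_weights), d) for d in client_data]
    global_weights = federated_average(updates)

print(round(global_weights[0], 2))  # converges toward 2.0
```

Note that only the weight vectors cross the network; combining this with secure aggregation or differential privacy, as the panelists' layered-PETs framing suggests, further limits what the server can infer.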

Privacy-Enhancing Technologies in Action

Panelists detailed how PETs such as homomorphic encryption and differential privacy can be instrumental in data protection. This included the ICO's approach to PETs as tools for secure data sharing, emphasizing alignment with legal compliance and their role in facilitating safer data practices.
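As an illustration of the second technique mentioned, the following is a minimal sketch of the Laplace mechanism for differential privacy; the dataset and the epsilon value are made up for the example.

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1: adding or removing one record
    changes the result by at most 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy dataset: the true count of people aged 40+ is 4, but only a
# noisy value is released, protecting any single individual.
ages = [23, 35, 41, 29, 52, 61, 38, 47]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

A smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for protection, which is exactly the tension the panelists framed as "safer data practices."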

Panelists also shared examples, such as Mozilla's deployment of PETs and their integration into products like the Firefox browser. In this context, panelists discussed the broader implications of PETs in enhancing user privacy without compromising functionality, particularly in the context of advertising.

Digital Technologies and Privacy Enforcement

Panelists also illustrated how digital technologies for automation can streamline privacy enforcement. Insights from NOYB’s approach were shared where digital technologies are used for monitoring and addressing privacy violations effectively, highlighting the potential for scaling enforcement activities using digital tools. In this context the need for ongoing education and adoption of PETs within EU institutions and across its member states was also highlighted.

Proactive Privacy Management and Integration

Panelists later discussed concepts for proactive privacy management through software analysis, proposing methods to assess and ensure privacy from the development phase of software products. Such an approach suggests a shift towards embedding privacy considerations early in the technology design process. Panelists also stressed the importance of integrating PETs with traditional privacy management practices. In this context, they discussed the challenges and opportunities of adopting PETs in various organizational contexts, emphasizing the need for strategic privacy risk management.

Conclusion and Recommendations:

The workshop underscored the multifaceted role of PETs in enhancing and enforcing privacy within digital landscapes. The collaborative discussions highlighted the importance of integrating technological solutions with regulatory and organizational frameworks to achieve robust privacy protections. It led to the following recommendations:

  • Enhanced Collaboration: Encourage multi-stakeholder dialogues to further develop and refine PETs.
  • Increased Awareness and Training: Promote broader understanding and skill development in PETs across all sectors.
  • Guidance and Best Practices: Develop comprehensive guidelines that help organizations implement PETs effectively.

IGF 2023 WS #421 Quantum-IoT-Infrastructure: Security for Cyberspace

Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

Strengthening IoT Security by Design: The Internet of Things must incorporate security by design to counteract inherent vulnerabilities. This approach should include adopting internationally recognized standards and best practices to ensure robust security across all IoT deployments.


Adapting to Emerging Quantum Technologies: As quantum computing advances, it presents both potential threats and solutions for cybersecurity. National and global strategies should evolve to include quantum-resistant cryptographic methods to safeguard future digital communications and data integrity.
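One transition strategy commonly discussed under this takeaway is "hybrid" key establishment: deriving the session key from both a classical and a post-quantum shared secret, so security holds as long as either component survives. Below is a minimal sketch; the placeholder byte strings stand in for real ECDH and ML-KEM outputs, and the function and variable names are illustrative, not a standardized construction.

```python
import hashlib

def hybrid_kdf(classical_secret: bytes, pq_secret: bytes,
               context: bytes = b"hybrid-key-v1") -> bytes:
    """Derive a session key from BOTH shared secrets. An attacker must
    break both the classical exchange (e.g. ECDH) and the post-quantum
    one (e.g. an ML-KEM encapsulation) to recover the key."""
    return hashlib.sha3_256(context + classical_secret + pq_secret).digest()

# Placeholder secrets standing in for real ECDH / ML-KEM outputs.
ecdh_secret = b"\x01" * 32
pqkem_secret = b"\x02" * 32

session_key = hybrid_kdf(ecdh_secret, pqkem_secret)
```

Production protocols use vetted KDFs and fixed-length encodings of the inputs, but the principle is the same: quantum preparedness need not mean abandoning well-studied classical cryptography overnight.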

Calls to Action

For Governments and Large Industries: Initiate and enforce policies requiring that security by design is a fundamental criterion in the procurement of ICT services and products. This shift will drive broader adoption of secure practices throughout the technology ecosystem, contributing to a safer internet environment.


For the Technical and Academic Community: Collaborate on research and development of quantum-resistant cryptographic techniques. This collective effort is crucial to prepare our digital infrastructure for the arrival of scalable quantum computing technologies.

Session Report

Session Content
Introduction and Opening Remarks:
Carina Birarda opened the session by highlighting the escalation of cybersecurity incidents globally and the importance of international standards in countering these threats.

Presentations:

Wout de Natris focused on IoT security from a consumer and procurement perspective, emphasizing the necessity of security by design in IoT devices and the role of governments and industries in demanding secure ICT services.
Carlos Martinez Cagnazzo discussed the security of critical internet infrastructure, detailing the deployment of DNSSEC and RPKI and the broader implications for routing and domain name resolution security.
Maria Luque explored the implications of quantum technologies on cybersecurity, addressing the vulnerabilities of current cryptographic systems and the potential of quantum computing to disrupt security protocols.
Olga Cavalli shared insights from the perspective of governmental cybersecurity strategies, focusing on capacity building and public policy in Argentina, and highlighted the unique challenges faced by developing countries.
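Carlos Martinez Cagnazzo's point about RPKI can be made concrete with the route origin validation logic standardized in RFC 6811: a BGP announcement is checked against Route Origin Authorizations (ROAs) that bind a prefix, a maximum prefix length, and an authorized origin AS. A minimal sketch follows; the ROAs and AS numbers are made up for illustration.

```python
from ipaddress import ip_network

# Made-up ROAs: each authorizes an AS to originate a prefix,
# up to a maximum prefix length.
roas = [
    {"prefix": ip_network("192.0.2.0/24"), "max_length": 24, "asn": 64500},
    {"prefix": ip_network("198.51.100.0/22"), "max_length": 24, "asn": 64501},
]

def validate_origin(announced_prefix, origin_asn):
    """Classify an announcement per RFC 6811 origin validation."""
    announced = ip_network(announced_prefix)
    covered = [r for r in roas if announced.subnet_of(r["prefix"])]
    if not covered:
        return "not-found"  # no ROA covers this prefix
    for roa in covered:
        if (roa["asn"] == origin_asn
                and announced.prefixlen <= roa["max_length"]):
            return "valid"
    return "invalid"        # covered, but wrong origin AS or too specific

print(validate_origin("192.0.2.0/24", 64500))    # valid
print(validate_origin("192.0.2.0/24", 64999))    # invalid (wrong origin AS)
print(validate_origin("203.0.113.0/24", 64500))  # not-found
```

The "invalid" outcome is what lets routers drop hijacked or leaked routes, which is the routing-security benefit the presentation pointed to alongside DNSSEC's protection of name resolution.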
Discussions and Q&A:
The discussion session facilitated by Carina Birarda allowed panelists to delve deeper into strategies for advancing cybersecurity across different sectors and the need for a unified approach to tackle emerging challenges. Topics of particular interest included the adoption of security by design, the implementation of international standards, and the potential impact of quantum technologies on global security.

Key Outcomes:
Consensus on Security by Design: There was unanimous agreement on the necessity of integrating security features at the design phase of IoT and other critical technologies.
Call for Collaborative Action: The need for collaborative efforts among governments, industries, and the technical community was emphasized to enhance the adoption of best practices and standards.
Focus on Quantum Preparedness: The discussions underscored the urgency of preparing for quantum technological advancements by developing quantum-resistant cryptographic methods.
Conclusions:
The session concluded with a strong call to action for all stakeholders to enhance cybersecurity measures, adopt robust security protocols, and prepare for the challenges posed by quantum computing. The insights shared will contribute to the formulation of comprehensive guidelines and best practices for securing cyberspace.

IGF 2023 Networking Session #64 Worldwide Web of Youth: Cooperation for Enlightenment

Digital Divides & Inclusion
Key Takeaways:

1. Youth engagement is one of the key drivers of Internet governance: young people are an open-minded and enthusiastic generation that can listen to opinions from all over the world and take real action.


2. Enlightenment projects in the IT field are needed in the Global North no less than in the Global South, because they help share approaches from all over the world.

Calls to Action

1. Involve more tech-minded youth in the IG field via education projects and NRIs.


2. Create a more inclusive space for online participants of IT conferences, so that more people can share their views on IG topics.

Session Report

The networking session Worldwide Web of Youth: Cooperation for Enlightenment allowed young people to present their projects aimed at involving youth in Internet governance, developing digital literacy, and drawing youth into IT.

Pavel Pozdnyakov presented the Summer School on Internet Governance, the “Digital Reality” discussion club, the CC Youth Council, and a special course for young people held on the eve of the Russian Internet Governance Forum. Pavel also added that the Russian IGF traditionally concludes with a youth session, where young people present Russian projects related to Internet governance and speakers from other countries share their experience in involving young people in the field.

The Center for Global IT Cooperation spoke about its youth projects, in particular the Youth Internet Governance Forum (Youth RIGF), which has been held every year since 2021, as well as the initiative born on the sidelines of this forum: the Institute of the Youth Digital Ombudsman.

Marko Paloski and Shradha Pandey explained how to get involved in the ISOC community and what opportunities open up as a result. They pointed out that it is a great platform for starting to learn about Internet governance and joining the IG movement, as well as a good place to present one's own projects in digital literacy or related fields, and that the supportive youth community will always help.

All of the speakers advised starting projects within one's own country or region. They also encouraged more tech-minded youth to get involved in the IG movement, and urged the IGF itself to make online participation more meaningful for everyone.

IGF 2023 Networking Session #80 Radical Imaginings-Fellowships for NextGen digital activists

Human Rights & Freedoms
Session Report

Radical Imaginings-Fellowships for NextGen digital activists (Day 1, 16:30-17:30 UTC+9)

Young people need to be at the forefront of shaping the digital institutions and economies of today. But how do we create enduring pathways to effectively support this goal? How can we bring the voices of the most vulnerable into youth-led action? What models and approaches can we look to that are already in play? What are their successes and where are they falling short?

This networking session aimed to kick-start a community dialogue around re-imagining a model for fellowships that can facilitate early-career scholars and activists to be engaged in truly transformative work on the digital economy, and pioneer visions for feminist, sustainable, equitable and just alternative futures. It focussed on:

Understanding the needs and challenges of young activists working in CSOs, research organizations, academia and trade unions

Identifying key areas and types of work that remain significantly under-resourced and overlooked towards digital justice

Determining short to mid term priorities for action and the fora for advocacy

The networking session brought together young activists and professionals, organizations and grant makers to draw from their experiences and debate and deliberate upon the challenges for and possible solutions and good practices concerning fellowships in the field of digital governance/digital activism.

Challenges discussed

  • Highly competitive nature of fellowships (competition between fellows of similar background/field)

  • Online-only character of some fellowships that might limit accessibility

  • Lack of continuity between the fellowship period and after, where former fellows join a very competitive field afterwards without further support

  • Limited funding avenues apart from Big Tech funders

  • Lack of involvement of fellows in the design of projects they work on

  • Defining fellowships too narrowly as “digital” (e.g. not considering social or environmental challenges)

Solutions/good practices discussed

  • Inviting fellows from different fields and places to lower the internal competition

  • Involving fellows in the governance of the fellowship programs, e.g. through electing future cohorts of fellows or by co-designing the projects

  • Being flexible with the goal of the fellowship as people’s lives change and new opportunities or limitations arise

  • Involving fellows in all activities of the hosting organization rather than treating them as a distant add-on

  • Fostering networking among fellows with internal and external partners

  • Extending greater trust to fellows in balance with guidance and mentorship

  • Tapping into alumni networks as a way to support fellows post tenure

  • Centering respect and trust in funding, having limited rules on what to spend the money on and allowing fellows to prioritize resources


IGF 2023 Day 0 Event #79 A Global Compact for Digital Justice: Southern perspectives

Global Digital Governance & Cooperation
Calls to Action

We need to move the GDC forward in a manner that grapples honestly and boldly with its implementation challenges: how principles and rules can and must address inequality and injustice in the digital paradigm. Anything less will only embolden the few corporations and countries that desire to keep the status quo. This is untenable and will be unacceptable.

Session Report

The inequality of the digital economy presents an urgent challenge to development and democracy. If Agenda 2030 is to be realized, bold and committed action is needed to a) share the benefits of digitalization with all countries and peoples, b) govern digital resources democratically, and c) make digital policies and laws fit for catalyzing innovation that counts. The ultimate test for a well-guided digital transition is in the public and social value it can create, and the human freedoms it can expand. The political declaration adopted at the High-Level Political Forum on Sustainable Development in September 2023 rightly alludes to the participation of all countries in the digital economy. Its focus on infrastructure, connectivity, and the affirmation of the digital rights of people is noteworthy. The Global Digital Compact (GDC) will need to carry this consensus forward, attending to the particularities of our common digital future.

The 2023 Internet Governance Forum (IGF) pre-event in Kyoto on ‘A Global Compact for Digital Justice: Southern Perspectives’ was proposed by the Global Digital Justice Forum, the Dynamic Coalition on Platform Responsibility, and the Dynamic Coalition on Internet Rights and Principles to explore the central question: how can we build a GDC that furthers digital justice, especially in the majority world?

The event brought together speakers from governments and civil society in a multistakeholder dialogue structured in an innovative ‘BUILD IT, BREAK IT, FIX IT’ format.

The BUILD IT round delved into the promise of the GDC to fix global governance deficits in digital cooperation, as seen through the prism of the intergovernmental organizations in charge of the World Summit on the Information Society (WSIS) action lines, governments, and civil society representatives. The following speakers made inputs during this round.

  • Amandeep Singh Gill, UN Secretary-General's Envoy on Technology

  • Regine Grienberger, Cyber Ambassador, German Federal Foreign Office

  • Shamika N. Sirimanne, Director, Division on Technology and Logistics, United Nations Conference on Trade and Development (UNCTAD)

  • Alison Gillwald, Executive Director, Research ICT Africa

  • Renata Avila, CEO, Open Knowledge Foundation

The session began with the UN Tech Envoy Amandeep Singh Gill’s inputs, who affirmed the idea of building through the GDC, a shared vision and a global framework for digital governance that is negotiated by governments but is open to participation by regional organizations, private sector, and civil society. He emphasized the need to a) shape a transition away from a solutions orientation to ecosystems and infrastructures for digital development, and b) go beyond the connectivity paradigm, and shift the attention towards digital public infrastructure to create inclusive innovation spaces that focus more on capacity.

Regine Grienberger, Cyber Ambassador at the German Federal Foreign Office, began by acknowledging the continued digital divide and its significant impact on the SDG process, and suggested that this be an important focus of the GDC. Grienberger also advocated for the consultative process to take a local/national-to-global approach, and emphasized the need to engage in more cross-regional discussions, especially on issues like artificial intelligence (AI). Additionally, she made the critical observation that the GDC process needed to be anchored in the basic tenets enshrined in cornerstone UN documents, such as the Human Rights Charter.

In her input, Shamika Sirimanne from UNCTAD observed how the gains of connectivity have been skewed, with a few transnational corporations and nation-states being able to embrace the digital revolution optimally while others lag behind. Given that the structural inequalities in the digital order compound the effects of other inequalities, we are confronted increasingly by a digital inequality paradox, where, as more people are connected, digital inequality is amplified. In this context, Sirimanne underscored that the GDC process had an imperative to go beyond the connectivity paradigm and bridge the gap between actors who possess the technological and financial resources needed to harness the digital and those who don’t. She outlined the need for quality and affordability of access, skilling opportunities to navigate the digital economy, and equal participation of countries in the global regime to shape the rules of the game so that the opportunities of the digital paradigm could be reaped more equitably.

Meanwhile, Alison Gillwald from Research ICT Africa pointed to the most pressing global challenges of our time, which include the climate crisis and the issue of widening inequality, including digital inequality as a starting point to her input. These need to be addressed through a collective and collaborative renewal of the social contract that was anchored in human rights and gender equality in order to rebuild trust and social cohesion and enhance digital inclusion. Like Sirimanne, Gillwald observed that the layering of advanced digital technologies over underlying structural inequalities compounds the effects of digital inequality, especially in regions with glaring infrastructure and capacity deficits like Africa. In this regard, she noted that the GDC process needed to focus on infrastructure and digital public goods.

The concluding input of the round came from Renata Avila from the Open Knowledge Foundation who argued that for many countries of the Global South contending with a severe debt crisis and lack of resources, decisive action that could address the geopolitics of global inequality and injustice was the top priority. Avila emphasized an urgent need for financing and international commitments for the development of digital infrastructure, skills, and regulatory capacities for all countries to navigate the terrain, as well as renewed commitments from international financial institutions towards these goals. Additionally, she pointed to the unmet promise of knowledge equality and the trend of knowledge capture of think tanks, academia, and civil society by Big Tech. In this regard, she held the reform of the IP regime as an important agenda for the GDC to take up.

The BREAK IT round in turn, critically interrogated the efficacy and effectiveness of the proposals in the GDC across its various dimensions, focusing on information disorder, AI and human rights, reining in Big Tech power, guaranteeing a free and open internet, and IGF reform for effective digital governance mechanisms at the global level. The following speakers made inputs as part of this round.

  • Helani Galpaya, CEO, LIRNE Asia

  • Alexandre Costa Barbosa, Fellow for the Weizenbaum Institute and Homeless Workers Movement - Technology Sector, Brazil

  • Nandini Chami, Deputy Director, IT for Change

  • Megan Kathure, Afronomicslaw

  • Dennis Redeker, University of Bremen and Digital Constitutionalism Network

Helani Galpaya from LIRNE Asia noted in her critique of the GDC process that several developing countries, faced with an immense fiscal squeeze, focused their resources on basic development needs and were unable to spare attention for digital governance issues, which compromised dialogue and involvement in the process overall. Galpaya also highlighted the inability of the GDC to address the disparity of national regulations on critical issues such as taxation, or to grapple with the unacknowledged reality of a highly fragmented digital landscape, which made consensus building a difficult proposition. Additionally, she pointed out the failure of the multilateral system to hold its own member states accountable for draconian digital laws and policies that were harmful to citizen rights, something the GDC process had not really taken into account.

In his input, Alexandre Costa Barbosa from the Weizenbaum Institute and the Homeless Workers Movement - Technology Sector, Brazil, focused on the key aspect of sustainable digital public infrastructure (DPI) and the lack of clarity around the concept. In the absence of a multistakeholder dialogue or collective definition, this important aspect of the GDC was in danger of being defined and captured by a Big Tech spin on the discourse, rather than allowing for the possibilities of interoperable, open, and accessible DPIs that are locally responsive. Barbosa additionally pointed to the silence on the critical issue of labor and contended that the GDC process must have more discussion on this topic, in particular its connections to the field of generative AI.

Nandini Chami from IT for Change, in her critique, underscored how the aspirations of the WSIS seem to have been forgotten and waylaid in the GDC process. She further observed that the reduction of data rights to privacy, as current discourse is prone to, simply erases data extractivism, which continues to be the fault line of geopolitical and geo-economic power. In this context, the GDC process did not fully recognize that rights in data extend to people’s claims over data resources and their right to collectively determine how they see value generation from digital intelligence.

Pointing to the inversion of basic marketplace rules in the way Big Tech controls public functions, recasts society and citizens into individual users and consumers, and squeezes labor in transnational AI chains, Chami urged the audience to push back against the silent consensus that Big Tech cannot be regulated. She called for political commitment to begin the change and for member states to measure up in this regard.

Meanwhile, Megan Kathure from Afronomicslaw observed that the historical choices in internet governance that enabled the rise of Big Tech had also given rise to a narrative of the 'limits of multistakeholderism' in bringing forth a global digital constitutionalism. She stressed that the fundamental issue with the current GDC process is that it risks entrenching the regulatory dilemma of global governance of the digital and affirming this narrative. In her input, Kathure highlighted two gaps in the current GDC process. The first is that it fails to acknowledge the complementarity of rights with state duties, simply expecting states to refrain from certain actions without enshrining corresponding duties. She argued that the GDC must go beyond taking multilateral commitments from states and corporate actors and needs to outline a regime of consequences for inaction, thus dealing head-on with the realpolitik of global digital governance. Second, Kathure observed that the GDC process does not conceptualize human rights holistically, noting that current proposals fail to adequately capture the indivisibility of human rights.

In the concluding input for the round, Dennis Redeker from the University of Bremen and the Digital Constitutionalism Network highlighted emerging findings from research on how the general public in various countries viewed the consultative process. Redeker highlighted the discrepancies between the agendas that dominated and those that people held as important and wanted more involvement in, and pointed to a consensus among the general public in favor of reduced involvement of the private sector in policy processes.

In the FIX IT round, the session gathered responses to the issues raised in order to conclude with a forward-looking roadmap on what the GDC needs to foreground to further an inclusive, people-centered, development-oriented digital future. The following speakers made inputs as part of this round.

  • Ana Cristina Ruelas, Senior Program Specialist, United Nations Educational, Scientific and Cultural Organization (UNESCO)

  • Anriette Esterhuysen, Senior Advisor, APC

  • Prapasiri “Nan” Suttisome, Project Officer, Digital Rights, Engage Media

  • Emma Gibson, Global Coordinator, Alliance for Universal Digital Rights for Equality Now

  • Luca Belli, Professor, Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro

Ana Cristina Ruelas from UNESCO, highlighted the regulatory efforts undertaken by UNESCO for a new platform society. Ruelas observed that a lot of ground needed to be covered in the local-to-global regulation of social media platforms and the algorithmic control. Additionally, she pointed to the fact that no one actor could solve all issues and proposed the idea of a regulatory framework of networks, which would allow stakeholders to take a more interconnected approach to digital governance.

Anriette Esterhuysen from APC urged stakeholders to look at the existing norms and principles in the digital space as a starting point. She also held that the GDC was not being meaningfully informed by the current state of digital inequality and urged for this tokenism to be challenged. What is to be put at the center is not the techno-fascination of the corporate narrative but a people-created and -controlled narrative. Esterhuysen called for a feminist and radical vision of digital transformation in this regard. She stressed the importance of granular data and public statistics to allow for a clear cognizance of the depth and breadth of economic injustice and the uneven distribution of opportunities associated with the digital.

Prapasiri “Nan” Suttisome from Engage Media, in her input, pointed out how powerful countries use free trade agreements to stifle the digital rights of peoples and countries in the Global South. Trade rules are used to arm-twist governments into hyperliberalizing data flows, taking away the local autonomy of public authorities to govern transnational corporations and their algorithms, preventing the scrutiny of source code, and legitimizing a permanent dependence of developing countries on the monopoly corporations controlling data and AI power. This kind of infrastructural dependence is tantamount to a neo-colonial order, and Suttisome observed that unless the indecency and impunity of some actors in the digital space is countered, and countered now, any compact is bound to fail.

Meanwhile, Emma Gibson in her input presented the work being undertaken by the Alliance for Universal Digital Rights (AUDRi) for Equality Now, and called for the adoption of a universal digital rights framework, rooted in human rights law and underpinned by an intersectional feminist perspective. The GDC needs to be a feminist process to be truly transformative. She presented the nine principles developed by AUDRi, based on equal protection from persecution, discrimination, and abuse; equal access to information, opportunity, and community; and equal respect for privacy, identity, and self-expression.

In the concluding input, Luca Belli from FGV presented three structural challenges that make the GDC process ineffective. Belli pointed to the fragmented landscape, which goes beyond geography and extends to the trend of taking siloed regulatory approaches to digital issues; the presence of outsized political and economic interests that play against policy strategies (for instance, between the private sector and domestic governments); and the fact that, for the private sector, the bottom line of shareholder interest always trumps the public interest, making regulatory compliance a challenge at all times. By way of remedies, Belli suggested moving the GDC forward in a manner that grapples honestly and boldly with its implementation challenges.

 

IGF 2023 DC-DAIG Can (generative) AI be compatible with Data Protection?

Updated:
AI & Emerging Technologies
Key Takeaways:

AI transparency and accountability are key elements of sustainable AI frameworks, but different stakeholders and policy debates define and interpret such concepts in heterogeneous fashion.


Most AI governance discussions are focused on, and led primarily by, developed countries. The Data and AI Governance (DAIG) Coalition has proved to be one of the few venues with a strong focus on AI in the Global South.

Calls to Action

The DAIG Coalition will keep promoting the study of key data and AI governance issues, such as algorithmic explicability and observability, which are critical to achieving sustainable policy frameworks.


The DAIG Coalition will maintain and expand its focus on Global South perspectives, striving to increase participation from African countries.

Session Report

Session report: Can (generative) AI be compatible with Data protection?

IGF 2023, October 10th, 2023, WS 10 - Room I


The session explored the tension between the development and use of AI systems, particularly generative AI systems such as ChatGPT, and data protection frameworks. The DC aims to present a diverse set of views, in the spirit of multistakeholder debate, from various sectors, countries, disciplines, and theoretical backgrounds.

Professor Luca Belli, Director of the Centre for Technology and Society at FGV Law School, opened and moderated the session. He discussed the concept of AI Sovereignty – “the capacity of a given country to understand, muster and develop AI systems, while retaining control, agency and, ultimately, self-determination over such systems”. Regulating generative AI involves a complex web of geopolitical, sociotechnical, and legal considerations, whose core elements compose the AI Sovereignty Stack.

Armando Manzueta, Digital Transformation Director, Ministry of Economy, Planning and Development of the Dominican Republic – gave insights on how governments can use generative AI in their infrastructure and public services. When an AI system complies with data privacy laws and has a transparent decision-making mechanism, it has the power to usher in a new era of public services that can empower citizens and help restore trust in public entities, improving workforce efficiency, reducing operational costs in the public sector, and supercharging digital modernization.

Gbenga Sesan, Executive Director, Paradigm Initiative, Nigeria – emphasized the role of existing data protection laws, but also how this ongoing discussion on generative AI opens an opportunity for the countries that do not yet have a data protection law to start considering introducing one to regulate mass data collection and processing. There is also a need to de-mystify AI and make it more understandable to people. Sesan also pointed out that there is a lack of diversity in the models of generative AI like ChatGPT, as well as a need to establish review policies or mechanisms when they deal with information on people.

Melody Musoni, Policy Officer at the European Centre for Development Policy, South Africa – spoke on how African countries are taking steps to carve out their position as competitors in the development of AI. There is a need for AI that solves problems in the African region; for example, the digital transformation strategy showed the urgency for Africa to start looking into AI and innovation to develop African solutions. The speaker also mentioned setting up data centers through public-private partnerships.

Jonathan Mendoza, Secretary for Data Protection, National Institute of Transparency, Access to Information and Protection of Personal Data (INAI), Mexico - explores current and prospective frameworks, giving a descriptive account of ongoing efforts to promote transparency and accountability. Due to the diverse nature of the population in the Latin American region, generative AI can pose a threat, and therefore a policy for processing personal data must be in place. There is also a need to balance the ethical design of AI models with their implementation, making these models more inclusive and sustainable while reducing potential threats.

Camila Leite, Brazilian Consumers Association (Idec) - explored the general risks of AI for Brazilian consumers. Financial and mobility services can benefit immensely from generative AI; however, there have been instances in which generative AI output was found to be manipulative, discriminatory, and privacy-violating. It is important to put consumer rights and protection at the heart of policies regulating generative AI.

Wei Wang, University of Hong Kong - elucidates the disparate conceptualizations of AI accountability among various stakeholders in the Chinese context, thereby facilitating an informed discussion about the ambiguity and implementability of normative frameworks governing AI, specifically generative AI. China has a sector-specific approach, contrary to the comprehensive one seen in the EU, UK, etc., and has established measures to comply with sectoral laws and intellectual property laws.

Smriti Parsheera, Researcher, CyberBRICS Project, India - discusses the why and how of transparency obligations, as articulated in the AI governance discussions in India and select international principles. She argues that the need for transparency permeates through the lifecycle of an AI project and identifies the policy layer, the technical layer, and the operational layer as the key sites for fostering transparency in AI projects.

Michael Karanicolas, Executive Director, UCLA Institute for Technology, Law and Policy - argues for the need to develop AI standards beyond the “auspices of a handful of powerful regulatory blocs”, and calls for the inclusion of the Majority World into standard-setting processes in international fora.

Kamesh Shekar, Senior Programme Manager, Privacy & Data Governance Vertical, The Dialogue - argues for a principle-based approach coupled with a detailed classification of AI harms and impacts. He proposes a detailed multistakeholder approach that resonates with the foundational values of responsible AI envisioned by various jurisdictions geared toward ensuring that AI innovations align with societal values and priorities.

Kazim Rizvi, Founding Director, The Dialogue - spoke about domestic coordination of regulation followed by international coordination. Alternative regulatory approaches can also be explored through public-private partnerships.

Giuseppe Cicu, PhD Student at the University of Turin and corporate lawyer at Galgano Law Firm - spoke about a framework for regulating AI by corporate design, fitting business management and AI governance concerns into a step-by-step implementation process, from strategic planning to optimization. He provided a game plan for responsible AI: bringing transparency and accountability into the organizational structure of the firm and keeping a human in the loop. The approach is grounded in the global human rights framework and privacy policies. He suggests that corporations introduce an ethical algorithmic legal committee.

Liisa Janssens, LLM MA, scientist in the Military Operations department, Defence, Safety and Security unit, TNO, the Netherlands Organisation for Applied Scientific Research - provides a focused responsible-AI framework for military applications, developed through a scenario-setting methodology for considering AI regulation's virtues and shortcomings. The disruptive nature of AI is considered in the face of the demands of Rule of Law mechanisms, to trace the requirements that make up the responsible use of AI in the military.

Comments and questions: What are the key privacy principles at a normative level (e.g., transparency, data minimisation, purpose limitation) that should be ensured so that generative AI can comply with them? Will data protection laws expand their scope to include non-personal data, since most of the data used to train generative AI is non-personal?

IGF 2023 DCPR A new generation of platform regulations

Updated:
Key Takeaways:

1. Need for Platform Regulation: Professor Yasmin Curzi and Professor Luca Belli have consistently stressed the urgent need for the regulation of digital platforms. The DCPR, for nearly 10 years, has been a prominent entity in advancing research and championing actionable solutions. Their comprehensive studies highlight the significance of digital platforms on democracy, markets, and human rights.


2. Emphasis on Transnational Dialogues: Professor Belli accentuates that mere regulation isn't sufficient. A deeper understanding of systemic risks requires global conversations that consider the unique aspects of local contexts. The DCPR has concentrated on various legislative frameworks, such as those in Brazil, India, and China, and the EU regulations, to appreciate how these platforms influence and adapt within different environments.

Calls to Action

Importance of an open, accessible internet governed by multiple stakeholders, encompassing gender equality, children's rights, sustainable development, and environmental aspects. All entities, from governments to the private sector, must use these principles as benchmarks for their internet governance. Platform governance discourse also needs to delve into the substantive concerns that platforms pose, such as their environmental and labour impacts.

Session Report

The Dynamic Coalition on Platform Responsibility (DCPR) session at the Internet Governance Forum (IGF) 2023 provided an invaluable forum for discussing the multifaceted challenges and opportunities in digital platform governance. This session was marked by insightful dialogues among experts from diverse fields, reflecting the DCPR's commitment to fostering a multi-stakeholder approach in addressing the complexities of platform regulation.

Key Highlights and Discussions:

  1. Professors Luca Belli and Yasmin Curzi, DCPR coordinators, highlighted the decade-long commitment of the DCPR in researching and addressing the challenges posed by digital platforms. They stressed the importance of not only acknowledging the necessity for platform regulation but actively engaging in research and practical solution-seeking.
  2. Professor Belli underscored the need for fostering transnational dialogues to address systemic risks presented by digital platforms. The session delved into legislative frameworks in countries like Brazil, India, China, and the European Union, emphasizing the need for context-sensitive regulation.
  3. Tatevik Grigoryan from UNESCO introduced the concept of Internet Universality, advocating for a global approach to internet governance based on principles of openness, accessibility, multi-stakeholder participation, and addressing cross-cutting issues.
  4. Samara Castro highlighted Brazil's proactive stance in social media regulation and misinformation control, discussing legislative, executive, and judiciary efforts. Brazil's experience serves as an inspiration for other nations in creating a safer, transparent internet.
  5. Anita Gurumurthy and Monika Zalnieriute emphasized the need to go beyond procedural principles and address substantive concerns like platforms' environmental impact and labor effects, calling for a holistic approach to platform governance.
  6. Rolf Weber emphasized the importance of accountability beyond compliance and the necessity of observability in platform governance, suggesting a model where platforms are transparent and answerable in their operations.
  7. Shilpa Jwasant provided an in-depth analysis of the Indian context, focusing on the recent developments in the IT Act. She highlighted how the Act is shaping the digital landscape in India, discussing its impact on user rights, data privacy, and the regulatory challenges faced by digital platforms operating in India. Jwasant’s insights into India’s regulatory approach underscored the balance between harnessing technological advancements and protecting fundamental rights.
  8. Sofia Chang delved into the Chinese scenario, particularly the country’s approach to algorithmic regulation. She elaborated on how China is navigating the complex interplay between technology, state control, and user rights, offering a unique perspective on how algorithmic governance is evolving in a highly digitized society that prioritizes digital sovereignty.
  9. Monika Zalnieriute brought a critical lens to the discussion on informational pluralism on social media platforms. She raised concerns about the private and opaque interests of big tech companies, emphasizing the need for greater transparency and accountability in how these platforms manage and disseminate information. Zalnieriute argued for a more equitable digital ecosystem that respects diversity of thought and counters the monopolistic tendencies of major tech firms.

The session benefitted from active participation from in-person attendees, who provided feedback and posed questions, enriching the discussions. Their contributions highlighted the global interest in developing effective platform governance models and underscored the need for inclusivity in these dialogues.

Conclusion: The DCPR session at IGF 2023 successfully facilitated a comprehensive exploration of digital platform regulation, stressing the importance of a multi-stakeholder, inclusive approach. The discussions and calls to action from this session are expected to guide future strategies and policies in the realm of platform responsibility.

IGF 2023 DC3 Community Networks: Digital Sovereignty and Sustainability

Updated:
Digital Divides & Inclusion
Key Takeaways:

There are different dimensions of sustainability, environmental sustainability being one of them. Community networks provide added value beyond connectivity - local services and content - and promote a circular economy.

Calls to Action

Raise awareness about CNs in urban areas - they are valuable not only for remote rural areas. Another option is to build community networks to support existing local services, instead of providing connectivity first and adding services on top.

Session Report

The IGF 2023 session of the Dynamic Coalition on Community Connectivity focused on the digital sovereignty and environmental sustainability aspects of community networks. Session participants provided their perspectives and best practices. Some panelists authored and co-authored the official DC3 outcome, a report titled "Community Networks: Building Digital Sovereignty and Environmental Sustainability".

The session started with the launch of the report, presented by Luca Belli. The report is a compilation of five papers/chapters.

Atsuko Okuda from ITU Asia Pacific opened the session, presenting some ITU statistics on the state of connectivity in the region and globally.

Raquel Gatto from CGI Brazil spoke about community networks initiatives in Brazil and the newly formed working group within Anatel, the telecom regulator.

Amreesh Phokeer from the Internet Society presented some of ISOC's community network initiatives, and provided insights on their environmental impact.

Pedro Vilchez from guifi.net presented guifi's efforts to incorporate circular economy into their project.

Nils Brock from Rhizomatica / DW Academy spoke about using local materials such as bamboo for building towers, and the positive impact on the environment that comes with the use of local resources.

Carlos Baca from Rhizomatica presented the initiative of National Schools on Community Networks and how capacity building contributes to environmental sustainability.

In his closing remarks, Luca Belli highlighted the link between community networks and digital sovereignty.

 

IGF 2023 Open Forum #57 Procuring modern security standards by governments&industry

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

1. Modern internet standards (such as IPv6, DNSSEC, HTTPS, DMARC, DANE and RPKI) are essential for an open, secure and resilient Internet that serves as a driver of social progress and economic growth. Such standards have been developed, but their use needs to increase significantly to make them fully effective. Procurement policies have proven to be an effective means of ensuring that these standards get traction and are used more widely.


2. Not using modern standards is a risk for the individual internet user. However, users are often not aware of it (because standards work "under the hood"), and economic network effects prevent users from benefiting fully and immediately ("first-mover disadvantage"). Research by IS3C has shown that public-private partnerships can play a crucial role in creating the transparency and awareness needed to reach critical mass.

Calls to Action

1. To governments and TLD registry operators: Monitor the usage of modern internet security standards (such as IPv6, DNSSEC and RPKI) in the public sector and in society. For this, they can make use of open source tools such as https://Internet.nl and even extend them (e.g., tests for Universal Acceptance and for accessibility). Such tooling provides transparency, helps end-users articulate their demand, and creates an incentive for vendors to comply.


2. To governments and industries: Publish procurement policies regarding modern internet security standards. These can be reused by others when creating procurement policies, and vendors can use them as requirements for their software and systems. The list of the most important internet security standards created by IS3C (https://is3coalition.org/) can be used as a reference (consultation open until 5 Nov 2023).

Session Report

Moderator Olaf Kolkman introduced this Open Forum by elaborating on the role of modern security standards in securing the internet. He emphasized that we need to secure the internet for the common good. One of the challenges that comes with securing the internet is the slow adoption of security standards. Therefore, this Open Forum highlights tools that enhance the adoption of modern security standards.

The Role of Open Standards, Particularly in Procurement: Experiences in the Netherlands

Modern internet standards (such as IPv6, DNSSEC, HTTPS, DMARC, DANE and RPKI) are essential for an open, secure and resilient Internet that serves as a driver of social progress and economic growth. Gerben Klein Baltink and Annemieke Toersen explained the role of standards in procurement and their experiences in the Netherlands. The role of open standards in promoting a safer, more secure, and well-connected internet has become increasingly recognized, with initiatives like the internet.nl test tool contributing significantly to this progress. The tool is primarily aimed at organizations, attracting both technical personnel and board members, and allows them to assess whether their mail, website, and local connections comply with established standards.
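To make concrete the kind of check such a tool runs for one of these standards, the following minimal sketch evaluates a DMARC policy record. It is purely illustrative: a real checker would first fetch the TXT record at `_dmarc.<domain>` via DNS, whereas here the record string is hard-coded, and the function names are invented for this example.

```python
# Illustrative sketch: classify a DMARC TXT record the way a compliance
# checker might. The record string is supplied directly; a real tool would
# obtain it from a DNS TXT lookup at _dmarc.<domain>.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC record ("v=DMARC1; p=reject; ...") into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def dmarc_verdict(record: str) -> str:
    """Return the policy: 'reject'/'quarantine' are enforcing, 'none' is
    monitor-only, and anything else (including non-DMARC records) is invalid."""
    tags = parse_dmarc(record)
    if tags.get("v") != "DMARC1":
        return "invalid"
    policy = tags.get("p", "")
    return policy if policy in ("reject", "quarantine", "none") else "invalid"

if __name__ == "__main__":
    print(dmarc_verdict("v=DMARC1; p=reject; rua=mailto:dmarc@example.org"))  # reject
    print(dmarc_verdict("v=DMARC1; p=none"))  # none
```

A monitor-only (`p=none`) record passes syntactically but offers no enforcement, which is why tools in this space typically distinguish mere presence of a record from an enforcing policy.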

In the procurement and supply chain management domain, the Forum Standaardisatie think tank has been actively promoting the use of open standards, advocating for more interoperability. With 25 members from government, businesses and science, the forum advises governments on the adoption of open standards, emphasizing their importance in promoting information exchange, ensuring interoperability, security, accessibility and vendor neutrality.

The Dutch government has pursued a three-fold strategy to promote open standards. Firstly, it implemented a "comply or explain" list of 40 open standards, carefully researched and developed in consultation with experts. This has led to increased adoption, particularly in areas such as internet and security, document management, and administrative processes like e-invoicing. Government entities are mandated to use these standards and must report when they do not.

Secondly, the government has fostered national and international cooperation, facilitating workshops on modern email security standards within the EU, and engaging with prominent vendors and hosting companies such as Cisco, Microsoft, and Google. They have also facilitated the reuse of internet.nl code in various projects, such as aucheck.com and top.nic.br.

Finally, the Dutch government actively monitors the adoption of open standards, evaluating tenders and procurement documents and ensuring that the standards are included. Reports are submitted to the government, and efforts are made to support and guide vendors who may be lagging behind in the adoption of these standards.

Lessons learned from these efforts emphasize the importance of consistently checking for open standards in procurement processes and providing guidance and support to encourage their usage. The comprehensive approach taken by the Dutch government, along with collaborations with various stakeholders, has contributed significantly to the wider adoption and implementation of open standards, fostering a more secure and interconnected digital environment.

Procurement and Supply Chain Management and the Business Case

Wout de Natris and Mallory Knodel elaborated on the role of the Internet Standards, Security, and Safety (IS3C) dynamic coalition in enhancing internet security and safety through various initiatives. The coalition has established three working groups - Security by Design in the Internet of Things; Education and Skills; and Procurement, Supply Chain Management and the Business Case - aiming to contribute to a more secure online environment.

Their ongoing projects involve the deployment of DNSSEC and RPKI, exploring emerging technologies, and addressing data governance and privacy issues. They strive to persuade decision-makers to invest in secure internet standards by developing a persuasive narrative incorporating political, economic, social, and security arguments. The Procurement and Supply Chain Management and the Business Case working group has released a comprehensive report comparing global procurement policies, shedding light on existing practices and advocating for more transparent and secure procurement processes.

The coalition highlights the need for greater recognition and integration of open internet standards into government policies, emphasizing the importance of universal adoption of standards for data protection, network and infrastructure security, website and application security, and communication security. They aim to provide decision-makers and procurement officers with a practical tool that includes a list of urgent internet standards to guide their decision-making and procurement processes.

By focusing on streamlining and expediting the validation process for open internet standards in public procurement, the coalition seeks to enhance procurement policies, resulting in more secure and reliable digital infrastructure. Overall, their collaborative efforts and initiatives aim to create a safer online landscape for individuals, organizations, and governments by promoting the secure design and deployment of internet standards and advocating for the adoption of open internet standards in government policies.

The report from is3coalition.org highlights a concerning trend where governments fail to recognize the critical components that enable the internet to function effectively. This issue has been a recurring question in various research endeavors, prompting the Working Group (WG) to prioritize and compile existing security-related internet standards and best practices in the field of ICT.

Best practice awards go to: the GDPR in the European Union, which provides common understanding and harmonization with regard to the security of information systems; the Dutch Ministry of the Interior and Kingdom Relations, which mandates standards deployment - the ‘Pas toe of leg uit’ (comply-or-explain) list of the Dutch Standardisation Forum is a document containing 43 open standards that all governments in the Netherlands have to demand when procuring ICT; and Internet.nl, the tool used to track an organization's standards adoption based on three indicators: website, email and connection. The software has been adopted in Australia, Brazil, Denmark and Singapore.

IS3C provides decision-makers and procurement officers involved in ICT procurement with a list containing the most urgent internet standards and related best practices. This assists them in taking internet security and safety requirements into account and procuring secure-by-design ICT products, services and devices, making their organizations as a whole more secure and safer. By raising awareness and emphasizing the significance of internet security and safety requirements, the report seeks to prompt officials to consider and integrate these crucial standards into their operational frameworks.

To gather insights and perspectives on this critical issue, the coalition is conducting a consultation on the report until November 5th at 10:00 UTC. This consultation aims to engage stakeholders and experts to discuss and address the challenges associated with the recognition and implementation of internet security standards by governments.

Report: https://is3coalition.org/docs/is3c-working-group-5-report-and-list/

Perspectives from India

There are many examples of good efforts and effective tools enhancing internet security. One of these examples comes from India. Mr. Satish Babu highlighted that the Trusted Internet India Initiative was initially established at the India School of Internet Governance (inSIG) in 2016 and has since 2018 been collaborating with the Global Forum for Cyber Expertise.

InSIG organized GFCE’s Internet Infrastructure Initiative (Triple-I) Workshop in 2018, 2019, 2022 and 2023 as Day 0 events of inSIG. The Triple-I workshop seeks to “...enhance justified trust in the Internet” by building awareness and capacity on Internet-related international standards, norms and best practices. In its 2023 edition, the Triple-I workshop announced a new initiative that attempts to measure periodically the compliance of Indian websites, DNS and email services to modern security standards (to begin in 2024).

During the T3I workshop, it was emphasized that digital technology plays a crucial role in fostering India’s growth. The digital public infrastructure, which serves over a billion citizens, facilitates applications related to financial health, logistics, and more. However, the workshop shed light on the existing weak levels of compliance within these systems. In response to this observation, volunteers associated with T3I conducted extensive research to identify areas of improvement.

Building on their research findings, the initiative now plans to conduct comprehensive testing and disseminate the results to all stakeholders. The aim of this effort is to enhance compliance levels across Indian digital platforms, ensuring that they meet modern security standards and contribute to a safer and more secure digital environment. 

Perspectives from Brazil

Mr. Flavio Kenji Yanai and Mr. Gilberto Zorello shared their experiences from a Brazilian perspective. The Brazilian Network Information Center (NIC.br) is a non-profit civil entity that has, since 2005, been responsible for the administrative and operational functions related to the .br domain. NIC.br is actively investing in various actions and programs to improve internet services across different sectors. Their initiatives are geared towards disseminating knowledge and best practices, contributing to a safer and more secure internet environment in the country.

A key project they are currently undertaking is the TOP Teste os Padrões (Test the Standards) tool, which was initiated in December 2021 and builds on Internet.nl, provided by the Dutch government. As part of the Safer Internet program, their objectives include providing support to the internet technical community. This involves collaborating with various groups to develop technical teaching materials and promote good practices aimed at raising awareness within the technical community. Their efforts have yielded positive results, as statistics indicate a reduction in misconfigured IP addresses.

Furthermore, they have implemented the Mutually Agreed Norms for Routing Security (MANRS) in Brazil, leading to a notable increase in the number of participants. The statistics reflect continuous improvements in various aspects of internet security within the country. With significant incumbents responsible for approximately 50% of the internet traffic in Brazil, the implementation of version 1.7 of internet.nl, currently in the validation phase, has been instrumental. The tool is being widely disseminated in conjunction with the Program for a Safer Internet, with government entities also starting to use it to test their websites and email services. The TOP tool has proven to be of immense value in fortifying the internet infrastructure in Brazil.

IGF 2023 WS #109 The Internet in 20 Years Time: Avoiding Fragmentation

Updated:
Avoiding Internet Fragmentation
Key Takeaways:

There is a level of Internet fragmentation today, which manifests at the technical, regulatory and political levels. There is a chance, however, to act upon present and future fragmentation and design the future Internet we want. We should consider incentives (where economics has played a central role), think about how to find convergence, design a future Internet for people, and be ready for this debate to be impacted by geopolitics and the climate crisis.


To get to a best-case future scenario, we should take an incremental, iterative approach to devising solutions (including regulation); our actions should have a compass and be principles-based (openness and permissionless innovation emerged as central guiding principles); and we should strive for inclusivity in governance and standards, take guidance from human rights frameworks, and engage actively in difficult areas where there is tension or “chaos.”

Session Report

This workshop proposed to discuss Internet fragmentation through a forward-looking exercise. The session opened with the moderator inviting the panel and audience to think of the Internet in 2043, what good would look like, and what it would take to achieve the future we hope for.

The panellists started off by sharing their thoughts on what imagining the future entails, based on past experience.

Olaf Kolkman from the Internet Society highlighted that it is hard to predict the future and which technologies will triumph, exemplifying with his erroneous prediction that webpages would not go beyond academic libraries. Sheetal Kumar from Global Partners Digital spoke about the ubiquity of smartphones and connectivity as a crucial development and, in looking to the future, encouraged the audience to think about what we want the Internet to feel like; she believes the internet will continue to grow in embeddedness and finds that how the internet evolves will depend on what Internet we choose to create. French Ambassador for Digital Affairs, Henri Verdier, who created his first web-based company in the 90s, shared a story about how he erroneously predicted that Wikipedia would fail to take off. Professor Izumi Aizu from Tama University mentioned that we are oftentimes overly optimistic about the future, which in reality may be composed of different shades and colours; the future is bound to surprise us with unpredictable events like Fukushima or the unfolding conflict in Gaza. Lorraine Porciuncula from the Datasphere Initiative spoke of being a digital native, and of the optimism felt during the Arab Spring. She recalled the sense of opportunity and “capability” brought by technology. Time showed that there are good and bad aspects to technology, yet she encouraged the audience to reconnect with a sense of optimism.

The moderator introduced the discussion paper submitted as part of the session (https://dnsrf.org/blog/the-internet-in-20-years-time-what-we-should-hav…), which lays out three potential future scenarios:

  • Scenario 1: Continued Status Quo. In the first scenario, we muddle along, continuing the current course of action, and end up with an internet that continues on its present trajectory with some signs of fragmentation;
  • Scenario 2: Fully Fragmented Internet. The second scenario is one of complete fragmentation, divided at the technical, ideological or regulatory layers, or all three;
  • Scenario 3: Strengthened, non-fragmented Internet. The third scenario is one of a bright future where we get our act together.

The moderator invited the panel and audience to comment on what they see as the most likely future and why, and at what layer they see the most risk.

Olaf said that in reading the scenarios, he was struck by how the future is already here. Many of the things described in the scenarios, such as drivers for the fragmentation of the technical layers of the Internet, are already happening, and if they take off, they will splinter the internet. He explained that the value he sees in the Internet lies in its openness, the scientific method of sharing knowledge, and the ability to probe, query and scrutinise one another. He commented in particular on scenario 1, where we see a mix of closed networks coexisting with the Internet. This is about being proprietary, about the Internet being closed, about the Internet developing services that people pay for, where people connect to servers to access specific services, and the interconnectivity is less important. This is an entirely different notion from the Internet that exists to connect us to the rest of the world, where we get to choose services. To Olaf, openness is the best-case scenario, where the richness of the Internet really lies.

The moderator took a round of early comments from the audience. 

  • Barry Leiba said that what has driven the evolution of the Internet is the innovation in applications and services. He therefore thinks that a great idea for an application (perhaps yet to come) is what will drive the Internet of tomorrow, including another set of standards and technologies. He highlighted the role of standards in shaping the way we will experience technology. 
  • Andrew Campling stated that we are also at an inflection point. Up to now, the Internet was seen as a force for good. He finds we are now at the point where the balance is shifting to the Internet becoming a force for harm with the rise of disinformation and CSAM. Adding to the point on standards, he urged standards development organisations (SDOs) to become more diverse.
  • Michael Nelson from the Carnegie Endowment for International Peace came in next. He taught a class about the internet's future(s), where he highlighted to his students that the best way to understand what is coming in terms of technology is not to look at what the technology can do, or at what governments want to prevent, but rather at what users want. So we should ask ourselves: what will drive companies and governments to do better? He concluded by saying “I am a technology positivist but political negativist.”

The moderator returned to the panellists. Izumi described the first scenario of mixed networks co-existing with the Internet as a scenario of chaos. He consulted a number of AI tools on the subject of the panel and shared the findings with the audience. ChatGPT said that, while there is fragmentation due to economic and political reasons, the ethos of the Internet as a tool for global communication will likely persist. Bard was even more optimistic and said the Internet might become even more unified. He challenged the audience to think of a better internet not for the sake of the Internet itself, but for the sake of a better society, which is a different perspective on how to understand the Internet.

Lorraine, on the other hand, said that in her view we will not have an issue of fragmentation around the Internet’s technical layers, but we will have a very concrete challenge on the regulatory side. This issue is reflective not only of the fragmentation of the Internet, but of the fragmentation of society. She urged the audience to consider “how are we (as societies) going to get along? What are the incentives?” Regulators will regulate what they are scared of: they want to control national security, democratic processes, content, and so on. So when talking of regulatory-driven fragmentation, the question becomes “How will we work to find convergence?”

Ambassador Verdier said that he is uncertain which scenario will materialise, but that he knows what we should fight for. We know what the Internet brought us in terms of possibilities. Now there is great centralisation, if you look for example at submarine cables. He finds that big tech does not care for a decentralised internet, and that “we need to fight for that interconnected, free, decentralised internet.” He also reflected on John Perry Barlow’s notion of Cyberspace (https://www.eff.org/cyberspace-independence), where the Internet felt like it was somewhere far off in “cyberspace”. Now the digital is embedded in all aspects of life: education, health, and even war and peace. He finds that fragmentation of the technical layer would be an extremely bad scenario, as interdependence now holds it all together. If the internet were to fully fragment, the temptation to disconnect each other’s internets would be very high, and war would be waged on infrastructure itself. So far we have cyberwarfare, but no attempts to disconnect internets. Beyond the technical layer, there is a political and legal layer. From a legal point of view, he believes it would be better to have regulatory convergence, but if you believe in democracy, you need to respect regulatory proposals that reflect local prerogatives, as is the case in France.

Sheetal came in next and said she finds that we have the capacity to build and design our own future, even though there are power asymmetries to be aware of. She picked up on the notion of how the Internet of the future should feel: it should feel liberating, especially to those who do not occupy those positions of power. She hopes for a future Internet that does not reflect the inequalities of our society. This will require that those who build the technologies and develop the standards, open up spaces to those communities affected by technology developments. In terms of what we should do, she highlighted “we know exactly what we need to do, we just don’t do it at the moment.” There are many useful tools and guidance on how to build a better, human-rights-respecting Internet. We should utilise and leverage those in shaping the Internet of tomorrow.

The audience came in with a new round of comments:

  • Web 3 and money. Georgia Osborn picked up on money being a huge incentive on the Internet, and currently money being a massive driver for the development of blockchain technologies, Web 3.0, alternative naming systems, and cryptocurrencies. She asked the panel to reflect on whether those forces are bound to further fragment the Internet, or not.
  • Interoperable laws. Steve del Bianco from NetChoice highlighted the impact of fragmentation through regulation, and stated that regulation is the main challenge we will confront, one that is already unfolding. There appear to be no costs or consequences for governments, particularly authoritarian governments that want to control what their citizens see. He highlighted how IGF 2023 was largely about AI, but not about collaboration. “We have been hearing competing views about how it should be regulated and where it needs to go. That is not going to work transnationally.” He encouraged the audience to think of ways of documenting the cost of fragmentation, and of raising the “pain level” for bad regulatory proposals.
  • Bertrand Le Chapelle from the Internet and Jurisdiction Network also spoke about legal interoperability. He said that fragmentation is not driven by technical objectives but by politics. Legal fragmentation is a reflection of the international political system, which today is heavily influenced by notions of national sovereignty. Legal fragmentation is what prevents us from dealing with online abuse in many cases; the framework for accessing electronic evidence is non-existent or insufficient. He agreed with Ambassador Verdier that countries have a “democratic freedom/capacity” to do what they deem right for their citizens, but if we want to preserve interoperability we need to reduce friction at the legal level. He also thinks we need heterogeneous governance frameworks that allow the coexistence of government regulation, companies’ self-regulation, and other frameworks that operate independently yet are able to speak to and with one another.
  • Involvement of the global south and regions with ideological disagreement. Nikki Colosso from Roadblocks came in next. She pointed out how a lot of the conversation at IGF 2022 dealt with incorporating the global south and inclusivity. She asked the panel what specific steps companies and civil society can take to involve users from countries that are not represented in these conversations, or from countries where there are differences from a geopolitical perspective.
  • Digital Colonialism. Jerel James picked up on the issue of profit as an incentive. Money is how power gets flexed on certain communities. He asked about digital colonialism and how it may be sanctioned. Just as antitrust regulation for monopolies exists in our traditional finance system, he asked whether there are possibilities to sanction resource extraction by big tech as a means to stop digital colonialism.
  • Bad behaviour in the online realm. Jennifer Bramlet from the UN Security Council spoke next. She focuses on how bad actors exploit ICTs for terrorism, including their use to recruit and radicalise individuals. From a regulatory perspective, they look at what is considered unlawful and harmful language across jurisdictions. Looking to the future, they are concerned about crime and terrorist activity in the metaverse, and how it may be tackled going forward when regulation hasn’t yet caught up with the online criminal challenges we see today. Her question to the panel was how to deal with bad behaviour in the online realm.
  • Call not to lose sight of the social value of the Internet. Vittorio Bertola came next. He believes Europe is producing regulation precisely to preserve the global nature of the Internet, not to break it. Also, if the future of the internet is decided by what people want from it, people want entertainment and social media attention. If we focus only on that, we lose sight of the social purpose of the technology. Doing things just because we can, or just for money, is not enough.

Ambassador Verdier responded first by saying he shares the aspiration by Bertrand of interoperable legislation. But while we can work on making progress in that direction, we are not there yet. France is fighting for regulation of big tech, which they see as a private place built on the internet. In his view, “you can do that and still protect the global internet.” 

Sheetal elaborated on what we can do. On legal fragmentation, she expressed the need for harmonisation. She finds we have human rights standards to guide us, along with the rule of law and our institutions; we can use those human rights standards and guidance to shape the online space. She also seconded the need to protect the openness of the Internet and the ability to build your own apps and technology, as well as the need to protect the critical properties of the internet, which comes hand in hand with the need to make standards bodies more inclusive. She encouraged all participants to take the conversation home, to ensure that we are vocalising the values we want reflected in the Internet of tomorrow, and ensuring that those get executed. She concluded with an invitation: “Let's not be nostalgic, let’s look forward.” That requires giving users control, and not letting governments or companies determine what the future is about.

Izumi reacted to Vittorio and Bertrand. He agreed that the future of the Internet depends on the will of people and users, and that it also depends on legal frameworks. He wanted to add two further dimensions, both unknowns: climate and politics. We may still get together, hosted by the UN, in 20 years’ time, independently of how politics plays out or who wins what war. Climate change, however, is an existential threat; we may think it is a factor external to the internet, but it may well shape the future of the Internet, and may even lead to war. In the 1940s, we killed each other a lot. We then had the Cold War, and then came the Internet. Perhaps the timing was right, as the East and West were open to coming closer together. That political will is what allowed the Internet to get picked up. China wanted technology and science; that is why China accepted the Internet, to have growth, innovation and technology. Now China and India have reached the point where they do not need the West anymore. He concluded by inviting us to think not just of the Internet of the future: the question has to be how the present and future will offer something better for society.

Lorraine picked up on notions of what the Internet can do for people. She highlighted that narratives matter, so it is not about the Internet, but about the digital society. Now, when we reflect on “what is our vision for the Internet? what do we want the Internet to feel like?” she finds that we do not have a clear, shared vision. If the issue were walled gardens, we could use antitrust and competition tools to let users move to other platforms. But the truth is that with the Internet, one government can’t fix it all, so it’s all about governance. We need to focus on asking ourselves “how do we cooperate? how do we govern? What are our economic and social objectives?”

Olaf concluded by explaining that not having infrastructure at all is the ultimate fragmentation. Empowered communities, like IXPs and community networks, are the way forward; that is truly bottom-up. He also added thoughts on standardisation. When you talk about economics and standardisation, standardisation is to a large extent industry-driven and industry politics; we need to put that on the table and understand it. With economics, consolidation happens: even if you have open technologies, companies will try to extract money from using those open technologies, and you will have an accumulation of power to the point where governments might say this is too much and want to regulate it. But we need to remember that you don’t need standards for every innovation. The inventor of blockchain pursued permissionless, open innovation (he did not innovate via standards-making bodies); innovation happens today, not just in standards organisations. Asked from a technical perspective where to go in the future, he said: open architecture, so that people build on the work of others; open code, so that it can be reused; and open standards.

There was a last round of comments from the audience:

  • Yug Desai, ISOC Youth Ambassador, thinks that 20 years from now we will have fragmentation, not by design, but by default due to capacity gaps. He finds that standards are unable to keep up with the pace of innovation, and are not sufficiently inclusive of users.
  • Mark Dattysgeld highlighted the importance of open source and the role of research driving AI. He said we should ask ourselves whether that is the new paradigm that takes things forward. This point was reinforced by Lucien Taylor with the example of TCP/IP.

The session wrapped with final recommendations from the panel about what to do next:

Raul Echeberria from ALAI finds we already have a level of internet fragmentation, and we need to live with that. The incentives of policy makers are diverse, and not always driven by the search for the best outcomes for all. Our mission has to be protecting the Internet. In terms of what to do, his proposal is to go for “gradual objectives and commitments, instead of going for the whole package.” In sum, he suggests an incremental approach. He also said that in speaking to policy-makers, we need to make our messages sharper and clearer, and better outline what governments should not do. Lastly, he shared that he recently participated in a discussion with parliamentarians, all of whom were over 50 years old. They spoke about fears, but it is important that we do not develop policies based on fear; let’s not let fear stop evolution.

Lorraine reiterated the points heard so far, being clear on what the objectives are and being incremental, and added being iterative. There is no ultimate regulation that will get it right, so we need to test things and iterate. The system is hard to predict and it moves fast. We need processes and institutions that are more agile. Like in software development, we need to identify the bugs and have multi-stakeholder conversations to address them. True multi-stakeholderism works when it seeks to be inclusive in an intentional way, particularly of communities that are underrepresented.

Ambassador Verdier added that he thinks we can agree on a compass. In his view, we should stand for three aspects of the Internet’s golden age: unprecedented openness and access to information, which to date has not been fully accomplished as we still have a digital divide; unprecedented empowerment of communities and people; and permissionless innovation. He reiterated that fragmentation can come from the private sector, not just rogue states.

Olaf emphasised the point of the compass, saying our work needs to be principles-based. We need to make a differentiation between evolution OF the internet and evolution ON the Internet. We can get to those shared principles if we talk of the evolution OF the Internet. When we talk about empowerment, individualism, autonomy ON the Internet it gets more complicated to arrive at shared principles.

Sheetal added that we need to assess how governments regulate, and how companies operate from a human rights perspective. Are they human-rights-respecting? Is there accountability and transparency? Are our governance and standards bodies inclusive? She summarised her points as protecting critical properties as they evolve, adopting a principles-based approach, building on the human rights framework, and creating more inclusive spaces.

Lastly, Izumi highlighted that there were no Chinese or Indian representatives in the high-level session on AI, which to him is telling of the level of fragmentation that already exists. It wasn’t like that 18 years ago; now we have fears. He encouraged the audience to go out into the world of chaos, to engage where there is tension, and to think outside the box.

IGF 2023 Town Hall #105 Resilient and Responsible AI

Updated:
Sustainability & Environment
Key Takeaways:

Considering situations, including crises, where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes, we need to establish a discussion of resilient and responsible AI. We propose that a large complex system should be capable of maintaining or improving the value enjoyed by humans through the system in response to various changes inside and outside the system.


In order to achieve system resilience in a human-centric way, by letting humans make and embody their own value judgements, an interorganizational and agile governance mechanism is needed.

Calls to Action

The points presented above require urgent discussion and action under an international and comprehensive framework.


Broad outreach to people, including the general public, is also needed.

Session Report

At the beginning of the session, Dr. Arisa Ema (The University of Tokyo), one of the organizers, explained its purpose: to expand the concept of “Responsible AI”, an important topic of AI governance, to “Resilient and Responsible AI” by considering the possibility of situations, including crises, where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes.

First, Carly and Yui, who are pilots (operators) of the avatar robot OriHime, talked about their experiences from the user's viewpoint of the technology. Both use wheelchairs and feel the value of participating in society through the avatar robots. On the other hand, they have encountered irregular situations they could not handle because of overreliance on technology. Carly shared an experience in which a lightning strike caused a power failure while he was working at home, leaving him unable to turn on the power switchboard by himself and cutting off his communication with the outside world. Yui talked about the anxiety and unnecessary apologies that people who need assistance face in a social system that is becoming increasingly automated. In a technology-driven society, where manuals are available but not always put into practice, they realized that this assumption could break down not only in ordinary times but especially in times of disaster, and that she would then have to rely on people. The common conclusion of both stories, that the balance between technology and manpower is important and that one should anticipate that technology sometimes does not work, is instructive. Furthermore, it made us realize that the nature of a crisis can be diverse for a diverse society.

Next, Dr. Hiroaki Kitano (Sony), a researcher and executive of a technology company who is currently working on an AI project for scientific discovery, pointed out that such AI brings positive effects for human beings, but also carries risks of misuse. He also highlighted the possibility of future large-scale earthquakes in Japan and the importance of avoiding excessive reliance on AI. As society's dependency on AI increases, there is a risk that AI will not be available in incidents such as large-scale power outages, unless communication networks, stable power and PC/mobile devices remain available.

The organizers and three panelists, Dr. Inma Martinez (Global Partnership on AI), Ms. Rebecca Finlay (Partnership on AI), and Dr. David Leslie (The Alan Turing Institute), led the discussion based on the issues raised by the OriHime pilots and Dr. Kitano. Dr. Martinez mentioned the necessity of defining resilience, and emphasized that the power of technology should be rooted in the values we have learned from our families and national cultures; by doing so, empowerment can create resilience. Ms. Finlay pointed out that while assessments of AI systems before launch are discussed, attention is hardly paid to how they affect different communities after they are released. Resilience and control methods are required throughout the life cycle of AI, i.e., during the research phase and both before and after launch. Focusing on machine learning, which has been the mainstream of AI in recent years, Dr. Leslie pointed out that data-driven systems may become vulnerable in a dynamic environment. As society and culture gradually change, machine-learning systems driven by past data have limitations. He emphasized the importance of considering resilience because excessive reliance on data-driven systems may lead to stagnation in human creativity. In response to these discussions, Dr. Ema pointed out that we need to consider how technological and social perspectives on current topics such as generative AI will change. The following three points were raised by the audience.

  • The need for society to provide people with options for solutions.
  • The need for a more comprehensive impact assessment (technology, ethics, human rights, etc.) 
  • The risk of forgetting skills due to dependence on technology.

A participant then asked about AI as critical infrastructure. In response, Dr. Martinez said that AI is an infrastructure-based service and creates an unknown area for society. She mentioned the resilience of the communication infrastructure she was involved in, and introduced an example in which a specific band continues to operate even if the whole network goes down in a disaster. She also pointed out the need to consider self-repair mechanisms for AI in the event of an infrastructural outage, and how to build not only system resilience but also human resilience. Ms. Finlay, responding to Dr. Martinez, touched on the possibility that AI can be introduced in various ways with various implications, and pointed out that systems need multiple layers of resilience; the way to understand how AI systems interact within a larger system is to map the system and understand its effects. Dr. Leslie pointed out that AI is rapidly becoming an infrastructure and general-purpose technology, and that it functions as an alternative way for humans to think and act. AI is becoming a kind of utility, but if it becomes infrastructure, the question is who should control it. Dr. Ema said that it is difficult to hold individual companies accountable when AI becomes infrastructural and goes beyond the scope of a single company, and that governmental and global discussions will be required.

As a summary of the discussion, the panelists highlighted the need for AI to be safe and to have a solid foundation in society. They also emphasized the importance of defining and monitoring resilience to support society. In addition, they agreed on the necessity of international research institutions to discuss AI from scientific and technological perspectives in the face of the rapid commercialization of AI. In response to these comments, Dr. Ema concluded the discussion with the hope that all of us will work together to realize resilient and responsible AI. The session received a variety of comments. A participant from the public sector appreciated the uniqueness of the theme and the importance of the discussion, while another participant raised practical aspects, such as how to handle large and complex systems composed of multiple AI systems. It is important to continue the discussion on this topic.

 

IGF 2023 DC-IoT Progressing Global Good Practice for the Internet of Things

Updated:
AI & Emerging Technologies
Key Takeaways:
When using IoT devices and services, strong identification becomes key to protecting them from tampering. This identification may be between devices, for instance those that together provide a service or form a so-called “cyber-physical system” such as a car, a house or an airplane. When this identification is between people and devices, sufficient measures need to be in place to ensure privacy by default.

With the ongoing growth of IoT deployment throughout our world, scaling issues are important to consider. Going forward, two design imperatives need to be taken on board: (1) security by design - every device needs to be protectable (and updatable when needed); and (2) every device needs to be as carbon neutral as possible (as there will be many, including those that depend on power).

Calls to Action

Require appropriate security measures for IoT devices that can be handled by those that use them, and ensure appropriate labeling (dynamic for those devices that are software updatable) to make it possible for users to assess the risks and take the necessary measures.


Set global standards for this, as it concerns devices that are developed all over the world, and are deployed all over the world. National/regional initiatives will need to take global good practice into account.

Session Report

IGF 2023 DC-IoT Progressing Global Good Practice for the Internet of Things

The session considered IoT governance from various perspectives. To understand baseline IoT evolution, associated challenges, opportunities and responses, the IoT could best be understood as an internet of data, devices, systems or functions. For simplicity, we can call these “Internets of X” (IoX). Each perspective brings its understanding of what is possible, desirable or undesirable and tools and processes needed for governance.

Each approach must be considered in its own terms, but they start from a common base of experience and must ultimately come together to provide good governance. This leads to the need for an ecosystem comprising stakeholders such as technical experts, governments, service providers, manufacturers, users, standards bodies, military and civilian organisations, etc., with varying global and regional perspectives.

One immediate consequence is that IoT governance must respect a range of perspectives. Our fundamental principles are unlikely to be universal, especially when applied to specific IoT contexts. By analogy with the sensors and actuators of the IoT itself, governance needs to ‘sense’ the interests and perspectives of all significantly affected parties and somehow balance them to inform decisions at various levels. In other words, it requires multistakeholderism. It is not that specific expert groups (e.g., engineers) are insensitive to the needs of others (e.g., end users) but that they may misunderstand their interests, capabilities and behaviour.

The session began with a consideration of simple and recognisable use cases in which major challenges can already be seen (though they will become more complex). IoX components and their complex or hybrid assemblages will and should interact with others, so they must be identified uniquely and discovered with appropriate levels of precision, reliability, and permanence and be capable of enrolment in or separation from IoX systems. The concept of ‘identity’ has some subtlety. For instance, a smart home must be able to recognise and be recognised by new IoT components added to the system on a permanent or temporary basis, accorded the right kinds of access and privileges and tracked or remembered appropriately. These identities enable necessary functions, including the granting of trust. But they need not be unique, durable or universal. Indeed, categorical or shared identities (e.g., type certification) may be more practicable, scalable, flexible, future-proof, secure and robust to, e.g., (hardware, software or data) updates and interconnection or federation to create identifiable hybrid systems. Three subtleties linked to identity that came up in the discussion were security (including but not limited to cybersecurity), privacy (including but not limited to data privacy) and ownership (including protections against identity theft or misuse and, conversely, the use of identity to carry liability or responsibility).

Various identity schemes were discussed, ranging from central registries of semi-permanent discrete identities (along the lines of the DNS model) to purely transactional or temporary mutual authentication and identification schemes. These have advantages and drawbacks ranging from theoretical to practical, including technical, legal, commercial, security and other considerations. No single approach seemed to fit all foreseeable circumstances. In placing these in context, the panel recognised that the same concepts applied to the human beings (and organisations) that create, operate and use the IoX. For example, a person is more important than devices or data attributed to him/her, and human rights and responsibilities (e.g., of association and expression) cannot safely be extended to, say, their smart digital assistants. This cuts two ways; it may not be useful to hold a human being accountable for what their devices do in response to interactions with other systems, which the ‘user’ may not even perceive, let alone understand or control. Conversely, the automation of routine functions may result in their receiving less considered and responsible human attention, with unintended, undesirable and possibly irreversible results.
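The transactional end of the identity-scheme spectrum discussed above can be illustrated with a small sketch: two devices sharing a pre-provisioned key prove their identities to each other via a challenge-response exchange, with no central registry involved. This is an illustrative sketch only; the function names and protocol shape are our own, not those of any specific IoT standard.

```python
import hashlib
import hmac
import os

def respond(device_key: bytes, challenge: bytes) -> bytes:
    """A device proves knowledge of its key without revealing it."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def mutually_authenticate(key_a: bytes, key_b: bytes) -> bool:
    """Each device challenges the other; both must answer correctly.

    Succeeds only when both sides hold the same shared secret."""
    challenge_a, challenge_b = os.urandom(16), os.urandom(16)
    # Device A verifies B's answer to A's challenge, and vice versa.
    ok_a = hmac.compare_digest(
        respond(key_b, challenge_a),
        hmac.new(key_a, challenge_a, hashlib.sha256).digest())
    ok_b = hmac.compare_digest(
        respond(key_a, challenge_b),
        hmac.new(key_b, challenge_b, hashlib.sha256).digest())
    return ok_a and ok_b

shared = os.urandom(32)
print(mutually_authenticate(shared, shared))          # same key: True
print(mutually_authenticate(shared, os.urandom(32)))  # mismatched keys: False
```

Note the trade-off this scheme makes explicit: it requires no registry and no durable global identity, but it only establishes "holds the same secret I do", not which device this is, which is why the panel saw no single approach fitting all circumstances.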

The discussion also considered desirable properties that might provide an ethical framework for IoT governance. Many are familiar, e.g., interoperability, transparency and accountability, robustness, resilience, trustworthiness, user empowerment, privacy and security. They are not IoT-specific but may need to be reinterpreted in that context. For example, IoT devices can harvest a wide range of data almost invisibly, which creates general privacy and security risks and affects global development, e.g., via ‘data colonialism’ whereby devices originating in and provisioned by the global north can be used to capture data from users in the global south to produce innovations for the benefit of the north and to lock in users in the south in ways that inhibit their techno-societal development.

One desideratum came up in relation to technologies, service provision, use cases, data issues, labelling and certification schemes and legal frameworks: scalability. This is a generic issue, but the panel highlighted aspects that stand out clearly in the IoT context. One is complexity; as systems scale quantitatively, their qualitative properties may change and, with them, the appropriate kind of governance. Rules may need to be more general, neutral, principles- or function-based. Alternatively, governance may need to move between the data, device, software, etc., planes as systems interconnect in larger and more diverse ways. Another is practicability; effective governance may require limits on scale or interoperability. A further aspect is Quality of Service (QoS). The IoT-specific emphasis on low latency can constrain system scale, security or flexibility. Beyond this, QoS considerations may lead to multi-tier systems, which may reduce economic welfare, hinder interoperability or distort innovation. Large-scale systems may also be more susceptible to intentional or accidental compromise; effective access control in large environments may lead to inappropriate inclusions or exclusions. Under laissez-faire evolution, IoT systems may reach stable sizes and configurations, but these may not be optimal. Finally, very large systems may be difficult to govern with national or self-regulatory arrangements. For example, identification and certification schemes that identify individual devices or types scale with their number but cannot identify even pairwise interactions (which scale as the square of the number of interacting entities). As scale increases, management overloads, costs increase, and utility and use eventually decline.
This, however, depends on the governance architecture; a centralised system (analogous to the cloud) offers economies of scale (or diseconomies) and a natural platform for observing systemic behaviour and emergent threats (if not weak signals). However, it creates additional power asymmetries and vulnerabilities; no one governance architecture will likely fit all cases. The group also mentioned other aspects of scale, such as environmental impact.
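The scaling point above can be made concrete with a short sketch: per-device identification grows linearly with the number of devices n, while the number of possible pairwise interactions grows as n(n-1)/2, i.e. quadratically. The function name below is ours; the counts are simple combinatorics.

```python
def pairwise_interactions(n: int) -> int:
    """Number of distinct device pairs among n devices: n choose 2."""
    return n * (n - 1) // 2

# Device registries scale linearly, but governing interactions does not.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} devices -> {pairwise_interactions(n):>12,} possible pairs")
```

Ten thousand devices yield roughly fifty million possible pairs, which is why schemes that certify individual devices cannot simply be extended to certify interactions.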

Another aspect that ran through the various phases of the discussion was trust and trustworthiness; beyond the customary discussion of e-trust, the panel contrasted high-trust and Zero-trust approaches to the problems of identification and interoperability.

The issue of AI in the IoT came up often but was not explored in depth. The panel recognised that it complicates the IoT, especially when considering smart devices and the emergent intelligence of connected systems. Foreseeability and explicability were discussed, as was the possibility that data-driven systems might be particularly vulnerable to noisy or biased data.

The panel considered various legal approaches and the ‘regulatory game’ being played out among countries, industries and civil society groups. Governance competition could spur the development of innovative and effective standards if different approaches can be compared and a suitable global standard emerges through a kind of ‘Brussels Effect’. This seems more promising than a too-rapid imposition of global standards and regulations whose implications cannot be foreseen. However, this result is not guaranteed; we could see damaging fragmentation or a rich diversity of approaches matching different contexts. Research on policy initiatives in 40 countries around the world shows that governments often do not regard modern global open source standards and global good practices with security at the core as “important”. It was suggested that governments could lead the way by taking such standards actively on board in their procurement activities. Keeping the discussion going and actively engaging with other DCs guarantees a positive outcome and an increased understanding of good global practices in IoT governance. Three important takeaways:


  • IoT data, especially AI-enhanced, should be understandable, accessible, interoperable, reusable, up-to-date and clear regarding provenance, quality and potential bias.
  • At the level of devices, there need to be robust mechanisms for finding, labelling, authenticating and trusting devices (and classes of devices). These should survive retraining, replacement or updating but be removable when necessary for functional, security or privacy reasons. To ensure IoT functionality, trustworthiness and resilience, market information and incentives should be aligned. Labels provide a powerful tool; many countries have developed and adopted IoT trust marks, and the time has come to start working towards their international harmonisation.
  • Functions are not all confined to single devices, designed in or provided by system integrators; they can also be discovered by end-users or emerge from complex system interactions in cyber-physical systems (CPS) and IoT-enabled services. Governance requires methods for recognising, protecting and controlling these functions and their impacts.


IGF 2023 DCNN (Un)Fair Share and Zero Rating: Who Pays for the Internet?

Updated:
Avoiding Internet Fragmentation
Key Takeaways:

Large platforms generate enormous amounts of traffic but, at the same time, contribute to network infrastructure costs, e.g. by building undersea cables or content delivery networks. The most traffic-intensive platforms have been zero-rated for almost a decade by most operators of the world, including those currently proposing “fair share” contributions, and in most Global South countries zero-rating models are still very common.

Calls to Action

More comprehensive analysis of the interconnection market is needed, including an assessment of the role of content delivery networks. Increased multistakeholder dialogue is needed to foster a better understanding of the issues at stake and of whether proposed solutions such as fair share are de facto needed or not.

Session Report

 

The purpose of this session was to explore the so-called “fair share” debate, which is rising in popularity especially in the European Union and South Korea and is moving rapidly to Latin America. The session also discussed the connection between fair share and zero-rating schemes, which are especially popular in the countries of the Global South.

 

The session adopted an evidence-based approach, featuring multiple stakeholder perspectives and discussing to what extent fair share and zero rating can be beneficial for the internet economy and whether they contribute positively or negatively to the sustainability of the internet ecosystem.

 

Furthermore, panelists explored how these two core issues connect with the broader debate on Internet openness vs Internet fragmentation. The session was structured according to the following agenda:

 

Brief intro by Luca Belli, Professor and Coordinator CTS-FGV (5 min)

 

First slot of presentations (6 or 7 minutes each)  

  • Artur Coimbra, Member of the Board of ANATEL, Brazil  
  • Camila Leite, Brazilian Consumers Association (IDEC)
  • Jean-Jacques Sahel, Asia-Pacific Information policy lead and Global telecom policy lead, Google  
  • KS Park, Professor, Korea University
     

Q&A break (10 to 12 minutes)

 

Second slot of presentation (6 or 7 minutes each)

  • Maarit Palovirta, Senior Director of Regulatory Affairs, ETNO
  • Thomas Lohninger, Executive Director, Epicenter.works
  • Konstantinos Komaitis, non-resident fellow, the Atlantic Council  

 

 

Participants stressed that, over the past decade, we have witnessed increasing concentration in a few internet platforms, with regard to social media and cloud computing, and such players generate a very significant percentage of internet traffic. There is a wide range of ongoing regulatory initiatives aimed at framing large platforms, but over the past two years an additional type of regulatory proposal has been surfacing: imposing network fees on large platforms so that they pay their “fair share” of network-related costs.

 

In countries such as Brazil, 95% of users utilize internet access primarily for instant messaging and social media (e.g. WhatsApp, Facebook and Instagram are installed on 95%, 80% and 70% of Brazilian smartphones, respectively), and virtually all video recordings shared online in Brazil are hosted on YouTube.

 

Large platforms generate enormous amounts of traffic but, at the same time, contribute to network infrastructure costs, e.g. by building undersea cables or content delivery networks.

 

The most traffic-intensive platforms have been zero-rated for almost a decade by most operators of the world, including those currently proposing “fair share” contributions, and in most Global South countries zero-rating models are still very common.

 

Some relevant points that need debate and clarification:

 

1) Large platforms generate a lot of traffic because they have a lot of customers, not because they engage in any illegal or inappropriate practice. It is true that in most countries they face extremely low levels of taxation compared with their profits, but to cope with this distortion it would be much wiser to review their taxation regime rather than simply shift part of their revenues to internet access providers.

 

2) Some regulators and operators have portrayed large platforms as free riders on internet infrastructure. This is not correct, as platforms also invest enormously in infrastructure, e.g. by building submarine cables and large content delivery networks that are essential to maintaining good quality of service and a good user experience.

 

3) Participants stressed that the topics of fair share and zero rating are connected, as large platforms have not become responsible for such an enormous amount of traffic by chance: the most traffic-intensive apps have been zero-rated for almost a decade by most operators of the world, as demonstrated by an empirical analysis that was the annual output of this coalition already in 2018.

 

Actions suggested:

 

More comprehensive analysis of the interconnection market is needed, including an assessment of the role of content delivery networks.

 

Increased multistakeholder dialogue is needed to foster a better understanding of the issues at stake and of whether proposed solutions such as fair share are de facto needed or not.

IGF 2023 DC-PAL Public access evolutions – lessons from the last 20 years

Updated:
Digital Divides & Inclusion
Key Takeaways:

There is an increasing disconnect between trends in connectivity and real-world outcomes, even on the basis of the limited data that we have. There is a strong need to invest in stronger data collection as a basis for meaningful internet governance decision-making


Public access, as a multipurpose means of helping people make the most of the internet, has proven itself an adaptable and effective means of achieving people-centred internet development. It has proved its worth in the face of shocks, in allowing engagement with new technologies, and as a means of localising digital inclusion policies.

Calls to Action
The full potential of public access as a way to address the decoupling of progress in extending connectivity from broader social progress needs to be part of internet strategies going forward
Session Report

Evolutions in Public Access

 

It has been 20 years since the WSIS Action Lines were defined, setting out the importance of connecting libraries and providing multifunctional public access centres. This fitted into a broader strategy focused not only on finding rapid and effective ways of bringing the potential benefits of the internet to more people, but also on acknowledging the importance of a focus on people in order to turn this potential into reality.

 

The introduction to this session therefore set out the question of how public access as a concept has evolved over the past 20 years, as a basis for assessing its continued relevance and understanding how its place in the wider internet infrastructure has changed. It drew on written contributions shared by UNESCO and the Internet Society in particular, which noted that public access had been proven not to compete with other forms of connectivity, that libraries had proven to be adaptable and responsive, that public access had been a basis for service innovation and partnership, and that the fact of offering other services made libraries particularly valuable as public access venues.

 

Maria Garrido and Matias Centeno (University of Washington) set out the challenge faced, based on data collected as part of the Development and Access to Information report. Crucially, this underlined that good progress in general in bringing people online was not being reflected in other areas seen as vital for making access to information meaningful, in particular around equality and fundamental rights online. This illustrated the potential weaknesses of a tech-only approach.

 

Ugne Lipekaite (EIFL) offered a rich set of evidenced examples of how public access had proven its ability to help solve wider policy challenges, as well as its ongoing essential role in working towards universal connectivity. It had, indeed, been a driver of entrepreneurship and growth.  Crucially, many of the same trends could be observed in very different parts of the world, opening up possibilities for mutual learning in terms of how to develop public access most effectively.

 

Woro Titi Haryanti (National Library of Indonesia) described how public access was at the heart of a national strategy to develop library services as a means of improving lives. Centrally, the emphasis was on ensuring connectivity, providing adaptable content and building staff skills in order to develop programming that could combine public access with other support (including via partners). Thanks to this work, the library was increasingly seen as a partner for wider social development programming.

 

Don Means (Gigabit Libraries Network) underlined that libraries were often early adopters of new technology, providing a means for people not just to get to know the internet, but also new ways of working with it. They had also proven their role in connecting online services with users, for example to ensure that those needing to use eGov services were able to do so. They also offered a crucial backstop of parallel access technology, which boosted resilience.

 

The audience was then asked to share views via Mentimeter. They underlined their agreement with the idea that public access had a key role in the connectivity infrastructure and in future strategies, as well as broadly believing that public access complements other forms of connectivity.

 

 

Key themes that emerged in the discussion included:

  • Public access had proved a structure for delivering on the promise of the localisation of the internet and digital inclusion efforts in particular. Rather than a purely tech-led, supply-side approach, public access centres allowed supply and demand to meet effectively and inclusively.
  • The definition of meaningful access in general needed to include access to meaningful support services for those who needed them in order to make the most of the internet.
  • It was important to develop wider internet resilience strategies, in order to keep things going in times of disaster. Public access was a key part of this.
  • We needed to change the narrative about libraries in particular, and recognise (inside the library sector and outside) their role as agents for digital inclusion.
IGF 2023 Town Hall #134 The Digital Knowledge Commons: a Global Public Good?

Updated:
Data Governance & Trust
Key Takeaways:

The digital knowledge commons make a key contribution to what the internet is, with strong potential for growth, through AI, opening collections, and more inclusive practices

Calls to Action

We need to stop regulating the Internet as if it was only made up of major platforms – this risks harming public interest infrastructures

Session Report

Safeguarding the Knowledge Commons

 

As an introduction to the session, the moderator underlined that while shared knowledge resources had initially been included in definitions of digital public goods, they were not such a strong focus of subsequent initiatives. In parallel, UNESCO’s Futures of Education report had placed the concept of a Knowledge Commons at the centre of its vision, seen as a body of knowledge which is not only accessible to all, but to which everyone can make contributions.

 

Finally, organisations working around knowledge had long promoted the importance of realising the potential of the internet to enable global access to knowledge, and address barriers created in particular by intellectual property laws.  

 

Tomoaki Watanabe (Creative Commons Japan) underlined the particular questions raised by new technologies, and in particular AI, thanks to the generation of new content that could potentially be free of copyright (3D data, scans, AI-generated content). This had the potential to create dramatic new possibilities that could advance innovation, creativity and beyond.

 

While there clearly were questions to be raised around information governance and AI (not least to highlight AI-generated content), copyright appeared to be a highly inadequate tool for doing this.

 

Amalia Toledo (Wikimedia Foundation) highlighted the connection between the concept of the knowledge commons and the need for digital public infrastructures that favour its protection and spread – something that was ever more important. Wikimedia represented just such an infrastructure, but remained the only such site among the most used on the internet, with a constant risk of underfunding.

 

Moreover, laws were increasingly made with a focus on commercial platforms, but caused collateral damage for non-commercial ones such as Wikipedia. Efforts to expand intellectual property laws brought particular risks when they failed to take account of the positives of a true knowledge commons.

 

Subsequent discussion highlighted the following issues:

  • The knowledge commons as a concept raised interesting questions about governance, and in particular how to ensure that it was inclusive and meaningful for everyone. There was a need for actors applying rules, such as Wikipedia and libraries, in order to make it functional and sustainable.
  • The need to look beyond copyright as a tool for regulating information flows, given how blunt an instrument it is, and in particular to take care in making decisions in the context of AI. Too often, generative AI was mistaken for all AI, and policy choices risked imposing major costs even on research and education uses.
  • The value of a more holistic approach to upholding the knowledge commons in general, and the public domain in particular, in order to safeguard them and realise their potential to support wider efforts to ensure that the internet is a driver of progress and inclusion.
IGF 2023 Day 0 Event #161 Towards a vision of the internet for an informed society

Updated:
Digital Divides & Inclusion
Key Takeaways:

Importance of localization - if we want to promote inclusive internet we need to localize our approaches


Libraries are natural partners for any actor in the Internet inclusion space

Calls to Action

People should reassess their mindset about libraries and see them as tech test beds, key sources of content and community infrastructures

Session Report

As awareness grows of the limitations of a purely technological definition of connectivity, as well as of the complex economic, social and cultural implications of the increasing ubiquity of the internet, the need to find a way to realise the goal of a human-centred internet grows. This session drew on the experience of libraries around the world as institutions (staffed by a profession) focused on the practicalities of how to put people in touch with information, and to help them use it to improve their lives. 

Winston Roberts (National Library of New Zealand (retd)) set the scene, highlighting the place of libraries in the original WSIS Agenda, which of course included strong reference to connecting libraries and the value of multi-purpose public access centres. He highlighted that while 20 years had passed, the evolution of the internet had only underlined the importance of having institutions like libraries in order to support universal and meaningful use, as part of a broader approach to internet governance. Thanks to this, it was not only possible to deal with the worst excesses, but also to unlock some of the potential that the internet creates in order to achieve goals around education, social cohesion and beyond. 

Nina Nakaora (International School of Fiji) highlighted the work that libraries had done in particular during the pandemic in order to provide access to learning materials. Again, this illustrated the value of having actors in the wider internet system focused on ensuring that public interest goals were achieved, especially where the market was unlikely to create solutions. She highlighted that, at the same time, to play this role there was a need for libraries to benefit from investment in hardware, connectivity and skills to deliver this.

Rei Iwaski (Notre Dame University, Kyoto) reflected on the Japanese experience of providing information services through libraries. She echoed the point made by Nina Nakaora that this is a potential that can only be realised when libraries are integrated into wider planning. Their cross-cutting missions meant that they often did not fit easily into any one policy box, and also needed to build their own sense of agency as actors in internet governance.

Misako Nomura (Assistive Technology Development Organisation) highlighted the particular situation of users with disabilities. Once again, this illustrated the need to move beyond a laissez-faire approach, and to look at how to connect people with opportunities. Her work included both developing materials for persons with disabilities and ensuring access to technology and wider support. With an ageing population, finding ways to bridge accessibility gaps would be an increasingly important part of wider digital inclusion efforts, and so a strong and properly resourced set of institutions to do this would be essential. 

Woro Titi Salikin (National Library of Indonesia) brought practical examples, again, of the power of facilitating institutions such as libraries in helping people make the most of internet connectivity to deliver real-world change, focused in particular on gender inclusion and supporting entrepreneurship. The Indonesian experience demonstrated that it is possible to make change happen at scale through the right balance of centralised support and local flexibility to adapt services to circumstances.

The subsequent discussion highlighted the following key points:

- the need to integrate libraries into wider strategies in order to realise their potential. Indonesia offered a strong example, with the close connection between the national library as coordinator of a wider network and central government. Elsewhere, this wasn't the case, and opportunities were being missed

- the fact that librarians too often lacked the sense of agency and skills necessary to fulfil their potential as facilitators of digital inclusion. The sector was at risk of remaining in traditional roles, especially when partnerships with other actors could not be formed. There was a need to build awareness of the responsibility that libraries have in the digital world

- the fact, nonetheless, that libraries do have a unique and flexible role in society which could be mobilised to support a wide range of different agendas

Collectively, the conclusions pointed in the direction of the need to reaffirm the role of libraries, both as a means of activating libraries and librarians themselves, and to state the case for libraries' place both as actors in internet governance processes and as partners for delivery. This is at the heart of IFLA's Internet Manifesto Revision, currently underway, to which all participants were invited to contribute.

 

IGF 2023 DC-CIV Evolving Regulation and its impact on Core Internet Values

Updated:
Avoiding Internet Fragmentation
Key Takeaways:

1. The Internet has been self-organising, with as little regulation as possible for it to work, and if strong regulation is introduced it will hinder its technical functioning. Too much regulation will damage interoperation. As Internet networks evolve into space, with no borders, there are question marks as to how its Core Values will be sustained.


2. One of the major policy tensions in digital life pits anonymity against accountability. Anonymity has been a key aspect of internet activity, but we have painfully learned that full anonymity can be exploited in ways that allow bad actors to escape being held accountable for the harms they cause. Systems must be developed to bring accountability without compromising essential anonymity - and layering identity levels is one way to do it.

Calls to Action

- The Internet community including the private sector, civil society, technical community should actively engage with governments to make them understand why a multistakeholder IGF is important.


- Use of encryption needs to continue - as without encryption many of the functions of the Internet's safety will be negatively impacted.

Session Report

 

DC-CIV Evolving Regulation and its impact on Core Internet Values

Report on the Internet Governance Forum (IGF) Session.

Main Report

The Core Internet Values comprise the technical architectural values by which the Internet is built and evolves, as well as the 'social', or universal, values that emerge from (or derive from) the way the Internet works.

The Internet is a global medium open to all regardless of geography or nationality. It is interoperable because it is a network of networks. It does not rely on a single application; it relies on open protocols such as TCP/IP and BGP. It is free of any centralized control, except for the needed coordination of unique identifiers. It is end-to-end, so traffic passes from one end of the network to the other. It is user-centric, with users in control of what they send and receive, and it is robust and reliable.
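The end-to-end property described here can be sketched in a few lines: the path between endpoints only carries bytes, while all interpretation happens at the edges. In the sketch below a local socket pair stands in for a network path; this is an illustrative simplification, not a model of real routing.

```python
# Minimal sketch of the end-to-end principle: the "network" moves
# bytes, and all application meaning lives at the two endpoints.
import socket

a, b = socket.socketpair()       # two "ends" of a connection
a.sendall(b"hello, internet")    # the sender defines the content
data = b.recv(1024)              # the receiver interprets it
print(data.decode())
a.close(); b.close()
```

The network path itself never needs to understand the payload, which is why innovation at the edges does not require permission from the middle.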

The Dynamic Coalition on Core Internet Values has held sessions at every previous IGF. During the 2023 IGF in Kyoto, the Coalition discussed the topic of "Avoiding Internet Fragmentation" with "International Legal Perspectives" as the sub-theme, all part of this year's "Internet We Want".

The following questions were examined during the session:

  • In a changing world and a changing Internet, should the Internet stick to its Core Values?
  • Is more legislation needed? If yes, how should it be drafted?
  • What are the risks of "changing" the Core Internet Values for the future of the Internet?
  • Could we end up with fragmentation? With the end of the Internet as we know it?
  • Could we end up with a better, safer, cleaner post-Internet network of networks? Is this achievable or is this a pipe dream? Does this have an impact on democracy across the world?

Panelists included Lee Rainie, Jane Coffin, Nii Quaynor, Iria Puyosa and Vint Cerf, with interventions from the floor moderated by Sébastien Bachollet, Co-Chair at Kyoto together with Olivier Crépin-Leblond.

Deliberations

The deliberations at this meeting, comprising panelists' presentations, participant interventions and Q&A, are reported here without attribution to the specific panelist or participant.

Broadly, there have been four notable 'phases' that could be seen as 'revolutions' in the Internet's evolution:

  • Home broadband. It sharply increased the "velocity of information" into people’s lives, bringing support for the way it democratized creativity, story-telling and community building. But it also spawned concern about misinformation, for example, in the medical community – and concern about the type of content to which children might be exposed. 
  • Mobile connectivity. Mobile phones became ubiquitous and became all-purpose “extra body parts and brain lobes” that allowed people to reach out and be contacted at any time, anywhere, without the need for knowledge on how to operate a computer. But a backlash grew about the ways in which phones disrupted people’s time use and attention allocation.
  • Social media.  Exposed users to new information and allowed them new ways to share their lives and create. The backlash has focused on the impact of social media on people’s emotional and mental health (especially for younger women and girls), the way social media can be used for information war purposes, enabled political polarization and tribalism, and menacing behavior like bullying and physical threats.
  • Artificial intelligence. Often functioning unnoticed and uncommented upon, AI allowed people to live their lives more conveniently, efficiently and safely. It promised productivity increases. But the backlash starts with people’s inherent wariness of anything that might challenge their rights, their autonomy and their agency. There are widespread concerns about job loss, bias and discrimination, and whether AI can be used ethically. 

It is worth noting that these and other concerns have mostly arisen at the level of applications, rather than the essential architecture of the Internet. Unfortunately, the concerns at the cultural, legal and social level usually drive policy deliberations that could limit the way the Internet functions.

Users almost unanimously support the Core Values of the Internet: open, free, secure, interoperable, end-to-end, permissionless innovation. Yet the revolutions above, and the backlash they engendered, complicate that support.

Beyond those general concerns about digital functions, there is evidence that different people have different experiences of those revolutions. Those group differences drive concerns and calls for further regulations. At the group level, it is clear that divisions by gender, age, race/ethnicity, class, nationality and religious affiliation affect people's online experiences. There are also divisions along the lines of people's level of awareness and knowledge about technology, and their individual traits cause them to experience and react to technology differently.

To further complicate the picture, it is clear that individual technology users act in different ways under different circumstances. They are not necessarily predictable and their actions are often contingent, transactional, and context specific. This makes it very hard for those designing policies to take into account the variety of ways people will use technology or have concerns about its impact on them.

In global surveys and other research, there is a division that pits individuals against society. Individual actors are often confident that they can navigate the problems of information and communication ecosystems, but believe that others are incapable of doing so. The result is an almost universal sense that "I'm OK, but the rest of the world is not".

How should policy makers understand that and take account of such an array of social, cultural, and legal variance as they try to think about regulations for the Internet? It is a chaotic picture that suggests that policy proposals affecting the basic functioning of the Internet should be undertaken with great caution and much humility.

The Internet has been self organizing its network of networks with as little regulation as possible for them to work. There is a lot of support for this self-organization on the network level even though in some cases the shared objective of developing networks for people who do not yet have access appears to have been lost.

Regulate

Caution is advised when facing pressure to “regulate fast... because some serious harm is upon us". Quick and ill-designed regulations may undermine online freedoms or lead to Internet fragmentation.

Before regulating, it is necessary to assess the tradeoffs of different policies as well as the suitable technical implementations of those policies.

Unfortunately, pressure to legislate is driven by public opinion on harms - often emphasized by governments to impose legislation. Law enforcement requests for access to private communications, national security, and cyber-sovereignty agendas dominate public debate in most countries.

The Internet will not be the same if it is run in a non-open way, as we can see in countries where there is a zeal to pass laws to "protect the interests of the regimes".

The intent behind such laws may originally have been laudable, but they may also have side effects.

For instance, we observe this problem in legislation threatening end-to-end encryption in the name of providing more safety for children online, legislation establishing widespread Internet surveillance under the pretext of rising concerns about violent extremism, cyber-sovereignty agendas undermining net neutrality, and cybersecurity policies that pose a risk to interoperability. 

Technical solutions to online harm must ensure respect for human rights and the rule of law in line with the principles of necessity and proportionality. Any restriction of access to the Internet must be lawful, legitimate, necessary, proportional, and non-discriminatory.

Civil society and the Internet technical community must continue collaborating in facing overregulation trends threatening Internet Core Values.

Some participants in the meeting pointed to further study of countries like Finland and Estonia, which have advanced in terms of e-government. It was also mentioned that the borderless nature of the Internet would expand with more widespread use of "satellite Internet" and Internet Exchange Points in space, bringing a new perspective on cross-border issues.

Key Takeaways 

  1. The Internet has been self-organizing with as little regulation as possible for it to work, and if strong regulation is introduced it will hinder its technical functioning. Too much regulation will damage interoperability. As Internet networks evolve into space with no borders, there are question marks as to how its Core Values will be sustained.
  2. One of the major policy tensions in digital life pits anonymity against accountability. Anonymity has been a key aspect of Internet activity, but we have painfully learned that full anonymity can be exploited in ways that allow bad actors to escape being held accountable for the harms they cause. Systems must be developed to bring accountability without compromising essential anonymity - and layering identity levels is one way to do it.
    Such systems must be designed with clear and minimal implications for deep architectural changes. A layered approach (possibly in the application layer) may be desirable. 
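The layered-identity idea in these takeaways can be illustrated with a toy sketch. A user acts under context-specific pseudonyms at the application layer, while a separate escrow record (not shown) could let a designated authority re-link a pseudonym to a real identity when accountability demands it. All names and parameters below are hypothetical; this is a sketch of the concept, not a vetted scheme.

```python
import hmac, hashlib, secrets

def pseudonym(identity: str, context: str, user_key: bytes) -> str:
    # Same user + same context -> a stable handle; a different
    # context -> an unlinkable handle, preserving everyday anonymity
    # while keeping a single root identity behind the layers.
    msg = f"{identity}|{context}".encode()
    return hmac.new(user_key, msg, hashlib.sha256).hexdigest()[:16]

user_key = secrets.token_bytes(32)
forum_handle = pseudonym("alice@example.org", "forum", user_key)
market_handle = pseudonym("alice@example.org", "market", user_key)
assert forum_handle != market_handle  # unlinkable across contexts
```

An accountability layer could then store the handle-to-identity mapping encrypted to a supervised escrow key, so that de-anonymization remains possible but exceptional, without any change to the Internet's underlying architecture.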

Call to Action

  1. All stakeholders should actively engage in understanding, appreciating, and expanding knowledge of the Internet's Core Values and the damage that may arise from actions that, deliberately or as unintended consequences, impinge negatively on them. The list is not long, and it starts with layered architecture, packet switching, "best effort" (i.e. design for resilience against failure), interoperability, openness, robustness (Postel), end-to-end (meaning that most functions other than packet transmission are a responsibility of the "edge", implying network neutrality), decentralization, scalability, and, as a consequence, universal reach and "permissionless innovation".
  2. Laws, norms, and treaties must all be commensurate with these values and only impinge on any of them after a deep analysis by all stakeholders, and with safety valves to avoid irreversible unexpected consequences down the road. 
  3. The Internet community, including the private sector, civil society and the technical community, should actively engage with governments to help them understand why a multistakeholder IGF is important.
  4. Use of encryption needs to continue, as without it many of the functions that keep the Internet safe will be negatively impacted.

 

IGF 2023 WS #570 Climate change and Technology implementation

Updated:
Sustainability & Environment
Calls to Action

Enhancing legal compliance and accountability in implementing environmental laws requires global efforts from governments, private sectors, and international organizations.


Sustainable digital transformation, involving transparent policies, sustainable design, and accessible technology solutions, is crucial to address climate challenges, requiring global collaboration and immediate action from all stakeholders.

Session Report

 

The intersection of sustainability, digitalization, and climate change has become a crucial topic in today's global concerns. This report synthesizes the key points discussed by the speakers from the session. These experts provided insights into how the digital age can both exacerbate and alleviate climate challenges, and their recommendations to address this complex issue. The Key Takeaways of the session were:

  • Digitalization and Its Environmental Impact: The speakers began by highlighting the growing significance of electric and autonomous mobility, emphasizing that digital technologies, especially electric vehicles (EVs) and autonomous mobility, place significant demands on energy production and computational power. This shift creates new challenges, such as the allocation of electricity from the national grid to EV users and the need for updated policies to accommodate this transition.
  • Insights into the European Union's "twin transition" strategy: combining green and digital transformations. The speaker emphasized ambitious climate goals, such as a 50% reduction in emissions by 2030 and climate neutrality by 2050. To align sustainability with digitalization, the speaker proposed enhanced transparency regarding the environmental impact of digital devices, promoting entrepreneurial thinking for sustainability, and embedding ecological sustainability into design processes.
  • The importance of affordable and accessible technology solutions: There were concerns about the lack of necessary infrastructure to implement expensive technologies in many countries, as well as legal disputes and accountability related to environmental protection laws, emphasizing the need for effective enforcement and compliance mechanisms.
  • AI in Climate Mitigation and Adaptation: In mitigation, AI can optimize electricity supply and demand by considering weather conditions and electricity usage patterns. For instance, building energy management systems using AI can significantly reduce energy consumption during peak times. AI also contributes to climate adaptation by enabling the development of early warning systems and improving climate forecasting. These technologies allow us to take early countermeasures and ensure a stable food supply.
  • Negative Environmental Impacts of Technology: While technology offers solutions for climate change, it also presents environmental challenges, such as the energy consumption associated with electronic devices, data centers, and communication networks primarily powered by fossil fuels. The entire life cycle of electronic devices, from manufacturing to disposal, contributes to energy consumption and carbon emissions. Hazardous chemicals and e-waste pose environmental risks when not managed properly, especially in developing countries.
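The AI-for-mitigation point above (shifting consumption away from peak times) can be made concrete with a toy sketch. A real building energy management system would use learned demand forecasts and price signals; the numbers and the greedy one-shot scheduler below are illustrative assumptions only.

```python
# Toy peak-shaving: move a deferrable load to the lowest-demand hour.
forecast_kw = [50, 80, 120, 95, 60]   # made-up hourly demand forecast
flexible_kw = 30                      # deferrable load, e.g. pre-cooling

# Greedy choice: schedule the flexible load where demand is lowest,
# filling the valley instead of adding to the peak.
best_hour = min(range(len(forecast_kw)), key=lambda h: forecast_kw[h])
schedule = list(forecast_kw)
schedule[best_hour] += flexible_kw
print(f"Run flexible load at hour {best_hour}; adjusted profile: {schedule}")
```

In this toy profile the peak stays at 120 kW while the lowest hour absorbs the flexible load, which is the effect the speakers described for peak-time demand management.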

The discussions by various speakers highlighted the following unified actions:

  • Ensure that digital technology contributes to sustainability goals and consider the environmental impact of digital devices.
  • Invest in research and development to create green and energy-efficient technologies, especially for regions with increasing energy demands.
  • Advocate for effective enforcement mechanisms and accountability in environmental protection laws globally.
  • Encourage responsible consumption by extending the life cycle of electronic devices, reducing e-waste generation, and adopting sustainable practices in manufacturing.
  • Encourage collaboration between governments, businesses, research institutions, and individuals to harness the full potential of technology in combating climate change.

The global discussion on the intersection of sustainability, digitalization, and climate change is multi-faceted and addresses various challenges and opportunities, and needs more action from governments, civil society and the private sector. Through these unified calls to action, the digital age can be harnessed to mitigate climate change and transition toward a more sustainable future.

 

IGF 2023 WS #209 Viewing Disinformation from a Global Governance Perspective

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

1. A more nuanced approach to disinformation is called for, which should not only focus on social networks or digital platforms but also consider the wider media landscape. Furthermore, more empirical research is needed to realistically assess the dangerousness of disinformation. We should not simply take for granted the effect of disinformation on people's thinking and (voting) behaviour.


2. There is not one global solution against disinformation that works in every instance or context. It is unlikely that governments agree on how to address disinformation. However, what is needed is a common set of principles that guides how we think of and act upon disinformation. Human rights and access to information must be front and center of such principles.

Calls to Action

1. Regional human rights courts need to be resourced in a way that they can function as mechanisms in the regulation of disinformation.


2. High quality journalism is an effective means against the impact of disinformation but faces an uncertain future. More work needs to be done to strengthen independent journalism particularly in countries with a high incidence of disinformation.

Session Report

 

Workshop Report  - IGF 2023 WS #209: Viewing Disinformation from a Global Governance Perspective

 

Workshop process

Part 1:

The workshop opened with the moderator asking participants to stand and gather along an imagined line on the floor in the room based on the extent to which they agreed or disagreed with the following statement: "Disinformation is undermining democratic political participation". The moderator then walked around the room and asked people to share their views and why they agreed/disagreed or stood somewhere in the middle. They were encouraged to shift their position if the discussion led to them rethinking their initial opinion.

Views in the room were diverse.  Almost all participants stood in the area signifying agreement with the statement.  Several offered examples from their countries and larger experiences that they believed demonstrated a strong causal link between disinformation and democratic erosion.  Two people, including one of the speakers, stood in an intermediate position and argued that a nuanced and contextualized approach is needed in examining cases so a binary choice between “is/not causing” was not adequate.  One person stood in the area signifying no impact of disinformation.

The moderator also asked the panelists to share their perspectives, and, in doing so, to respond to the question: “What is disinformation, is it a serious problem, and if so, why (or why not, if you believe it is not a serious problem)?”

Interactive discussion on this question between participants and the panelists continued for about 25 minutes. One of the panelists responded by asking what impact of disinformation we care about. He also suggested that disinformation is an umbrella term that is too broad as a basis for regulation. A person from the audience added that disinformation is not new and that every medium has been abused for purposes of propaganda. One panelist pointed out that there is a lack of empirical evidence about the impact of disinformation. Most of what we know concerns the production and dissemination of disinformation, while its effect on people's worldviews and voting behaviour is mostly taken for granted. Recent research suggests that disinformation amplifies extremist beliefs rather than instigating them. As a closing question, the moderator asked participants if any of them lived in contexts where disinformation does not have a major impact. Two people responded to say that in their countries disinformation does not appear to be causing much harm, due to the presence of a serious and legitimized mass media and other factors. A panelist concluded that high quality journalism is the best way to combat disinformation.

Part 2

The second question put to the panel and the participants was: “Can disinformation be regulated internationally? How strong and clear a baseline do existing international instruments provide for the governance of disinformation? What are the implications for rights to access information and freedom of expression?”

There was no common view on whether disinformation can be regulated internationally. Panelists doubted whether there can be one solution for all the different forms of disinformation. There was agreement on the need for a common set of principles to guide how we think of and act upon disinformation. Human rights, particularly Article 19, which protects freedom of expression and information must be front and center of such principles.

One speaker briefly flagged three examples of efforts to devise international Internet governance responses to disinformation.  These included some problematic proposals for binding treaty commitments among governments that have been floated in the UN cybersecurity and cybercrime discussions; the European Union’s Code of Practice on Disinformation; and the UN Secretary General’s proposed Code of Conduct for Information Integrity on Digital Platforms.  It was pointed out that while the first example involved efforts to devise constraints on state behavior that would never be agreed in geopolitically divided UN negotiations, the second two involve codes of practice pertaining mostly to the providers and users of digital platforms.  It was noted that while platforms certainly have responsibilities, focusing largely on them rather than on the governments that produce or support the production of a lot of disinformation is quite a limitation.  There are also open questions around the reliance on codes and guidelines varyingly interpreted and implemented at the national level.

The next question was: “Concerning new governance initiatives, what sort of consultation and decision-making process is best suited to the governance of disinformation, and can the IGF assume a role in the process?”

This provoked a very interesting discussion. Participants involved in the Christchurch Call shared how they put multistakeholder consultation at the centre of their efforts to combat online extremism. The key lessons they shared that are relevant to responding to disinformation were: (1) the multistakeholder approach has been critical to creating trust among the actors involved; (2) the need to take time and form partnerships with the diverse actors involved; (3) keeping the scope and focus really tight; and (4) not rushing into regulatory intervention.

Part 4 - Closing

In closing, panelists offered their main take-aways, including things they did and did not want to see.  There were calls for better empirical research and evidence about the effects of disinformation; for more nuanced policy responses, including avoidance of governments using “moral panics” on disinformation to justify restrictions of human rights; for multistakeholder participation in crafting governance responses; and for hefty fines on Elon Musk’s X for violations of the EU’s rules.

IGF 2023 Day 0 Event #177 Transforming technology frameworks for the planet

Updated:
Sustainability & Environment
Key Takeaways:

Cooperative models and approaches to technology have created pathways for communities and movements to address their needs, including for digital inclusion and decent work.


It is critical that technological responses to planetary crises do not adopt a single model or approach, but rather support diverse community-led and cooperative models that centre care and solidarity.

Calls to Action

Governments must ensure that the precautionary principle is upheld in digital governance norms and standards, including policy responses to the role of technology corporations in carbon offsetting, and geoengineering.


All stakeholders must work to support models of technology that centre care and solidarity.

Session Report

On 7 October, 2023, the Association for Progressive Communications (APC), Sula Batsu, Nodo TAU and May First Movement Technology convened a pre-event discussion to the global IGF, focusing on cooperative models and approaches to transforming technology frameworks for the planet.

During the discussion, speakers from Sula Batsu, Nodo TAU and May First Movement Technology shared experiences from their work, emphasizing the critical importance of participation and accountability in cooperative models and approaches to technology.

Kemly Camacho reflected on the experiences of Sula Batsu in learning how to put care at the center of their business models using approaches that are rooted in feminism, solidarity, and collective care.

Speaking from the experiences of May First Movement Technology, Jaime Villareal shared his perspective on the importance of members of May First being able to collectively own, govern and maintain autonomous infrastructure.

From Nodo TAU, Florencia Roveri described the processes and challenges of transforming their e-waste management and recycling plant into a cooperative, and the value of working with existing cooperatives. Florencia reflected on the need to extend responsibility for electronic waste, and shift perspectives on the dangers of discarded technology.

Yilmaz Akkoyun, Senior Policy Officer of the German Federal Ministry for Economic Cooperation and Development (BMZ), reflected on the discussion from the perspective of the BMZ priorities for digitalisation, emphasizing that cooperation is essential in a holistic approach to address the root causes of the complex problems facing the world today.

Becky Kazansky, a postdoctoral researcher at the University of Amsterdam, framed the discussion of cooperative approaches to technology by reflecting on recent policy developments, and the importance for all stakeholders not to get distracted by technologies and tools that on the surface seem quite promising for mitigating and adapting to climate change, but have proven to be quite harmful for communities around the world.

On-site participants in the event shared questions and reflections on how transforming technology frameworks can be supported in practice, including through amplifying the work of cooperatives like Sula Batsu, Nodo TAU and May First Movement Technology.

Speakers emphasized the need for robust and community-led accountability mechanisms, support for environmental defenders, and shifting perspectives and narratives towards more technology frameworks that prioritize collective care.

IGF 2023 WS #500 Connecting open code with policymakers to development

Updated:
Data Governance & Trust
Key Takeaways:

Interest in datasets such as GitHub's Innovation Graph (https://innovationgraph.github.com/) as a way to approach private sector data for public sector research.


Discussion of the challenge of finding skilled technical staff within government to implement open source tools, and of how to tackle the myths some may have about open source.

Calls to Action

Some topics were too broad and could be narrowed down for more in-depth discussion.


There was interest in simplifying the process of using private sector data for policymaking.

Session Report

Connecting open code with policymakers to development

 

This session built on the work of numerous different agencies, with speakers from the Government of France's Digital Affairs office, GitHub Inc., and LIRNEasia. It focused on the theme of ‘Data Governance & Trust’ and how private sector data in general, and technology platform metrics in particular, can inform research and policy on technology maturity, innovation ecosystems, digital literacy and the monitoring of progress towards the SDGs at the country level. GitHub is the world's largest platform for collaborative software development, with over 100 million users. GitHub is also used extensively for open data collaboration, hosting more than 800 million open data files totaling 142 terabytes of data. Its work highlights the potential of open data on GitHub and demonstrates how it can accelerate AI research. GitHub has analyzed the existing landscape of open data on the platform and the patterns of how users share datasets. It is one of the largest hosts of open data in the world and has experienced accelerated growth of open data assets over the past four years, ultimately contributing to the ongoing AI revolution and helping address complex societal issues. LIRNEasia is a pro-poor, pro-market think tank. Its mission is to catalyze policy change and solutions through research to improve the lives of people in Asia and the Pacific using knowledge, information and technology. Joining the panel was also Henri Verdier, the French Ambassador for Digital Affairs within the French Ministry for Europe and Foreign Affairs. Since 2018, he has led and coordinated French Digital Diplomacy. He was previously the inter-ministerial director for digital information and communications systems (DG DINUM) of France, and the director of Etalab, the French agency for public open data.

The session opened with an overview of what connecting open code with policymakers means and of previous efforts on this topic. Research has been done in this area, and the panel highlighted GitHub's work on partnering with EU policymakers to ensure the Cyber Resilience Act works for developers. In France, there have been policies on the implementation of an “open source software expertise center” set up in Etalab, which is part of the interministerial digital department DINUM. It is part of an effort to set up open source offices in governments that can be observed throughout public administrations in Europe. The expertise center will be supported by other government initiatives, such as projects within the TECH.GOUV programme aimed at accelerating the digital transformation of the public service. Other efforts, such as the French government's roadmap for developing open source to make it a vector of digital sovereignty and a guarantee of “democratic confidence”, are part of the conversation. This led to the topic of the challenges posed by unmet data needs that can be addressed by private sector data for development purposes, for which GitHub announced the Innovation Graph. The GitHub Innovation Graph dataset contains (1) public activity data (2) on GitHub (3) aggregated by economy (4) on a quarterly basis.
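Since the Innovation Graph aggregates public activity by economy and quarter, the sketch below shows the kind of roll-up a policy researcher might do with such data using pandas. The column names and values are assumptions based on the dataset's description, not its verified schema; a real analysis would load the published files instead of the inline sample.

```python
# Sketch: aggregating Innovation Graph-style quarterly metrics
# (hypothetical columns: economy, year, quarter, git_pushes).
import io
import pandas as pd

csv_text = """economy,year,quarter,git_pushes
Japan,2023,1,1200
Japan,2023,2,1350
France,2023,1,900
France,2023,2,980
"""

df = pd.read_csv(io.StringIO(csv_text))
# Yearly totals per economy, a simple trend indicator for
# country-level innovation-ecosystem monitoring.
yearly = df.groupby(["economy", "year"])["git_pushes"].sum().reset_index()
print(yearly)
```

The same pattern extends to other published metrics (e.g. developer or repository counts), supporting the SDG-monitoring use cases discussed in the session.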

 

Finally, the panel session concluded with discussion on data privacy & consent as well as efforts to promote and support open code initiatives globally. There was extensive interest by attendees on how to encourage participation and capacity building locally, and encourage more open source development within governments.

 

IGF 2023 DC-Gender Disability, Gender, and Digital Self-Determination

Updated:
Digital Divides & Inclusion
Key Takeaways:

Accessible design: not an afterthought, mobile phone-friendly, with easy interfaces. A multistakeholder approach to digital accessibility where the onus is not just on people with disabilities to fix the accessibility problems. Involving persons with disabilities in technology design and development processes - learning from experiences across genders, sexualities, class, caste locations. Integrating digital accessibility in formal education.


Thinking about how accessible and affordable technology is for people with disabilities across caste and class locations. Accessibility barriers are also defined by who builds tech and who it is built for. What an inclusive policy framework can look like: ideas of inclusiveness that aren’t homogenised but are representative of a spectrum of disabled experiences.

Calls to Action

A paradigmatic shift in how technologies are designed and developed. Instead of developing them at scale, accounting for nuanced and individual use experiences, and creating customised tech centred around layered and individualised experiences, rather than a one-size-fit-all approach.


Involving persons with disabilities in developing technologies as well as policies - recognising people with diverse disabilities as part of the digital ecosystem and digital spaces. Developing technologies and policies taking into account the diverse experiences of persons with physical and psychosocial disabilities and different layers of accessibility barriers when it comes to inhabiting and occupying digital spaces.

Session Report

 

Lived experiences

Vidhya Y:

  • Digital space is huge - when we say tech, that’s the only way as a blind person I can communicate with the world. It opens up opportunities. Growing up in a village, I didn’t have access to tech and missed out on a lot. But when I got on to online platforms, there was so much I could do. I could access the news, know what time it is, communicate via emails. Most people don’t understand braille. 
  • Taking help from someone to type messages would mean I don’t have privacy over the messages I want to send. Digital platforms have enabled many disabled people to have privacy and more autonomy over their choices.
  • Websites aren’t designed in a way all can access. There are a lot of images that aren’t labeled. 
  • For women with disabilities, the barriers are too many! It’s an irony. Digital platforms have given a lot of privacy but at the same time, you have to be so careful. When Covid happened and people were trying to get on online platforms, video calls were a must. I’d adjust my screen to point a bit downwards so people are not able to see much of me. But my sister observed and told me that the camera is actually at the top of the monitor and if you put it down, people can see you more clearly. 
  • I feel I have to take a second opinion about a lot of things in the digital space. New things are coming up all the time.
  • When you’re using a screen reader in a crowded place, you tend to misread content. Voice messages also have privacy issues: e.g. in conferences I’m unable to use voice messages.
  • Typing may be easier if you have some other disability, but it’s a huge issue for visually impaired people.

Gunela Astbrink:

  • A young woman in Africa, a wheelchair user, has speech impairments, limited use of one hand. She was determined to study IT and went to school, vocational college, and now she sometimes tutors other students. The way she uses smartphone/laptop is with her knuckles. That’s how she communicates with her digital tools.
  • When a person with a disability is online, there’s often a sense that we are all digital beings, and an assumption that we’re all on the same level and will be able to use all tools. However, this isn’t the case. Tools, websites, and platforms need to be made accessible, and it is important for tools, learning platforms, etc. to be developed along with PwDs.
  • Nothing about us without us - so that PwDs are able to be part of development and part of the digital community.

Privacy and security concerns

Vidhya Y:

  • Digital tools enable you to do a lot of things yourself that weren’t possible earlier. There are color recognisers, apps that tell you which currency you’re holding, apps where sighted people sign up as volunteers to solve captchas, etc. Captchas are designed so that machines cannot solve them, so privacy isn’t compromised, but they are a barrier for many persons with visual impairments if audio captchas are not enabled, even if you can use a computer. If I want to get help in Kannada, my local language, I won’t get help at night; but if you need help in English, there will be someone to assist you.
  • I conducted digital literacy trainings with school teachers and guided them in installing these tools. We found really good uses: you can call, and the volunteer who picks up the phone will tell you to point your camera at the captcha on the computer and guide you accordingly. People have used these technologies even to take support in matching their sarees with their bangles.
  • But you’re forced to depend on others at certain times. You’re also wary about where you’re pointing the camera - what the other person can see, what data is being collected. At the end of banking transactions, if you have to enter a captcha, you have to enter all other details beforehand, which means the person supporting you can see everything you have typed. It’s a huge privacy compromise.
  • Privacy concerns around how much of you should be visible to the other person: apart from your voice you aren’t sure what else is visible. A concern for women with disabilities.
  • For FB, IG, etc.: if I were to upload photos I’ve taken during this conference to FB, my cousin will give me the photos with captions. But I don’t know if I’m missing anything in the photos, as I’m relying on the captions. Sometimes people have told me only half my face was visible, or that a photo shouldn’t have been taken.

 

Padmini Ray Murray:

  • Every device we use is compromised by some form of surveillance, and it’s very difficult even for non-disabled people to wrap their heads around being online, using these devices, and thinking about how to maintain their privacy.
  • Most devices or apps - even those made for disabled users - might not take these considerations into account while they are being designed.
  • While there are accessibility guidelines, those are often just the baseline, and there are much more nuanced requirements of disabled users that need to be taken into account.

 

Imagining inclusive tech

Manique Gunaratne:

  • Through assistive devices and tech, we’re able to work in an equally capable manner with non-disabled people.
  • The problem is often the cost factor in accessing technologies. E.g. hearing-impaired persons cannot hear when someone rings the doorbell, but they can see a picture of the doorbell ringing on a smartphone.
  • For visually impaired people, smart glasses can identify what’s around us and provide a description of the surroundings.
  • For people with mobility difficulties, apps and technologies can help them find spaces they can access - restaurants, movie theaters, etc. If they can operate computers through hand gestures or facial expressions, they can also be employed and economically active.
  • Tech operating through brain functions.
  • Entertainment is not only for people without disabilities. Games, etc. need to be accessible. 
  • Technologies to give emotional recognition, especially for autistic people or those with intellectual disability.
  • Smart homes: PwDs can cook food of their choice, make domestic choices etc.

Judy Okite

  • For a long time, we’ve been advocating for physical accessibility at the IGF - hope it’s better this year. 
  • One of the things we did with KICTANet this year: we evaluated 46 government websites to see how accessible information is for PwDs. Unfortunately, the highest score was 80%. The feedback from the government was interesting: people felt that at 80% you’re in a good place. But it actually means 20% of your content is not accessible to PwDs.
  • From research we did: more emphasis is placed on persons who are blind when it comes to digital content, but persons with cognitive disabilities are more disadvantaged. If the content is not understandable/perceivable, then you’ve lost this person - they will not be able to interact with your content.
  • In Kenya, only about 2 years ago, cognitive disability was recognised as a disability. So we can see how far we are on inclusion. 
  • How do we ensure that PwDs are part of our change - not just because they want to, but because they have to be a part of the process.
  • Forum for Freedom in Jerusalem - in Tanzania - they know my needs on physical platforms; I have worked with them before. There was a ramp, but I still needed to be lifted up to reach the ramp. They had an accessible room but very small cubicles for washrooms, so I called the guy from the reception, who came with a wheelchair, and I requested him to push it into the washroom. He asked: how can I do that? I asked him back: how do you expect me to get into the washroom then?
  • If they had included a PwD in this process, the ramp or the washroom wouldn’t have been this bad. We need to be deliberate in having PwDs as part of the process, the change.

Nirmita Narasimhan

On policy and regulatory processes

  • Important to have policies - this ensures that people are aware there’s a need, mandated and recognised by law. The fact that there’s a legal and social requirement and responsibility to comply with standards is important in ensuring accessibility. Countries that have policies are better placed in terms of how accessibility is implemented.
  • A lot of countries have implemented the CRPD - domain-specific policies need to come as well, depending on different strategies and situations.
  • E.g. in India, when we had to lobby for the copyright law, we had to do a lot of research on what legal models were available elsewhere. We ran campaigns, meetings, signature campaigns, etc. On the other hand, when we looked at electronic accessibility, we had meetings with the electronics and IT departments, and that’s how we worked with them to develop a policy. While developing the procurement standard in India, we worked with agencies, industries, academic groups, etc. on what the standards should be and how they would be implemented. The idea is to get different stakeholders involved and responsible for this.

Concluding thoughts

Padmini Ray Murray

  • The biggest challenge we struggle with is that when we design and develop technologies, we try to do it at scale, which means more nuanced and individual use experiences become harder to provide. This requires a paradigmatic shift in how tech is built: creating customised products, more layered and nuanced, with more individualised and personalised experiences rather than one-size-fits-all.

IGF 2023 WS #457 Balancing act: advocacy with big tech in restrictive regimes

Updated:
Human Rights & Freedoms
Key Takeaways:

Increasingly authoritarian states are introducing legislation and tactics of online censorship, including internet shutdowns, particularly during politically sensitive periods. There is an urgent need for civil society and big tech to coordinate in mitigating risks to online free expression posed by sweeping legislative changes and practices empowering authoritarian states.


Lack of transparency in big tech's decision-making processes, in particular regarding authorities’ user data and takedown requests, exacerbates mistrust and hinders effective collaboration between big tech and civil society, especially under authoritarian regimes. At a minimum, platforms should develop comprehensive reports with case studies and examples of their responses in order to keep civil society groups informed and in the conversation.

Calls to Action

Civil society and big tech should initiate structured dialogues to create a unified framework for responding to legislation and practices that threaten online free expression, including internet shutdowns, at the national, regional and global levels, including through multi-stakeholder fora such as the GNI.


Big tech companies must commit to radical transparency by publishing detailed policies and data on content moderation and government requests. The companies should establish a dedicated team that engages directly with local civil society, sharing information openly to address nuanced challenges faced in specific geopolitical contexts.

Session Report

The session brought together a diverse group of stakeholders, including representatives from civil society, big tech companies, and policy experts, to discuss the pressing challenges of online censorship, data privacy, and the role of big tech and civil society in authoritarian states. The session also highlighted the importance of multi-stakeholder dialogues and offered actionable recommendations for all parties involved.

The session highlighted that any meaningful progress on ensuring access to the internet and combating online censorship in restrictive regimes can only be achieved in a broader context, in conjunction with addressing the lack of rule of law, the absence of an independent judiciary, crackdowns on civil society, and the absence of international accountability.

Key discussions:

  • Legislative challenges: Participants highlighted the rise in authoritarian states introducing legislation aimed at online censorship, often under the guise of national security or cybercrime laws. These laws not only enable content censorship but also force platforms to share user data, posing significant human rights risks and creating a chilling effect on online expression.
  • Big tech’s responsibility: There was a general consensus that big tech companies have a significant role to play in this landscape. There was also a strong sentiment that platforms need to step up their efforts in countries like Vietnam, where civil society has limited power to effect change due to authoritarian rule.
  • Lack of transparency, especially in big tech’s decision-making processes in particular regarding authorities’ user data and content takedown requests, was a recurring theme. This lack of transparency exacerbates mistrust and hinders effective collaboration between big tech and civil society. Additionally, it allows authoritarian governments to apply informal pressure on platforms.
  • Other barriers that hinder collaboration between big tech and civil society, flagged by civil society, included issues with the current mechanisms available for civil society to engage with big tech: long reaction times, little progress, no consistent follow-up, concealed results of bilateral meetings between governments and platforms, and the fact that country focal points are often in contact with the government, especially in oppressive regimes, which puts activists at risk.
  • Civil society's role: Civil society organisations emphasised their ongoing efforts to hold big tech accountable. They also highlighted the need for more structured dialogues with tech companies to address these challenges effectively.
  • Multi-stakeholder approach: Both civil society and big tech representatives agreed on the need for a multi-stakeholder approach to tackle the issues. There was a call for more coordinated efforts, including monitoring legislative changes particularly in the face of rapid changes in the online space.
  • Remote participants: Feedback from remote participants underscored the urgency of the issues discussed, particularly the need for transparency and multi-stakeholder dialogues.

Turkey and Vietnam as case studies

Turkey and Vietnam were discussed as case studies to illustrate the increasing challenges of online censorship and government repression in authoritarian states. Both countries have seen a surge in legislation aimed at controlling online content, particularly during politically sensitive times, and both grapple with the complex role of big tech in their unique geopolitical contexts. Big tech in both countries face a difficult choice: comply with local laws and risk aiding in censorship, or resist and face being blocked or penalised.

The civil society representative from Vietnam noted that Facebook has a list of Vietnamese officials who cannot be criticised on the platform, highlighting the extent of government influence. Facebook and Google have been complying with the overwhelming majority (up to 95%) of content removal requests. Activists also denounce big tech’s inaction in the face of the growing problem of state-backed online trolls.

Some concrete examples showcasing successful advocacy and collaboration between big tech and civil society groups were discussed. For example, in 2022 the government of Vietnam turned the hard requirement of storing data locally into a soft requirement after civil society activism mobilised platforms to lobby the government.

In the case of Turkey, an amendment package passed in October 2022 introduced up to three years of imprisonment for "spreading disinformation" and imposed hefty penalties on big tech companies, including up to 90% bandwidth throttling and advertising bans for non-compliance with a single content take-down order, further complicating the operating environment for big tech companies. Companies are now also required to provide user data upon request of prosecutors and courts in relation to certain crimes.

The panel highlighted that this set of laws and lack of transparency allow authoritarian governments to place big tech under significant formal and informal pressure. The threat of throttling in the event of a non-compliance with government requests creates a particularly heightened chilling effect on platform decisions and their responsibility to respect human rights.

On the eve of the general and presidential elections on 14 May 2023, YouTube, Twitter and Facebook restricted access to certain content that involved videos critical of the government and various allegations of crime and corruption against the ruling AKP. While YouTube did not issue any public statement about the censorship on their platform, both Twitter and Meta noted in their public statements that Turkish authorities had made clear to them that failure to comply with its content removal request would lead to both platforms being blocked or throttled in Turkey.

In its transparency report, Meta explained that its top priority was to secure civil society’s access to its platforms before and in the aftermath of the elections; it made the decision to comply with government requests to remove the content because, although critical of the government, the content was not directly linked to election integrity.

The panel also discussed that the GNI principles state that ICT companies should avoid, minimise or otherwise address the impact of government demands when national laws do not conform to international human rights standards. The initiative also focuses on capacity-building within civil society to engage effectively with tech companies. The representative from GNI also mentioned a tool called “Human Rights Due Diligence Across the Technology Ecosystem”, designed to help formulate constructive asks to the relevant stakeholders depending on whether they are a social media platform, a telecom company or a cloud provider.

Recommendations for big tech:

  • Develop contingency plans to protect access to platforms during sensitive periods
  • Conduct human rights due diligence before taking any compliance steps
  • Actively engage with local NGOs and invite them for consultations
  • Full disclosure of government requests and compliance actions (Twitter’s publication of the government’s communication on censorship ahead of the Turkish elections was a step in the right direction)
  • Tackle the rise of internet trolls 
  • Protect civil society groups from false mass reporting and illegitimate account suspensions 
  • Expand end-to-end encryption for users' data privacy 

Recommendations for civil society:

  • Closer coordination on how to advocate for digital rights to avoid fragmented, unimpactful calls and align strategies to create a stronger stand against the government’s actions
  • Work together with platforms to formulate a multi-pronged strategy envisaging both private sector and civil society perspectives
  • Work towards increasing public literacy on digital rights 
  • Bring international attention to these critical issues

Recommendations for states:

  • Diplomatic efforts must extend to digital rights e.g. make them a proviso in trade agreements 
  • Financial and logistic support for NGOs

 

IGF 2023 Open Forum #59 Whose Internet? Towards a Feminist Digital Future for Africa

Updated:
Data Governance & Trust
Key Takeaways:

It might have become progressively easier for women to participate meaningfully in policymaking related to digitisation (including Internet governance) over the past twenty years, but there are still barriers to overcome and to address in order to make women’s voices heard and needs met in a comprehensive and not tokenistic manner.


There is a need for diversifying and deepening conversations, perspectives, terminology, and research about feminist priorities in the Internet space in order to move beyond a common focus on challenges pertaining to online gender-based violence and related issues, to broader dimensions that shape socio-digital inequalities that continue to impact women’s experiences in Africa.

Calls to Action

Invest in developing more meaningful and diverse research and advocacy agendas pertaining to women and feminist priorities that extend beyond online gender-based violence.


Stakeholders are encouraged to continue investing in capacity-building for African women. Women who are currently actively engaging in digital policymaking and Internet governance platforms should continue to actively open up spaces for new and young women leaders who can actively participate in these conversations and discussions in the future.

Session Report

Session Summary Report

 

As part of the 2023 UN Internet Governance Forum, held in Kyoto, Japan from October 9 to October 12, the African Union Development Agency (AUDA-NEPAD) organized an open forum, Whose Internet? Towards a Feminist Digital Future for Africa, on October 12. The session invited experts from the digital and policy sectors to a panel discussion on the opportunities and challenges faced by women working in Africa’s digital economy and their role in shaping Africa’s digital transformation.

 

The session was hosted and moderated by Dr Towela Nyirenda-Jere of AUDA-NEPAD’s Economic Integration Division, supported by Alice Munyua, the Senior Director for Africa Mradi at Mozilla Corporation on-site.

 

Alice Munyua from Mozilla Corporation and Liz Orembo from Research ICT Africa (RIA) opened the discussion by sharing powerful personal testimonies illustrating their experiences as women and female leaders in Africa’s digital sphere. Their accounts highlighted the (mis)perception of female expertise and the importance of female role models in digital spaces. Building on these accounts, Bonnita Nyamwire from Pollicy and Dr. Nnenna Ifeanyi-Ajufo, Professor of Technology Law, shared and discussed research findings on the threats of online gender-based violence, the barriers faced by women in Africa’s digital economy, and learnings on good practices and policy implications for ensuring safe digital spaces and socio-digital equality for women on the continent. Dr. Tobias Thiel from GIZ concluded the discussion by emphasizing Germany’s commitment to feminist development policies and its continuous efforts to eliminate discriminatory structures for women, girls, and marginalized groups within the African digitalization and data sphere. All panelists highlighted the barriers women continue to face when working in digital sectors and emphasized the need to leverage women’s opportunities and participation to ensure an inclusive African digital transformation.

 

Participants off- and online actively engaged in the discussion and emphasized panelists’ statements by sharing their own experiences as leading female experts in the field. The interactive discussion underlined the importance of creating safe spaces and called for policymakers to ensure the inclusion of female voices in shaping policies that ensure a fair and just digital transformation in Africa. 

 

Panelists and the audience called for investing in more meaningful and diverse research and advocacy agendas pertaining to women and feminist priorities that extend beyond online gender-based violence. They also encouraged stakeholders to continue investing in capacity-building for African women. Women who are currently actively engaging in digital policymaking and Internet governance platforms should continue to open up spaces for new and young women leaders who can participate in these conversations and discussions in the future. Finally, the panel discussion called on every person to consider their own unique commitment to advocating for socio-digital equality for women on the continent and beyond, and to take tangible steps towards realizing these goals.

 

In conclusion, the session identified several key takeaways from the panel discussion and subsequent round of contributions from the audience: While it might have become progressively easier for women to participate meaningfully in policymaking related to digitalization (including Internet governance) over the past twenty years, there are still many barriers to overcome and to address in order to make women’s voices heard and needs met in a comprehensive and not tokenistic manner. In addition, the discussion identified a need for diversifying and deepening conversations, perspectives, terminology, and research about feminist priorities in the Internet space in order to move beyond a common focus on challenges pertaining to online gender-based violence and related issues, to broader dimensions that shape socio-digital inequalities that continue to impact women’s experiences in Africa.

 

 

 

IGF 2023 Lightning Talk #97 Combating information pollution with digital public goods

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

Highlight lesser-known tools for addressing misinformation and disinformation.


There was interest in what digital public goods are and how they could be implemented.

Calls to Action

Provide more hands-on opportunities to interact with the tools; a demo could be effective.


A broader understanding of what digital public goods are is needed to ensure we can support the prevention of disinformation and misinformation.

Session Report

Combating information pollution with digital public goods report

This lightning talk opened with an overview of the Digital Public Goods Alliance (DPGA), a multi-stakeholder initiative to accelerate attainment of the Sustainable Development Goals by facilitating the discovery, development, use of, and investment in digital public goods. The DPGA “defines digital public goods as open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals (SDGs).” An example of a DPG is District Health Information System 2 (DHIS2), the world's largest health management information system platform. This was followed by an overview of GitHub, a complete software developer platform to build, scale, and deliver secure software, with 100+ million software developers and used by 4+ million organizations, from governments to international development organizations. Open source software like digital public goods is built on GitHub.

 

This session focused on how, while digital technologies are essential parts of our lives and provide solutions to some of the world’s greatest challenges, we must urgently recognize and help solve their downsides. This is particularly true of online information pollution, which has grown to be a cause of distrust and obfuscation. During the session the speakers provided an overview of how policies are needed to combat deep fakes, analyze online news media, verify crowdsourced data, monitor technology companies’ legal terms, improve access to government policies and, lastly, gain insights into the influence of digital technologies on societal conflict.

Mis- and disinformation are typically addressed through reactive measures against specific attacks or proactive prevention efforts. While these approaches are necessary and valuable, they are inherently endless and fail to address the root of the problem. Exploiting vulnerabilities for political gains will always attract malign actors, outnumbering those interested in prevention.

The issue of disinformation arises from vulnerabilities in the tools that mediate the information environment. These vulnerabilities persist because fixing them conflicts with the economic incentives of large platforms. Therefore, it is crucial to increase the costs associated with leaving these vulnerabilities open and provide incentives for their resolution. Alternatively, obligations should be imposed on actors to compel them to address these vulnerabilities.

The session provided two examples. The first, Open Terms Archive, publicly records every version of the terms of digital services to enable democratic oversight. It addresses a critical gap in the ability of activists, journalists, researchers, lawmakers and regulators to analyse and influence the rules of online services. Open Terms Archive enables safety by equipping actors who are already engaged in addressing these vulnerabilities; it amplifies their capabilities and facilitates connections for mutual reinforcement, ultimately enabling more effective action.
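The core mechanism described above, recording successive versions of a service's terms and surfacing what changed between them, can be sketched with Python's standard difflib. This is a toy illustration, not Open Terms Archive's actual implementation, and the sample clauses are invented:

```python
import difflib

# Two recorded snapshots of a (fictional) service's terms, one clause per line.
old_terms = [
    "We may share your data with partners.",
    "You can delete your account at any time.",
]
new_terms = [
    "We may share your data with partners and advertisers.",
    "You can delete your account at any time.",
]

# Produce a unified diff showing exactly which clauses changed between versions.
diff = list(difflib.unified_diff(
    old_terms, new_terms, fromfile="terms-v1", tofile="terms-v2", lineterm=""))
for line in diff:
    print(line)
```

Running this prints the removed clause prefixed with `-` and the amended clause prefixed with `+`, which is the kind of clause-level change oversight the archive makes possible at scale.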

The second example, Querido Diario, developed by Open Knowledge Brazil, addresses the challenge of accessing and analysing official decision-making acts throughout Brazil’s cities. With no centralised platform available, the only reliable source of information is the closed and unstructured PDF files of the official gazettes where they are published. To tackle this information gap, Querido Diario’s robots help collect, process, and openly share these acts. Launched over a year ago, it has grown into a comprehensive repository with more than 180,000 files, continuously updated with daily collections. Querido Diario helps combat information pollution by providing a transparent and reliable source of data that can be used to fact-check and counter false narratives, enabling informed analysis and promoting accountability. The primary users are researchers, journalists, scientists, and public policy makers, and it benefits various sectors including environmental researchers and journalists, education NGOs, and scientists working with public data. Today, Querido Diario’s coverage reaches 67 cities, home to 47 million people. The next steps involve scaling up to include all 26 Brazilian states and at least 250 cities. The project aspires to incorporate Natural Language Processing models and integrate its data with other public datasets, helping users contextualise information even more.
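The fact-checking use described above, searching text collected from official gazettes by keyword, can be sketched as follows. The data, city names, and function names are invented for illustration; this is not Querido Diario's actual code:

```python
from dataclasses import dataclass

@dataclass
class GazetteAct:
    city: str   # municipality that published the act
    date: str   # ISO publication date
    text: str   # plain text extracted from the gazette PDF

def search_acts(acts, keyword):
    """Return (city, date) pairs of acts mentioning the keyword, case-insensitively."""
    needle = keyword.lower()
    return [(a.city, a.date) for a in acts if needle in a.text.lower()]

# A tiny in-memory stand-in for the collected repository.
acts = [
    GazetteAct("Sao Paulo", "2023-10-01",
               "Decree on environmental licensing of new projects"),
    GazetteAct("Recife", "2023-10-02",
               "Appointment of municipal education council members"),
]
print(search_acts(acts, "environmental"))  # → [('Sao Paulo', '2023-10-01')]
```

The real pipeline adds the hard parts this sketch omits: per-city scrapers, PDF text extraction, and a searchable public API over hundreds of thousands of files.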

Finally, we closed with a discussion on a gradient approach to AI openness. The DPGA developed an exploratory framework to assess use cases of AI where full openness is not possible or not desirable. The audience was interested in the use of AI in preventing misinformation and disinformation, which we aim to explore in future sessions.
 

IGF 2023 Day 0 Event #182 Digital Public Goods and the Challenges with Discoverability

Updated:
Digital Divides & Inclusion
Key Takeaways:

Take away 1: Attendees asked thoughtful questions on how to ensure digital public goods will not be misused by bad actors. This challenge would make a great next session on exploring ways to encourage proper use of open source tools.


Take away 2: There was extensive conversation on capacity building, not just on hard technical skills but also on the soft policies that impact the implementation of digital public goods within a region.

Calls to Action

There is extensive interest in exploring how digital public goods are used and how to prevent actors from using the tools to create harm.


Explore ways to simplify the implementation process and to enable software developers to contribute.

Session Report

Digital Public Goods and the Challenges with Discoverability report

Summary of session

This session focused on the challenges of discoverability of digital public goods (DPGs) for governments and civil society to understand and implement. The talk opened with an overview of the Digital Public Goods Alliance (DPGA), a multi-stakeholder initiative to accelerate attainment of the Sustainable Development Goals by facilitating the discovery, development, use of, and investment in digital public goods. The DPGA “defines digital public goods as open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals (SDGs).” An example of a DPG is District Health Information System 2 (DHIS2), the world's largest health management information system platform. This was followed by an overview of GitHub, a complete software developer platform to build, scale, and deliver secure software, with 100+ million software developers and used by 4+ million organizations, from governments to international development organizations. Open source software like digital public goods is built on GitHub.

One key element of this session was to provide more background on what open source in the social sector means. Open source refers to software whose source code is freely available to the public, allowing anyone to view, use, modify, and distribute it. This means that the software can be improved and customized by anyone who has the necessary skills, and that it can be used for a variety of purposes without restriction. Open source software is often developed collaboratively by a community, and is typically distributed under a license that ensures that it remains open and free to use. Open source in the social sector is defined as software built with relevance to the Sustainable Development Goals that does no harm by design and is driven by a desire to increase transparency, accountability, and participation, and to empower individuals and organizations to work together to address social and environmental challenges.

This led to a discussion of policies and tools that can help improve discoverability: public and private sector partnerships; collaborative platforms; metadata standards; long-term sustainability plans; feedback and improvement loops; and interoperability standards.

Finally, the session concluded with five simple rules for improving discovery:

  • Rule 1: Decide what level of access you can provide for partners
  • Rule 2: Deposit your DPGs in multiple trusted repositories for access, preservation, and reuse. 
  • Rule 3: Create thoughtful and rich metadata - consider the FAIR Data Principles
  • Rule 4: Localize the tools for cross-domain integration 
  • Rule 5: Ensure accessibility and inclusion for ease of access
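Rule 3's advice on rich metadata can be made concrete with a machine-readable description file checked into the repository. The snippet below is a minimal, illustrative sketch of a CodeMeta (`codemeta.json`) file for a hypothetical DPG project — the project name, repository URL, and keywords are invented for the example, and the exact fields a given registry expects may vary.

```json
{
  "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
  "@type": "SoftwareSourceCode",
  "name": "Example Health Registry",
  "description": "Open source registry for community health data, aligned with SDG 3.",
  "license": "https://spdx.org/licenses/MIT",
  "codeRepository": "https://github.com/example-org/example-health-registry",
  "programmingLanguage": "Python",
  "keywords": ["digital public good", "SDG 3", "health information system"]
}
```

Standards-based metadata like this supports the FAIR principles (findable, accessible, interoperable, reusable) and makes a DPG easier to index across the multiple trusted repositories that Rule 2 recommends.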

In conclusion, this was a great session that encouraged roundtable discussion. Attendees raised questions on ensuring the security of open source software, on preventing bad actors from using open source digital public good tools, and on the challenges of capacity building. As a result of this session, GitHub has launched a microsite to encourage software developers to contribute to DPGs: https://forgoodfirstissue.dev/.

 

IGF 2023 WS #494 Strengthening Worker Autonomy in the Modern Workplace

Updated:
Global Digital Governance & Cooperation
Key Takeaways:
  • Exploitation and Inequality: Emerging technologies like AI intensify labor exploitation and escalate global inequality. The business models of companies using these tools can compromise social protection rights, as they often fail to offer decent working conditions. Vulnerable groups, including refugees, are increasingly exploited to refine AI datasets.
  • Policy and Regulation Concerns: Urgent policy reform is needed to ensure adequate transparency between employers and workers regarding technology use in workplaces. Strong workplace privacy regulations are essential to prevent unwarranted data collection, protect personal information, and to guard against the deployment of unsound analytical tools.
Calls to Action
  • Establish and Enforce Robust Regulatory Frameworks for Worker Protection and Privacy: Develop and enforce detailed, internationally-harmonized workplace data protection and privacy regulation to protect workers, including low-paid workers, vulnerable workers, and hidden labor in the gig economy.
  • Foster Industry Accountability Initiatives: Establish frameworks and bodies that scrutinize and shine a light on corporate actions, ensuring that employers across all sectors adhere to high ethical, socio-economic, and environmental standards.
Session Report

The speakers presented insights into the gig economy, the future of work, the impact of Artificial Intelligence on labor rights, and corporate accountability in the context of achieving Sustainable Development Goal 8 (Decent Work and Economic Growth).

Gig Economy:

  • Globally, platform-mediated gig workers face challenges including: low pay, long hours, lack of job security, and the absence of social protections. Case studies were presented from India and Paraguay. 
  • Gig workers face exacerbated problems due to the lack of data protection laws and regulations which apply in the workplace, and a lack of meaningful anti-discrimination regulations safeguarding independent contractors and freelance workers.

Labor Rights and Corporate Accountability:

  • While there are supportive measures for labor rights in some jurisdictions, implementation issues and challenges persist. The Covid-19 pandemic revealed the inadequacy of support for gig workers, highlighting the need for a better safety net.
  • Data protection laws and regulations are crucial to preventing the potential misuse of data collected in the workplace. At the same time, there is a need for worker autonomy in the digital age, especially in surveillance-heavy environments.
  • The concentration of power in the data brokerage industry, market dynamics, and acquisitions raise concerns about transparency, competition, and data privacy.
  • There were calls for greater accountability in venture capital and early-stage interventions in private markets. There is a need for more transparency in companies' developmental stages and more consultation with impacted workers.

Venture Capital and Economic Growth:

  • The venture capital ecosystem remains insular, favoring established networks. Only 7% of female founders globally receive backing from VC firms, pointing to a significant gender disparity in entrepreneurial support, and many problematic workplace surveillance technologies are being developed by men.
  • Platform cooperativism is a potential solution. Governments should promote the creation of fairer work platforms by the workers themselves.

Global Initiatives:

  • UN instruments like the Global Digital Compact, and the WSIS+20 Review, are positioned as tools that could aid in achieving the objectives of SDG 8.
IGF 2023 DC-SIG Involving Schools of Internet Governance in achieving SDGs

Updated:
Key Takeaways:

- Issues involving SDGs are considered in many schools. This meeting heard reports on SDG 5 on gender, SDG 7 on access to energy, and SDG 16 on peace and justice. In follow-up discussions, targets 8.6 (economic aspects) and 9.5 (access) were also discussed.


- SIGs are becoming reference resources on IG in many countries on topics such as cybersecurity and regulatory frameworks. These can serve to bring clarity to the understanding of IG in a country among citizens and government officials.

Calls to Action

- While SIGs discuss topics concerning SDGs, they do not always do so explicitly. While each of the schools decides on its own curricula and modalities, doing so explicitly could be considered in future courses.


- While SIGs can have well-established curricula, they can also adapt the content to special target groups to produce flexible and adaptable material. SIGs can share their resources on the DC SIGs wiki and website provided by the Dynamic Coalition to help others and to promote their own efforts and achievements.

Session Report

 

Session Presentation

Schools on Internet Governance (SIGs) are an important initiative that help with creating and strengthening capacity in Internet Governance. Regional SIGs have been operating in all the regions of the world, while national SIGs exist in many, but not all, countries. The DC-SIG provides a common platform where SIGs can discuss matters of their interest, share information, share innovations and discuss adaptive mechanisms as they evolve. While the global pandemic did adversely impact many SIGs, most are now back in a fully functional manner.

This session took stock of the current status of SIGs, supported community members who want to establish SIGs in countries that do not have them, and examined how SIGs can improve themselves by adopting new programmes and courses.

As part of each yearly meeting, the DC-SIG takes on a topic of specific interest for discussion and further development of plans. This year, the DC looked at how the DC SIG can contribute to developing curricula in support of SDGs as the focus.

1- A slideshow of existing SIGs was shown, along with a presentation of the recently formed Japan SIG. New schools were given a chance to describe themselves.

2- Schools on Internet Governance (SIGs) and their impact on achieving the Sustainable Development Goals (SDGs 5, 7 and 16)

SDG 5 on gender equality. 

  • Ms Sandra Hoferichter (EuroSSIG)

Schools on Internet Governance (SIGs) contribute to this SDG because they are inclusive and cover a wide range of topics. SIGs are a good effort to fill the gender gap in education and to help promote women into leadership positions. For many years, EuroSSIG application numbers have shown that more women are interested in these topics.

  • Anriette Esterhuysen: AfriSIG addresses SDG 5 through developing women as leaders in IG and by including gender specific topics in the programme. Examples would be sessions on online gender-based violence and on the gender digital divide and how to respond.
     
  • Ashrafur Rahman Piaus (bdSIG)
    The Bangladesh SIG works with rural communities on SDGs 5 and 9 by including women in its school and helping them achieve their goals, reaching transgender people and many other marginalized communities as well.

SDG 7 on access to energy 

  • Ms Olga Cavalli (South SIG and Argentina SIG)

Access to energy is closely linked to climate change, so this SIG held several panels discussing the impact of energy consumption. On the other aspect of energy, it is important to note that there is a gap between areas that have access to energy and those that do not. The SIG discussed this issue with various experts and panelists.

Other SDGs

  • Mr Alexander Isavnin (Russia SIG) spoke on the SDG on peace and justice. SIGs can help build new standards, reinforce multistakeholder processes like those in ICANN, and promote inclusion and effectiveness.
  • On SDG target 8.6, the Pakistan SIG conducts a session on digital entrepreneurship, inspiring youth to capitalize on economic opportunities on the internet. For SDG target 9.5 (c), access to the internet, it organizes sessions on Access and Inclusion, where government and the private sector brief the audience about their plans for expansion of ICT services and the state of infrastructure in the city/area where the school is being held (pkSIG is held in a different city every year).
  • Some SIGs sometimes discuss topics related to SDGs, but not all the time, so it would be worth exploring after this session how SIGs are promoted and present in Japan, for example.
  • Abdeldjalil Bachar Bong's point for the Chad SIG was that every SIG, in its own specific way, already contributes to SDG topics.

Roundtable  Discussion on the evolution of SIGs

  • SIGs are becoming references on IG in many countries on different topics, such as cybersecurity and regulation, and can bring clarity to the understanding of IG.
  • SIGs can be rooted in a solid curriculum and then adapt the content to special target groups to produce flexible and adaptable material.
  • The SIGs  can share their resources on the SIGs wiki and website to help others and promote their own achievements. This may align with the concept of open education. 
  • There are different types of SIGs that cater to different groups of people.

 

IGF 2023 Lightning Talk #116 Canada’s Approach to Regulating Online Safety

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

In Canada, there is significant interest in regulating serious harms that result from online interactions, with many recognizing the need for a systems approach, as opposed to one focused only on individual-level content.


The directions of legislative design seen across governments are, in many cases, reflective of the legislative context (existing legislation, constitutional provisions) that creates legislative constraints, rather than of differences in fundamental opinions.

Calls to Action

For regulators creating regulations on online harm to be clear about legislative intent, and to focus on fulfilling that specific intent as opposed to other, potentially unachievable goals.


For conversations to be clearly centered on harm as experienced by people living within that jurisdiction.

Session Report

In the session on online harms in Canada, we started by discussing the Canadian definitions surrounding online harm, reminding participants that the talk centered on Canadian usage of terms, which may differ from how the same terms are used in other jurisdictions, and inviting participants to stop the presenter and ask questions if any points were unclear. We then defined online harms to mean financial, physical, psychological, and emotional harm that results from interactions that take place through the internet, regardless of whether they respect local, regional, or national borders. We then listed a number of examples of online harm, making clear that some instances (such as child sexual exploitation material) are illegal under existing legal frameworks, while others (such as misinformation) are harmful but legal.

We then moved to a discussion of the results of a survey of Canadians’ experience of online harm, which demonstrated that a significant number of Canadians are exposed to harmful content frequently. In particular, we noted that while many Canadians saw individuals as largely responsible for generating harmful content, they did not see individuals as primarily responsible for reducing the amount of harmful content online, instead seeing a larger role for online platforms and the government. This finding was discussed in detail, particularly as it informs public policy conversations on the topic.

We then moved to a discussion of the legislative process currently underway in Canada to tackle online harms, situating the potential legislation within a slew of legislative activity over the past three years concerning internet governance and the digital economy broadly, and stressing that efforts to tackle online harms in Canada cannot be understood in isolation. From that point, a deeper exploration of the regulatory tensions surrounding online harms legislation followed, focusing in particular on how it interacts with public sentiment in Canada, as well as on the law’s potential impacts on the preferred economic system and on other existing legislation (including constitutional law, in Canada in the form of the Charter of Rights and Freedoms) in directing the potential course the legislation might take. The formal presentation finished by situating the Canadian conversation in a global context, stressing that while there is no unified approach to tackling online harm, many of the deviations seen globally likely do not reflect irreconcilable fundamental differences in definitions of online harm, but are much more likely to reflect the legislative constraints different countries face and the regulatory actions (both legal and political) available to them.

After the talk, a number of questions were asked by participants. One concerned how legislative action can incorporate the idea of “benign exposure” to less harmful content, as training to inoculate users against exposure to more harmful content. The presenter discussed at length current thinking on that topic in policy approaches to tackling mis- and disinformation, including approaches to increase digital media literacy among different groups.

IGF 2023 Open Forum #52 RITEC: Prioritizing Child Well-Being in Digital Design

Updated:
Human Rights & Freedoms
Key Takeaways:
In addition to the clear and urgent need to identify and address online risks and harms for children associated with the digital environment, sustained multisectoral efforts that prioritize child participation, including research, are required to adequately understand and leverage the positive value that digital experiences can deliver for children’s well-being in a digital age.
Calls to Action
1. To designers of digital play: consider the Responsible Innovation in Technology for Children (RITEC) project outputs, in particular the children’s well-being framework, in your decision-making processes. 2. To governments: consider how to create an enabling environment for businesses to prioritize children’s well-being in digital design.
Session Report

RITEC: Prioritizing Child Well-Being in Digital Design

Open Forum #52 - Session Summary

Speakers

  • Adam Ingle, The LEGO Group
  • Aditi Singh, Young Advocate, Dream Esports India and Esports Monk
  • Professor Amanda Third, Western Sydney University
  • Sabrina Vorbau, EUN
  • Shuli Gilutz, PhD, UNICEF 

Purpose: The session introduced the concept of well-being for children in the digital age before going on to examine its importance when we consider the centrality of digital technologies in children’s lives and the rapidly growing concerns around online harms. 

Part 1: Setting the scene on child safety and well-being in a digital age

This part commenced with Aditi Singh, Young Advocate, describing her own experiences with online gaming and how, from a young age, games pushed her critical thinking and collaboration skills and enabled her to grow intellectually and socially. However, Aditi also described the harms associated with gaming, particularly those related to being a young woman online. These include how she, and other children, often don’t understand the risks of sharing personal information, and the prevalence of gender-based harassment.

Aditi then discussed how forums like the UNICEF Game Changers Coalition have helped her and others reimagine the role of women in online gaming and drive the design of games to make them more age-appropriate spaces. Aditi called for governments and other bodies to incentivize private firms to build experiences with children at their core, and noted that platforms themselves need to realize that their choices can unlock the benefits of games while minimizing the risks.

Sabrina Vorbau from European Schoolnet followed Aditi, discussing the EU’s revised Better Internet for Kids (BIK) strategy and how the revision process ensured the new BIK onboarded diverse views, including those of children, which were instrumental in shaping the strategy. Ultimately this ensured the strategy adopted a more modern approach to promoting the protection, empowerment and participation of children online. Sabrina highlighted how young voices also helped inform the Safer Internet Forum conference, informing important matters like topics, speakers and themes. Sabrina reinforced the need to educate with young people, not simply to them or for them.

Shuli Gilutz discussed how design philosophies within industry are critical to embedding digital well-being into online play. Shuli unpacked the concept of ‘well-being’, noting that it is about the subjective experiences of children and includes not just safety but also outcomes like empowerment and creativity. Shuli described how RITEC is working with designers to develop a guide for business, giving them the tools to create positive digital experiences that are safe and private but also advance well-being.

Part 2: the RITEC project

Adam Ingle provided an industry perspective on why designing for children’s experiences is critical, discussing how the LEGO Group is embedding the concept in its own online play products. Adam highlighted that the RITEC project is about developing an empirical basis for understanding what digital well-being looks like while also creating the tools to proliferate responsible design throughout industry. Adam discussed the LEGO Group’s internal processes that helped the company implement best practice; these include incorporating the views of child rights experts in product development processes, adopting clear digital design principles built around well-being, and ensuring business metrics and KPIs also measure success against well-being. Adam concluded by noting that it’s not just about equipping businesses with design tools, but that cultural change is also needed to lift industry standards.

Amanda Third introduced the RITEC project itself, based on engagement with almost 400 children (predominantly from the global south) and driven by their own views on digital play. Crucially, the project revealed that digital play brings joy and satisfaction and that children experience many benefits – particularly through fostering social connection and promoting creativity. They are however conscious of the dangers and expect governments and firms to protect them.

Amanda noted how the perspectives of children informed the design of a well-being framework with eight components (competence; emotional regulation; empowerment; social connection; creativity; safety and security; diversity, equity and inclusion; and self-actualization). The project has also developed metrics to determine whether digital play experiences are meeting the above eight components of well-being, so it’s a practical, measurable framework and not just an abstract one. Amanda concluded by reinforcing the benefits of online play for children but also the criticality of involving children in research.

Shuli noted the next steps for the RITEC project, which includes the guide for business that summarizes the research and makes the findings actionable. Project managers are building the guidance with feedback from designers to ensure the tools speak design language and can be adopted with relative ease.

Panelists were asked to each note a critical action for embedding responsible digital design. Sabrina highlighted the importance of youth participation and including young voices in policy design. Adam emphasized the need for policymakers to adopt a holistic approach to online regulation, that balanced both harms and benefits and incentivizes firms to design for well-being. Shuli stated that industry needs to pivot towards more holistic design philosophies, including empowerment rather than just engagement. Amanda cautioned that we should also recognize the limits of design and how it’s one part of a wider solution that includes cultural change and education.

QUESTIONS AND DISCUSSION:

How do we reach a truly representative group of young people? Amanda noted that it’s important to reach out to partner organizations who have expertise in engaging vulnerable and diverse perspectives, but also that there isn’t a perfect research method for participation, and we all need to move forward consciously.

How do we design for the evolving capacities of children? It was noted that regulatory frameworks require firms to consider the different capacities of children and Adam discussed how clever technical design can ensure that, for example, social settings are more limited for younger ages but expand for older ages who can engage with strangers in a more mature way (and with less risk).

What is the role of parents and educators, and how does the framework include them? Shuli noted that the main recommendations for parents are (1) play with your kids - once you play with your kids you understand the benefits and risks, and that helps the discussion happen, and (2) also talk to children about what you, as a parent, are worried about. Sabrina noted that conversations between parents and children about online safety are critical.

 

IGF 2023 Town Hall #39 Elections and the Internet: free, fair and open?

Updated:
Human Rights & Freedoms
Key Takeaways:

Importance of a multistakeholder approach, but recognition of the lack of government/private sector engagement, in the Africa region in particular, which leads to isolation and an inability to effectively moderate content. This can lead to the common use of Internet shutdowns as a means of addressing content issues such as hate speech, which is not the solution.


Whilst some governments may lack the tools, knowledge, digital literacy and access to the wider multi-stakeholder community to address issues of concern through effective content moderation, shutting down the internet does not address the root causes and only creates more problems, including undermining rights and the prosperity of a society. Internet shutdowns are also widely used as a deliberate tool for controlling the free flow of information

Calls to Action

Call on governments to cease use of the blunt tool of internet shutdowns which impedes the free flow of information during electoral periods, and threatens human rights and the democratic process as a whole.


Reinforce the importance of planning ahead through narrative and risk forecasting to pre-empt and mitigate shutdowns, with a view to developing knowledge and literacy around other means for addressing the issues governments state they are addressing by shutting down the internet (e.g. hate speech). Addressing one problem by creating another is not the answer, and the multistakeholder community must continue to challenge the narrative.

Session Report

This session was facilitated by the FOC Task Force on Internet Shutdowns (TFIS), co-Chaired by the U.K. and Freedom Online Coalition-Advisory Network members Access Now and the Global Network Initiative. The session examined causes, trends and impacts of Internet shutdowns and disruptions, and explored how the multistakeholder community can work together to anticipate, prepare for, and where possible prevent Internet shutdowns before they occur, with a focus on identifying practical steps that can be taken ahead of ‘high risk’ elections in 2024.

Kanbar Hossein-Bor, Deputy Director of Democratic Governance & Media Freedom at the U.K. Foreign, Commonwealth & Development Office, provided opening remarks, noting that Internet shutdowns pose a significant threat to the free flow of information and are a fundamental impediment to the ability to exercise human rights, underscoring the importance of a multistakeholder approach to addressing these challenges. Mr. Hossein-Bor highlighted the Freedom Online Coalition (FOC) Joint Statement on Internet Shutdowns and Elections, launched during the session, which calls on States to refrain from shutting down the Internet and digital communications platforms amid electoral periods, as aligned with States’ international human rights obligations and commitments.

Speakers underlined the critical role access to the Internet and digital media platforms play in promoting free, transparent, and fair electoral processes. Panellists spoke on the negative reality of Internet shutdowns and their impact, noting their destructive consequences for economic prosperity and access to health care, as well as their role in obscuring human rights violations. Panellists highlighted how Internet disruptions and preventing access to platforms during election periods are often justified by governments as a means to ensure national security and to mitigate disinformation, even though shutdowns and disruptions have proven to further exacerbate security risks, especially among already vulnerable groups. Speakers also highlighted big tech companies’ lack of engagement and product oversight in local contexts (e.g. hate speech moderation in local languages). Additionally, when examining government use of Internet shutdowns, panellists flagged governments’ lack of knowledge and experience regarding alternative tools to address security concerns amid elections in contexts of violence. In these contexts, full and partial shutdowns were used as a form of resistance and expression of sovereignty by governments in response to companies and systems they felt powerless against and did not know how to engage with. In addition to underlining the need for a multistakeholder approach and calling on telecommunications and digital media companies to ensure people have access to a secure, open, free, and inclusive Internet throughout electoral processes, panellists also recognised the role of disinformation as a risk cited by governments to justify Internet shutdowns and disruptions during elections. In order to address this challenge, speakers noted the following recommendations:

● Narrative forecasting: Anticipating the types of narratives that may be deployed at different points in the electoral process, and preparing a response;

● Overcoming selection bias: Finding ways to bring fact-based information into the right spaces;

● Preemptive responses to disinformation: Drafting preemptive responses to disinformation in order to reduce response time and minimise the spread of disinformation.

● Collaboration between civil society and Big Tech: Encouraging collaboration between local civil society organisations and big tech companies to address online content moderation in local contexts.

During the Q&A session, audience members enquired about government and civil society strategies to address and prevent Internet shutdowns, emphasising additional considerations to take into account when seeking to promote fair and open elections.

The U.K. closed the session by reiterating the importance of 2024 as a key election year, and also highlighted the publication of the Oxford Statement on the Universal Access to Information and Digital connectivity, developed following the Global Conference for the International Day for Universal Access to Information 2023.

IGF 2023 Open Forum #163 Technology and Human Rights Due Diligence at the UN

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

- Access to effective remedy is crucial, noting the impact of technologies on marginalized and vulnerable populations. There is a need to build in elements of independent assessment for oversight and accountability reasons. Transparency on the process and practice and continued engagement with civil society are key. Effective enforcement is also a key element to the success of this guidance.

Calls to Action

- Emphasize the need to take seriously the questions raised in the discussion on transparency, independent assessments, and enforcement for the HRDD Policy Working Group to take on as they implement next stages on the policy guidance.

Session Report

The UN is developing a guidance note on human rights due diligence for its use of digital technology. This process has included consultations with internal and external partners, helping mainstream human rights due diligence and align approaches across the UN system. The guidance, undergoing multiple drafts, aims to be inclusive and address different impacts, especially on gender and intersectionality. It will be considered for implementation across the UN system following feedback and endorsement.

UNHCR is actively applying human rights due diligence in its digital technology use, focusing on complex settings. They have a range of policies and are working on a formal framework to align with international human rights and ethical standards. They have been involved in developing the guidance through case studies and strategic partnerships, and the guidance has evolved to become more implementable. UNHCR plans to incorporate the guidance into their digital strategies.

The World Bank commends the principles-based approach but emphasizes the need to consider different levels of development and maturity among member states, stressing the importance of adapting the guidance to each country's specific context while maintaining universal principles.

Access Now highlights that access to effective remedy is crucial, noting the impact of technologies on marginalized and vulnerable populations. There is a need to build in elements of independent assessment for oversight and accountability reasons. Transparency on the process and practice and continued engagement with civil society are key. Effective enforcement is also a key element to the success of this guidance, as well as transparency in private-public partnerships.

The session concluded with OHCHR emphasizing the need to take seriously the questions raised in the discussion on transparency, independent assessments, and enforcement for the HRDD Policy Working Group to take on board as they implement next stages on the policy guidance.

IGF 2023 DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News

Updated:
Human Rights & Freedoms
Key Takeaways:

Data, access, and transparency are fundamental to the sustainability of news and internet governance. However, data access discrepancies around the world, especially in Global South regions, limit the capacity of research, analysis and reporting about the impact that digital platforms have on news and journalism sustainability, as well as on society as a whole.


The global reach of supranational policies might require regional/local parties to comply with rules originated elsewhere. The session acknowledged the interconnection of local issues with global ramifications and vice versa, and stressed the importance of ensuring representation and access to digital policy discussions at all levels for the communities and sectors that will be most affected by these initiatives.

Calls to Action

To Intergovernmental Organizations: Allocate resources and initiatives to enhance participation and access for underrepresented communities, ensuring their voices are heard in global internet policy discussions, including on data privacy, news sustainability, and generative AI, and that their perspectives are taken into account when drafting resolutions, policies, and guidelines.


To Private Sector: Ensure that internal policies created in compliance with international or supranational frameworks take into account the diversity of local contexts. Engage with local stakeholders, media organizations, journalists, and their communities to address the local implications of global digital policy frameworks.

Session Report

Introduction

The DC-Sustainability coordinators, Daniel O’Maley, Waqas Naeem and Courtney Radsch opened the session by underscoring the significance of balancing technology innovation governance with the sustainability of journalism and news media. The key highlight for the year was the dynamic coalition's focus on data transparency and access as vital elements for media sustainability. The coalition's annual report was launched during the session, a collaborative endeavor that offers a snapshot of the critical issues facing the news media industry. The report spotlighted topics like the power imbalances between media and tech giants, the dual nature of government regulations impacting media, and the challenges and opportunities presented by technological innovations, such as generative AI.

In the first section of the session, authors of the report presented their chapters: Prue Clarke (New Narratives - Australia), Mike Harris (Exonym - United Kingdom), Juliana Harsianti (Independent Journalist and Researcher - Indonesia) and Juliet Nanfuka (CIPESA - Uganda). Following the presentations of each chapter, members of the DC-Sustainability took the floor to present their work: Michael Markovitz (GIBS Media Leadership Think Tank - South Africa), Ana Cristina Ruelas (UNESCO - Mexico), Julius Endert (DW Academy - Germany), Michael Bak (Forum on Information and Democracy - France), Sabhanaz Rashid Diya (Tech Global Institute - Bangladesh) and Ramiro Alvarez (CELE - Argentina). The session concluded with an open discussion with the audience.

Global influence of EU/US policies

A key topic was the overarching effect of policies and tech companies from powerhouses like the EU and the US on the global digital space. Despite being localized, their ripple effect transcends borders, impacting organizations working in so-called “Global South” countries. These organizations often find themselves grappling with the daunting task of compliance, struggling to decipher a logic they didn't create and can't control. Notably, these policies (both from companies and governments) play a pivotal role in shaping how journalists and media outlets operate, offering them limited avenues to challenge the tech giants. Courtney Radsch elaborated on these techno-legal systems, emphasizing the major influence of US and EU-based tech platforms on global media. These platforms determine how content rules and policies, such as the DMCA and GDPR, are implemented. Tying into the conversation on how centralized internet governance has impacted media visibility and sustainability, Mike Harris spoke about the importance of decentralized rulebook systems to empower news media, especially in the face of challenges from large online platforms. Juliana Harsianti shed light on the evolution of digital technology in Indonesia, emphasizing the implications of regulations intended for e-commerce now being used to restrict journalistic freedom.

Digital Equity: Paving the Way for Sustainable Journalism

Data stands as the backbone of informed decision-making in today's digital realm. Gathering the right data is the first hurdle. With tech platforms influencing the visibility and viability of content, there's an undeniable need for a coordinated approach to collecting and utilizing data. Such data can aid in understanding audience behaviors, advertising strategies, and the effectiveness of content distribution methods. Ensuring a fair compensation model, bolstered by clear data-driven strategies, can pave the way for the sustainability of quality journalism. In that regard, via a written statement, Michael Markovitz presented the conference held in July, “Big Tech and Journalism - Building a Sustainable Future for the Global South”, which culminated in the adoption of the Principles for Fair Compensation, intended as a framework for the design of policy mechanisms that address media sustainability through competition or regulatory approaches.

Prue Clarke spotlighted the disparity faced by countries like Liberia in the digital age. The challenges faced by media in such countries, from a lack of digital monetization knowledge to reliance on government support, are evident. Juliet Nanfuka offered a parallel from Uganda, emphasizing the hesitancy in the media's approach to AI, despite the challenges they face. Both Clarke and Nanfuka highlighted the struggles and gaps in media adaptation and digital training in lower-income countries.

Daniel O’Maley emphasized the transformative power of data sharing, stressing the importance of understanding which data is essential for different sectors. He talked about the implications of data transparency policies, especially considering their global impact.

While Nanfuka highlighted the challenges of integrating new technology into media spaces that are already grappling with other significant issues, Julius Endert dived into the transformative power of AI in media, emphasizing the importance of AI literacy. Both Endert and Nanfuka conveyed the urgency for media sectors, especially in developing countries, to understand and adapt to AI's growing influence.

Regional Focus vs. Global Perspective:

During the Members’ Spotlight, Sabhanaz Rashid Diya offered insight into the mission of the Tech Global Institute to bridge the equity gap between the global South and dominant tech platforms. Ramiro Alvarez provided a deep dive into the media landscape of Latin America, emphasizing the influence of state-driven media and the need for more open dialogue. This regional focus complements the broader global themes discussed, reinforcing the idea that global digital governance challenges often manifest in unique regional ways. Despite the fact that the media landscape varies by region and country, there are common threads of challenge and opportunity related to digital governance, sustainability, and the integration of new technologies.

Conclusion and next steps

Overall, the session emphasized the value of global collaboration grounded in local insights. It's not just about dissecting EU or US policies, but also diving deep into what's happening in places like Uganda and Liberia. The local challenges faced in these regions have global implications, reinforcing the need for an inclusive approach in policy discussions.

While the EU and US might often take center stage due to their significant influence, the collective effort should focus on ensuring that voices from all corners of the world are heard: Global strategies must be informed by local knowledge and experiences. 

In the coming months, DC-Sustainability members will meet again to shape the priorities for the year ahead, especially when it comes to envisioning AI governance and its impact in the media. The goal is to ensure that as the world of journalism evolves, it remains rooted in authenticity, inclusivity, and the pursuit of truth.

IGF 2023 Networking Session #78 Governing Tech for Peace: a Multistakeholder Approach

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

While some perceive technology as a threat to peace (cyber vulnerabilities, privacy and discrimination issues, disinformation and polarisation on digital platforms, trust in information and data undermined by AI), digital technology should also be seen as a peace-enhancing factor, if properly governed by avoiding "tech-solutionism" and adopting an inclusive, multistakeholder approach to implementing PeaceTech initiatives.


We need to move from "coercive peace" (tech for security and stability) to "persuasive peace" (tech and data to promote social cohesion). We need human rights due diligence for the procurement process of tech solutions: tech that violates human rights, dignity and freedom should not be called PeaceTech. To enhance social trust, we should regulate processes rather than content, so that the Internet can become truly transparent and accountable.

Calls to Action

To bring together different stakeholders (governments, tech-companies, NGOs, academia) to discuss the potentials and challenges of PeaceTech, define key areas of intervention, and implement collaborative projects to enhance peace and social cohesion via the safe and responsible use of frontier technologies.

Session Report

The networking session started with a round of introductions. The participants came from different sectors, but their common thread was using technology for peace and sustainable development. At the beginning of the discussion, the participants tackled the definition of peace, an important first step in determining the role of technology in enhancing it. Human rights were mentioned as a necessary but not sufficient condition for peace, along with other criteria such as the positive definition of peace, according to which peace implies attitudes, institutions and structures that create and sustain peaceful societies, rather than the mere absence of violence.

When it comes to the relationship between technology and peace, the participants identified both positive and negative impacts of tech on peace. As PeaceTech advocates using technology as a tool to achieve peace, associating PeaceTech with any technology that violates human rights and dignity or endangers people’s freedom should be avoided. In line with that, the participants commented on the need to move from coercive peace, which entails using tech centrally to obtain security and stability, to persuasive peace, in which technology and the collected data can be used to advance peace and social cohesion.

Building trust and creating a safer space without compromising freedom of expression was identified as another crucial mission. Bearing in mind people’s tendency to behave responsibly when they are held accountable for their words and actions, the participants mentioned the need to raise transparency and accountability in the digital environment. An example that came up was the social scoring system in China, relevant both to the trust-building issue and to defining the areas that PeaceTech includes.
The participants agreed on the importance of bringing together stakeholders from various fields, such as governments, tech-companies, NGOs and academia, as well as from different parts of the world and perspectives. Through this multistakeholder approach, the actors would discuss the potentials and challenges of PeaceTech, areas of possible intervention and implement collaborative projects that would be a contribution to using technology safely and responsibly to improve peace and social cohesion.

IGF 2023 Day 0 Event #189 Women IGF Summit

Updated:
AI & Emerging Technologies
Calls to Action

Women IGF should study the costs of women’s exclusion from digital leadership and digital spaces, and the cost of women’s lack of internet access


Women IGF should be recognized as an NRI, and be inclusive and representative of global issues.

Session Report

A call to action is to promote Women IGF globally: to identify and work with ambassadors or champions of internet governance to push for the national actions required to empower women, give them the opportunity to participate as leaders in Internet governance and policy formulation, and gain recognition as an NRI at the global IGF level. Secondly, to support the inclusion of Feminist Principles in the Global Digital Compact.

IGF 2023 Open Forum #98 CGI.br’s Collection on Internet Governance: 5 years later

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

Libraries play an important role in providing access to knowledge. CGI.br has been working on implementing a library and many outreach initiatives that can inspire other organizations to make information on Internet governance more accessible.


Controlled vocabularies are essential resources for organizing and retrieving information and data on Internet governance. In this regard, artificial intelligence and machine learning tools can be used to automate the construction of taxonomies.

Calls to Action

The IGF should provide a space for experts and stakeholders to share insights, best practices, and challenges related to building and maintaining collections on Internet governance.


Stakeholders need to cooperate more on building collections on Internet Governance. One essential area of collaboration is the development of taxonomies and vocabularies specific to Internet Governance.

Session Report

The Open Forum "CGI.br’s Collection on Internet Governance: 5 years later" was presented at IGF-2023 in order to continue the discussion that began in 2017 with the Open Forum titled "Memory and documentation in Internet Governance: The challenge of building collections". It had an audience of 12 people and saw five interactions with the audience.

The moderator Vinicius W.O. Santos provided context by explaining that the earlier open forum was co-organized with the Internet Corporation for Assigned Names and Numbers (ICANN) and focused on documentation and preserving institutional information. Additionally, the Brazilian Internet Steering Committee (CGI.br) team shared its initial efforts to create a specialized library in Internet governance.

The Speaker Jean Carlos Ferreira reported on the main activities and progress made since the last Open Forum about the CGI.br collection. He highlighted actions taken within the Brazilian Internet Steering Committee (CGI.br) and The Brazilian Network Information Center (NIC.br) related to producing and sharing information on Internet governance in Brazil.

The presentation mentioned the wide range of materials produced by CGI.br and NIC.br, including books, guides, reports, CGI.br meeting minutes, resolutions, technical notes, and other promotional materials. 

Ferreira described the main pillars of CGI.br's collection:  1) Documentation of CGI.br activities; 2) Publications; and 3) Specialized Physical Library. The project also includes the development of a digital repository that will include all materials from the Brazilian IGF.

Regarding the initiative's challenges, the presentation raised the need to build a multilingual Internet Governance vocabulary for standardized document indexing. Another highlighted challenge referred to implementing and maintaining robust, though complex, open-source tools that facilitate integration with other collections and collaboration with other organizations.

The moderator emphasized the importance of the session, as information organization and dissemination in the Internet Governance area are seldom discussed but vital.

Comments from the audience pointed out the significance of CGI.br's collections as a fundamental role in strengthening the community and knowledge development on Internet Governance in Brazil. One participant drew attention to artificial intelligence and machine learning in document indexing and designing taxonomies. Another participant also mentioned the possibility of using "language models" for term extraction to build a taxonomy. A third participant inquired about lessons learned during the project and tips for institutions interested in implementing similar initiatives. 

The speaker and the audience discussed the need to build an Internet Governance taxonomy for better information organization. Developing this taxonomy is a challenge faced by the Internet Governance community due to the diversity of topics and specializations within this field. Therefore, it is essential to bring together the librarian community, the Internet technical community, and other stakeholders to discuss and create an adequate vocabulary and taxonomy for the Internet Governance area.
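As an illustration of the term-extraction approach raised by the audience, the sketch below derives candidate vocabulary terms from a toy corpus by counting recurring unigrams and bigrams. The corpus, stopword list, and frequency threshold are purely illustrative assumptions, not part of CGI.br's actual tooling, which was not specified in the session.

```python
from collections import Counter
import re

# Illustrative mini-corpus standing in for Internet governance documents.
corpus = [
    "Internet governance requires multistakeholder cooperation on data protection.",
    "Data protection and privacy are central topics of Internet governance.",
    "Taxonomies help libraries index Internet governance publications.",
]

# Tiny illustrative stoplist; a real system would use a curated multilingual one.
STOPWORDS = {"and", "are", "of", "on", "the", "help"}

def candidate_terms(texts, max_len=2):
    """Count unigram and bigram candidates for a controlled vocabulary."""
    counts = Counter()
    for text in texts:
        words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
        for n in range(1, max_len + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
    return counts

terms = candidate_terms(corpus)
# Terms recurring across documents are candidates for the taxonomy.
print([t for t, c in terms.most_common() if c > 1])
```

A production pipeline would layer librarian review on top of any such automated extraction, since a controlled vocabulary must be curated, not merely counted.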

The session featured comments from Mr. Winston Roberts, representing the International Federation of Library Associations (IFLA), who mentioned that IFLA is involved in the Internet Governance process, participating as one of the multistakeholder communities. He pointed out the critical role that Internet Governance plays in delivering library services and disseminating information. He emphasized the importance of collaboration and cooperation between libraries and the Internet technical community. He discussed the update of IFLA's Internet Manifesto, encouraging participants to reach out to IFLA and its regional representations in Latin America and the Caribbean for more information.

In conclusion, the open forum fostered an important discussion on the need for collaboration and dialogue within the Internet Governance community to create a taxonomy that addresses Internet Governance topics. It underscored the importance of CGI.br's collections in strengthening knowledge development within the Internet Governance community.

IGF 2023 Town Hall #162 How prevent external interferences to EU Election 2024 - v.2

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

An efficient fight against disinformation at election time requires a framework for broad cooperation among the different stakeholders, continuous monitoring of the phenomenon, and rules for transparency in the different processes. A “big stick” against those who do not want to play by the rules is also very useful: in case of non-respect of the rules, the European Commission can issue warning letters and impose fines of up to 6% of global turnover.


Concerning the coming European elections, EDMO set up a specific task force which has three areas of activity: - the past, i.e. reviewing old electoral campaigns to identify the different strategies - the present, i.e. an evaluation of the main risks, country by country - the future, i.e. how to better prepare the network for the coming campaign.

Calls to Action

Under the guidance of the Commission, EDMO has created a task-force covering all EU countries and all EU languages with the involvement of a broad set of stakeholders to carry out a risk assessment, monitor and report on mis/disinformation trends, and increase cooperation between the stakeholders.


One of the new challenges is generative artificial intelligence, which can amplify intentional disinformation campaigns: a human-centric approach needs to clearly separate human from artificial output. Therefore, AI output will have neither copyright nor free speech rights, and will need to be clearly identified (e.g. through watermarking).

Session Report

Esteve Sanz in Kyoto and Albin Birger from Brussels, the representatives of the European Commission, stressed that disinformation is false or misleading content that is spread with an intention to deceive or to secure economic or political gain, and which may cause public harm. It is not the Commission’s aim to create a ministry of truth, but to make the online environment more transparent and its actors accountable, to empower citizens, and to foster open democratic debate. One of the new challenges is generative artificial intelligence, which can amplify disinformation campaigns: a human-centric approach needs to clearly separate human from artificial output. Therefore, AI output will have neither copyright nor free speech rights, and should be clearly identifiable (identifying an effective way to do so, for instance through watermarking, remains a challenge).

 

They also presented the articulation of the different EU’s initiatives (regulatory and other) and institutional set up to fight against disinformation:

  • under DG CNECT, regulations have been developed and are now in place at EU level (the Digital Services Act and the Digital Markets Act); the Code of Practice on Disinformation was strengthened in 2022, committing industry to self-regulatory standards to combat disinformation; the Code of Practice is intended to be transformed into a Code of Conduct under the DSA (constituting a risk-mitigation tool for Very Large Online Platforms, while remaining voluntary); and the European Digital Media Observatory (EDMO) has been set up to support the creation of a cross-border and multidisciplinary community of independent fact-checkers and academic researchers.
  • under the European External Action Service, different strands of work aim to foster international cooperation, increase situational awareness, and coordinate responses to Foreign Information Manipulation & Interference (FIMI), including with partner countries, e.g. a Rapid Alert System between EU Member States’ administrations, the creation of the EUvsDisinfo database, and a FIMI “toolbox”.
  • DG COMM provides internal Commission coordination and factual communication on EU policies, through monitoring and analysis of related areas, with an accent on debunking false narratives (e.g. climate change disinformation) and on the promotion of media literacy initiatives.

 

Specific situations also call for targeted and coordinated actions, e.g. the imposition of EU sanctions on state-owned outlets, suspending RT’s and Sputnik’s broadcasting in the EU.

In view of the coming 2024 elections, specific initiatives have been put in place to further cooperation between the different actors:

- within the framework of the Code of Practice there is a Working Group on Elections, with a focus on the activities of the signatories and the facilitation of exchange of information between them

- under the guidance of the Commission, EDMO also has created a task-force covering all EU countries and all EU languages with the involvement of a broad set of stakeholders to carry out a risk assessment, monitor and report on mis/disinformation trends, and increase cooperation between the stakeholders.

Stanislav Matejka, representative of ERGA, explained that the European Regulators Group for Audiovisual Media Services functions as an expert body, tasked among other things with providing the Commission with an essential evaluation of the local implementation of the Code of Conduct and of local compliance with transparency obligations. It coordinates the work of the national authorities to monitor the effective implementation of European policies in these matters (e.g. access to data), and maintains the repository of political adverts.

Paula Gori, Secretary General of EDMO, stressed the necessity of a multidisciplinary approach to the phenomenon of disinformation, which requires expertise in numerous fields, from emotion analysis to computing. In that sense, EDMO should be considered a platform offering tools to experts from different fields, from fact-checkers to academic researchers, without forgetting the fundamental promotion of media literacy.

Giovanni Zagni, representative of a member of the fact-checkers’ network and chair of the EDMO task force on elections, explained how their work has evolved beyond the sole analysis of content (which nevertheless remains an important part). For example, they set up a specific task force on Ukraine, which led to 10 recommendations to policy makers, and they produce a monthly brief on disinformation tactics.

Concerning the coming European elections, EDMO set up a specific task force which has three areas of activity:

- the past, i.e. reviewing old electoral campaigns to identify the different strategies

- the present, i.e. an evaluation of the main risks, country by country

- the future, i.e. how to better prepare the network for the coming campaign.

Caroline Greer, representative for TikTok, expressed the support of the company for fact-checking.

Concerning the coming elections, TikTok has a global election integrity program, with a template that is applied to local circumstances. This includes:

- specific election policies

- community guidelines

- a full prohibition of political advertising (at all times)

- a restriction of certain political activities such as funding campaigns

- local “election hubs” that inform citizens about, for example, where to vote, etc.

Eril Lambert, from Eurovisioni in Rome, expressed appreciation for the role attributed by the European Union to civil society in the mechanisms to fight disinformation, and raised several questions to the representatives of the EU and of the platforms. In response to different questions online and in the room, it was clarified that the voluntary Code of Conduct is only one tool to demonstrate compliance with European rules. The objective is to bring disinformation to light through transparency: the Commission often launches investigations, and the DSA has now added an auditing layer to the instruments at its disposal. Takedowns by platforms, together with their motivation and any appeal, have to be reported to a Commission database.

In case of non-respect of the rules, the Commission has several means at its disposal, from warning letters to (large) fines of up to 6% of global turnover.

It was also indicated that it is important to improve collaboration between platforms, authorities, and institutions such as EDMO, e.g. to facilitate researchers’ access to platform data.

Transparency of recommender systems is also an issue. TikTok, for example, allows users to reset their recommendations to avoid remaining locked in a filter bubble, or to refuse a personalized feed.

The conclusion was that an efficient fight against disinformation requires a framework for broad cooperation among the different stakeholders, continuous monitoring of the phenomenon, and rules for transparency in the different processes.

A “big stick” against those who do not want to play by the rules is also very useful.

IGF 2023 Networking Session #172 Networking for Information Integrity in Asia and Globally

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

The process of negotiating internet governance issues is opaque and confusing to ordinary people, particularly in less developed, global majority contexts. There needs to be a multistakeholder approach (public sector, private sector, media, academia, civil society, tech companies) to address internet governance specifically focusing on information integrity issues.


Civil society engagement with the private sector has become more difficult as tech companies disinvest in trust and safety teams. While certain platforms such as TikTok have become more responsive, for example to physical threats of violence or violent images, others such as X have been challenging to engage.

Calls to Action

All stakeholders should work with and pressure private sector technology companies to have clear and robust escalation paths that are not based on personal relationships or single employees committing to action.


Civil society should form regional networks so that similar closing contexts can share resources and strategies. Through networks, CSOs should look to share information to get a more holistic view of current data sets, engagement experiences, and historical data around closing societies and other contexts.

Session Report

Major themes:

This session brought together stakeholders from civil society across Asia and Globally to discuss the challenges facing CSOs when trying to build a resilient information space, especially in closed or closing societies. NDI discussed its Info/tegrity network and other means of connecting with groups across civil society to develop capacity to address information integrity issues and contribute to internet governance discussions. Experts from Pakistan and Taiwan shared the challenges associated with engaging social media platforms to gather data for critical research, support an open, democratic and free information environment during elections, and escalate cases of online harassment and abuse. The session then split into four break-out groups to share both existing challenges and potential solutions across the major themes on this issue.

Group 1: Challenges of working online in closed societies

  • This group discussed the feasibility of creating a global network of CSOs for groups or individuals working in closed societies. They agreed that while a network of support is an important component of successfully navigating a closed space as a CSO, regional-level networks make more sense than global networks. Closed societies face unique challenges within their larger classification and allowing convergence at the regional level would allow groups to take a narrower, deeper approach to networking than a broad, shallow global network would achieve. They cited current work in Asia around protecting journalists in closed societies as an existing model of their proposal.

Group 2: Social media data access for research

  • This group discussed current methods of monitoring social media platform information and what resources would make their work easier. They focused on ways CSOs can support each other’s work in addition to talking about recent API changes that have made research more difficult. 
  • They highlighted that to continue the important work of researching the information landscape using social media data, they recommend that CSOs build regional networks to share their experiences across similar contexts and share their current data sets and historical data sets to bolster the total amount of data and enrich everyone’s data sources. 

Group 3: Coordination with technology platforms around trust and safety concerns

  • This group discussed the varying roles specific social media platforms play across Asia and the world. They also emphasized that platforms’ gutting of trust and safety teams across the board has resulted in delayed or absent responses when online harm is reported, and in an uptick in attacks on activists and human rights defenders.
  • Their main point was that while programs like Meta’s Trusted Partner Program are effective in providing an escalation path, it is not equitable and relies on personal relationships or individual tech platform employees prioritizing trust and safety. A system fix is needed, especially with the 2024 elections around the corner. The recommendation from this group is that all stakeholders should work with and pressure private sector technology companies to have clear and robust escalation paths that are not based on personal relationships or single employees committing to action.

Group 4: Internet governance for information integrity

  • This group recommended several strategies to improve coordination at the global level around local, national, and/or regional Internet governance and policy best practices. These include adopting a multistakeholder (public sector, private sector, media, academia, civil society, tech companies) approach to Internet governance to make the process more accessible, prioritizing tools that enable access for people with disabilities and other marginalized groups, and developing regional and local strategies for Internet governance as well as a global perspective.
  • They also suggested that a human rights approach can be incorporated into technology platform policy by applying the multistakeholder framework to implement better interaction, information sharing and policies with the private sector. This would have impacts such as more robust privacy and data protection procedures, simpler language for communicating platform policies (including expanding the languages available), and quantifiable measures for tracking online harms.
IGF 2023 Lightning Talk #37 Open Data Evaluation Model in Brazilian Governmental Portals

Updated:
Data Governance & Trust
Key Takeaways:

Takeaway 1: Tools for automated evaluation of open data portals and open data best practices can help to improve open data quality.

Takeaway 2: Brazil has begun implementing such tools.

Calls to Action

Call to action 1: Governments around the world should follow Brazil's example and implement evaluation models.

Call to action 2: Civil society organizations involved with open data should become aware of the existence and workings of such evaluation tools.

Session Report

Report on Lightning Talk #37: "Open Data Evaluation Model in Brazilian Governmental Portals" 

Introduction

The lightning talk "Open Data Evaluation Model in Brazilian Governmental Portals" was presented at the Internet Governance Forum, shedding light on the critical issue of data standardization and the efforts made by the Brazilian Network Information Center (NIC.br) to address this challenge. The talk emphasized the importance of open data quality, presented an automated evaluation model under development for the Brazilian Open Data Governmental portals, and issued two key takeaways and call-to-action messages.

Key Takeaway Messages

The presentation by the speaker highlighted two primary takeaway messages:

1. Tools for Automated Evaluation of Open Data Portals Enhance Data Quality

The first crucial takeaway from the talk was the significance of automated evaluation tools in enhancing the quality of open data. Open data portals often lack standardized information structures, which hampers efficient data access and utilization. The speaker stressed the need for standardized principles and best practices for publishing open data. Tools designed to evaluate open data portals and ensure adherence to these principles can play a vital role in improving the overall quality of open data.
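
As a minimal sketch of what such a tool might check, a script could score each dataset's metadata against a few widely cited best practices, such as an explicit license, a machine-readable format, and an update timestamp. Note that the actual NIC.br evaluation model and its criteria are not detailed in this report, so the metadata fields and scoring below are hypothetical:

```python
# Hypothetical sketch of an automated open-data portal check; the real
# NIC.br model's criteria are not described in this report.

MACHINE_READABLE = {"csv", "json", "xml", "parquet"}

def score_dataset(meta: dict) -> tuple[int, list[str]]:
    """Score one dataset record against three common best practices.

    Returns (points earned out of 3, list of failed checks).
    """
    failures = []
    if not meta.get("license"):
        failures.append("missing license")
    if meta.get("format", "").lower() not in MACHINE_READABLE:
        failures.append("not machine-readable")
    if not meta.get("last_updated"):
        failures.append("no update timestamp")
    return 3 - len(failures), failures

def portal_score(datasets: list[dict]) -> float:
    """Average score across all datasets, as a fraction of the maximum."""
    if not datasets:
        return 0.0
    earned = sum(score_dataset(d)[0] for d in datasets)
    return earned / (3 * len(datasets))

datasets = [
    {"license": "CC-BY", "format": "CSV", "last_updated": "2023-09-01"},
    {"license": "", "format": "PDF", "last_updated": ""},
]
print(portal_score(datasets))  # 0.5: first record passes all checks, second none
```

A real evaluation model would cover many more criteria (metadata completeness, persistent URLs, standard vocabularies), but even a simple aggregate like this lets a portal track its quality over time.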

2. Brazil's Implementation of Evaluation Tools

The second takeaway message revealed that Brazil has initiated the implementation of such tools for evaluating and improving open data quality. The Brazilian government has recognized the importance of standardization and best practices in data publication and is taking proactive steps to address these issues.

Call-to-Action Messages

The talk concluded with two call-to-action messages aimed at governments and civil society:

1. Governments Worldwide Should Emulate Brazil's Example

The first call to action implores governments across the globe to follow Brazil's lead and implement open data evaluation models. Given the benefits of standardization and best practices in data publication, the speaker urges governments to prioritize developing and deploying tools for automated evaluation in their own open data initiatives. This step would improve data governance and lead to more efficient data sharing and utilization.

2. Raise Awareness among Civil Society

The second call to action aims at civil society organizations and advocates involved in open data. It encourages these stakeholders to become aware of the existence and workings of open data evaluation tools. By increasing awareness and understanding of these tools, civil society can actively participate in the process, supporting the implementation of standardized data practices and advocating for open data quality in their respective regions.

Conclusion

The lightning talk on "Open Data Evaluation Model in Brazilian Governmental Portals" at the Internet Governance Forum highlighted the critical need for standardized data publication practices and the role of automated evaluation tools in achieving this goal. The Brazilian Network Information Center's proactive efforts in implementing such tools serve as an inspiring example for other nations. The call-to-action messages emphasize the importance of global adoption and civil society involvement in furthering the cause of open data quality and standardization.

In an age where data drives innovation and policy decisions, standardization and evaluation tools ensure that open data fulfills its potential as a valuable resource for governments, organizations, and individuals worldwide. The lessons from this talk must be acknowledged and acted upon, setting a higher standard for open data globally.

IGF 2023 Open Forum #58 Child online safety: Industry engagement and regulation

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

Online child sexual exploitation is a grave violation of human and child rights. Threats are continuously escalating and changing.


Self-regulatory measures are broadly perceived as inadequate. Significant regulatory and cultural changes are on the horizon, demanding greater responsibility and action from businesses.

Calls to Action

Governments and companies must remain vigilant and responsive to the ever-evolving threat landscape. Continued exchange of learning and experience in collaborative and co-regulatory models across different jurisdictions is necessary.


Companies should embed online child sexual abuse and exploitation into broader human rights due diligence, including impact assessments.

Session Report

IGF 2023 Open Forum #58: Child online safety – Industry engagement and regulation


Key Takeaways

1. Online child sexual exploitation is a grave violation of human and child rights. Threats are continuously escalating and changing.

2. Self-regulatory measures are broadly perceived as inadequate. Significant regulatory and cultural changes are on the horizon, demanding greater responsibility and action from businesses.

Call to Action

1. Governments and companies must remain vigilant and responsive to the ever-evolving threat landscape. Continued exchange of learning and experience in collaborative and co-regulatory models across different jurisdictions is necessary.

2. Companies should embed online child sexual abuse and exploitation into broader human rights due diligence, including impact assessments.

Context

This hybrid session, facilitated in-person by Ms Afrooz Kaviani Johnson and online by Ms Josianne Galea Baron, explored different models of industry engagement and regulation to address online child sexual abuse and exploitation (CSEA). 

Panel discussion

Ms Julie Inman Grant, eSafety Commissioner, Australia, discussed the suite of regulatory tools her office uses to combat online CSEA. Key among Australia’s tools is its complaints schemes, which facilitate the removal of harmful content to prevent re-traumatization and allow trend analysis to influence systemic change. Additionally, the Basic Online Safety Expectations, which detail the steps that social media and other online service providers must take to keep Australians safe, enable the Commissioner to demand transparency, complete with penalties. Australia’s tools also include mandatory codes for various sections of the online industry in relation to illegal and restricted content, including CSAM. The Commissioner emphasized that even the largest companies are not doing enough and stressed the need for global pressure on companies to enhance safety measures. ‘Safety by Design’ was highlighted as a fundamental systemic initiative to support industry to better protect and safeguard citizens online.

Mr Tatsuya Suzuki, Director, Child Safety Division of the Children and Families Agency, Japan, presented how the newly formed Children and Families Agency is working with the private sector to combat online CSEA. The national framework acknowledges the essential role of private sector voluntary actions to ensure children’s safety online. It respects the balance between eradicating harmful content and ensuring freedom of expression. The Agency’s strategies, detailed in the 2022 National Plan for the Prevention of Sex Crimes against Children, involve public-private collaborations. The Plan for Measures Concerning Child Sexual Exploitation 2022 outlines these government-led actions. In July 2023, a prevention package was presented to the Cabinet Office, emphasizing joint efforts with relevant ministries to address child exploitation. 

Mr Toshiaki Tateishi, Japan Internet Provider Association/ Internet Contents Safety Association, discussed Japan’s private sector initiatives against online CSEA. The Internet Content Safety Association (ICSA) compiles a list of websites known for child abuse material based on data from the National Police Agency and the Internet Hotline Centre. An independent committee reviews this data, and upon confirmation, the ICSA distributes a blocking list to ISPs and mobile network operators, preventing access to these sites. The Safer Internet Association (SIA) contributes by operating a hotline for reporting illegal content, conducting research, advising on policy, and leading educational initiatives. These associations coordinate with providers, both domestic and international, to reduce and remove illegal and harmful content.

Dr Albert Antwi-Boasiako, Director-General, Cyber Security Authority Republic of Ghana, emphasized Ghana’s approach to championing industry responsibility and innovation. Recognizing that self-regulation is insufficient, Ghana advocates for ‘collaborative regulation’ rather than traditional top-down mandates. This strategy acknowledges that companies often overlook the risks children face online. Ghana’s Cybersecurity Act mandates industry action to protect children, encompassing content blocking, removal, and filtering. This law requires further specification through a legislative instrument, which is currently being crafted in consultation with the private sector and civil society. The Act includes administrative and criminal penalties, crucial for enforcement in developing nations, and allows for fines to fund the regulatory institutions. Dr Antwi-Boasiako noted that success hinges on widespread awareness and understanding of the issues at stake.  

Mr Dunstan Allison-Hope, Vice President, Human Rights, BSR (Business for Social Responsibility) highlighted the critical role of human rights due diligence (HRDD), including impact assessments, in combating online CSEA. HRDD based on the UN Guiding Principles on Business and Human Rights (UNGPs) can form a key part of a company’s obligations to address online CSEA. The benefits of this approach include a comprehensive review of human rights impacts, special attention to vulnerable groups like children, and a structured framework for action, tailored to each company’s position in the technology stack. With regulations now echoing the UNGPs, voluntary measures are shifting to mandatory. He urged companies to embed children’s rights into their broader HRDD processes. While this significant regulatory change is especially prominent in Europe, he encouraged companies to take a global approach to achieve the desired child rights outcomes.

Interactive discussion

The discussion started on balancing children’s right to protection with their right to access information, especially age-appropriate and accurate sexual and reproductive health information. The conversation took cues from the UN Committee on the Rights of the Child, General comment No. 25 (2021). Although the internet was not built for children, they are significant users, leading to a call for both minimizing harm and amplifying benefits. Australia’s consultations on approaches to age assurance spotlighted this need, pushing companies to look beyond age-gating. A human rights-based approach was emphasized to navigate tensions between competing rights. Strategies like DNS blocking alone were deemed inadequate; holistic approaches, such as Australia’s ‘3Ps’ model of Prevention, Protection, and Proactive systemic change, are crucial. One significant challenge lies in raising awareness and promoting help-seeking behaviours among children and young people.

Conclusion

Both regulators and companies, along with civil society, are currently navigating extremely challenging dilemmas. Whether through regulation, self-regulation, or ‘collaborative regulation’, there is a significant shift happening in the regulatory landscape. This shift presents an opportunity to firmly integrate the issue of online CSEA into these evolving processes.

Further resources

United Nations Children’s Fund (2022) ‘Legislating for the digital age: Global guide on improving legislative frameworks to protect children from online sexual exploitation and abuse’ UNICEF, New York.

 

IGF 2023 YCIG Advancing Youth Participation in IG: results from case study

Updated:
Key Takeaways:

Value of Inclusivity: The discussion also emphasized the importance of not just engaging youth who are already part of the community, but also newcomers and the benefits of involving a wider and more diverse youth population in shaping these sessions and discussions.


Collaborative Efforts: Collaboration and partnerships seem to be key themes. The discussion highlights collaborative efforts across various groups, such as the Internet Society Youth Standing Group and the Youth Coalition on Internet Governance.

Calls to Action

The Role of Youth: While youth are at the decision table, there is a need to move beyond this and consider them as co-collaborators and co-creators in Internet governance discussions.


Growing Youth Engagement: The conversation underscored a growing trend where young people are becoming increasingly involved in these discussions. African governments, in particular, are beginning to engage more with the youth, but there is a call for deeper involvement beyond just Day 0 events.

Session Report

The session captured a discussion related to Internet Governance Forums (IGFs) and youth participation, specifically in different regions like Africa and Latin America. Following are some insights and takeaways:

1. Diverse Regional Perspectives: The session presented various regional perspectives, from Latin America to Africa, on the state of youth engagement in Internet Governance.

2. Growing Youth Engagement: The conversation underscored a growing trend where young people are becoming increasingly involved in these discussions. African governments, in particular, are beginning to engage more with the youth, but there is a call for deeper involvement beyond just Day 0 events.

3. Collaborative Efforts: Collaboration and partnerships seem to be key themes. The discussion highlights collaborative efforts across various groups, such as the Internet Society Youth Standing Group and the Youth Coalition on Internet Governance.

4. Case Studies: Various case studies from different regions, such as Latin America and Africa, were discussed to illustrate the state of youth engagement in these areas. For example, how Youth IGF operates differently across various regions due to cultural, logistical, and governmental factors.

5. Challenges and Solutions: Challenges such as the need for a common reporting tool and the disparity between youth discussions and main session topics were brought up. Solutions like creating a common platform for reporting were suggested.

6. Youth-Led Initiatives: There are emerging youth-led IGF initiatives, such as the Youth IGF in Ethiopia. These initiatives highlight the growing momentum and importance of youth voices in Internet Governance discussions.

7. The Role of Youth: While youth are at the decision table, there is a need to move beyond this and consider them as co-collaborators and co-creators in Internet governance discussions.

8. Value of Inclusivity: The discussion also emphasized the importance of not just engaging youth who are already part of the community, but also newcomers and the benefits of involving a wider and more diverse youth population in shaping these sessions and discussions.

In summary, the session provided a glimpse into the dynamic and evolving role of youth in Internet Governance across different regions. There's a clear call for deeper youth involvement, collaborative efforts, and the creation of systems that ensure their voices are effectively incorporated into broader discussions and decisions.

IGF 2023 DC-BAS A Maturity Model to Support Trust in Blockchain Solutions

Updated:
AI & Emerging Technologies
Key Takeaways:
Benefits of a maturity model: provides a common framework for assessing blockchains, supports trust in blockchain solutions, helps organizations identify areas for improvement, and facilitates communication and collaboration between stakeholders. Use cases: digital identity, banking and finance, digital assets, voting/elections, legal, supply chain, and others (e.g., healthcare, education). There is interest from parliamentarians, non-governmental organizations, and academia. Reported benefits from those who have conducted assessments of blockchain solutions based on a set of shared criteria: ensuring that blockchain solutions meet the needs of all stakeholders, reducing the risk of selecting inappropriate or inadequate blockchain solutions for specific use cases, and promoting the adoption of best practices in blockchain design and implementation. Rationale for using a maturity model: it provides a structured, objective, repeatable, and technologically agnostic approach to assessing blockchain solutions.
Calls to Action

Provide more details about opportunities for training and awareness on the Blockchain Maturity Model and the corresponding assessment methodology. Share lessons learned and best practices. Involve key stakeholders and interested new parties in the periodic meetings of the IGF-BAS, the collection of input/requirements/suggestions from representatives of multi-stakeholder groups, the development and validation of sector-specific supplements, and the simulation of the assessments of blockchains.

Session Report

Dynamic Coalition on Blockchain Assurance and Standardization

 

Sessional Report: The IGF-Blockchain Assurance & Standardization, Panel Discussion on “A Maturity Model to Support Trust in Blockchain Solutions”.

Date of Session: 18 October 2023

Kyoto Conference Center, Room: WS 10 – Room I

Online Link: https://intgovforum.zoom.us/meeting/register/tJEucuihrT4pE9VXFZ6GWP2gQNOjl19VqgLQ

 

Introduction

The Dynamic Coalition on Blockchain Assurance and Standardization (IGF-DC-BAS), was established to connect, communicate, and collaborate with government leaders and stakeholders to use blockchain technology to improve public services.

More specifically, with the support of the Government Blockchain Association (GBA), the IGF-DC-BAS established a working group for International Organizations & Standards, supporting the UN-SG Global Digital Compact goals, including:

  • Ensure that everyone has access to the digital world.
  • Promote the use of digital technologies to achieve the Sustainable Development Goals.
  • Protect human rights and fundamental freedoms in the digital age.
  • Build trust in the digital world.

Outcome of the Session

Takeaways:

  • Benefits of a maturity model:
    • Provides a common framework for assessing blockchains.
    • Supports trust in blockchain solutions.
    • Helps organizations identify areas for improvement.
    • Facilitates communication and collaboration between stakeholders.

 

  • Use cases:
    • Digital identity
    • Banking & Finance
    • Digital Assets
    • Voting/elections
    • Legal
    • Supply chain
    • Other (e.g., healthcare, education)

 

  • Interest from parliamentarians, non-governmental organizations, and academia:
    • Demonstrate the growing awareness on the importance of blockchain assessments.
    • Create opportunities for collaboration and knowledge sharing.

 

  • Reported benefits from those who have conducted assessments of blockchain solutions based on a set of shared criteria:
    • Ensure that blockchain solutions meet the needs of all stakeholders.
    • Reduce the risk of selecting inappropriate or inadequate blockchain solutions for their specific use cases.
    • Promote the adoption of best practices in blockchain design and implementation.

 

  • Rationale for using a maturity model:
    • A maturity model provides a structured, objective, repeatable, and technologically agnostic approach to assess blockchain solutions.
    • It helps organizations identify their current state of maturity and track their progress over time.
    • It can be used to benchmark blockchain solutions.
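
The "structured, objective, repeatable" character of a maturity assessment can be sketched in a few lines. This is an illustration only: the actual Blockchain Maturity Model's requirement areas, scoring scales, and level rules are not specified in this report, so the names and the "weakest link" rule below are assumptions:

```python
# Illustrative only: requirement areas, the 0-5 scale, and the level
# rule are assumed, not taken from the actual Blockchain Maturity Model.

def maturity_level(scores: dict[str, int]) -> int:
    """Overall maturity on a 0-5 scale, capped by the weakest
    requirement area (a common maturity-model convention)."""
    return min(scores.values())

# A hypothetical assessment of one blockchain solution:
assessment = {
    "governance": 4,
    "security": 3,
    "data_privacy": 5,
    "interoperability": 2,
}
print(maturity_level(assessment))  # 2: interoperability is the weakest area
```

A rule like this makes assessments repeatable and technologically agnostic: the same criteria apply regardless of the underlying platform, and re-running the assessment over time shows an organization's progress.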

 

Plan of action:

  • Provide more details about opportunities for training and awareness on the Blockchain Maturity Model and the corresponding assessment methodology.
  • Share lessons learned and best practices.
  • Involve key stakeholders and interested new parties in:
    • Periodic meetings of the IGF-BAS.
    • Collection of input/requirements/suggestions from representatives of multi-stakeholder groups.
    • Development and validation of sector-specific supplements.
    • Simulation of the assessments of blockchains.

Additional Activities of the IGF-DC-BAS

In addition to the DC Session, representatives of the IGF-DC-BAS participated in the “Free and Fair Voting Panel”, “Blockchain Assurance Panel”, “Internet for All Panel”, and “Blockchain in Healthcare Panel”.

 

During the 4 days of the conference, the IGF-DC-BAS Team held 24 individual meetings with Government Officials (Parliamentarians from Uganda, Kenya, and Ghana), and representatives from media (Bloomberg), law firms, the private sector, and educational institutions.

 

The topics discussed included newly available functionalities in scalability of networks, secure identification, CBDC, voting, software supply chain security and general governance using zero knowledge, AI and blockchain technology.

 

 

IGF 2023 DC-Blockchain Implementation of the DAO Model Law:Challenges & Way Forward

Updated:
Key Takeaways:

DAOs are a global technology that knows no borders, and there should not be a rush to regulate this technology and stifle its growth. We are also starting to see the emergence of case law in relation to key issues such as liability, tort, fiduciary duties, etc. What is needed is the use of sandboxing to allow DAOs to grow and deliver on their promise.


When looking at the development of legal frameworks in relation to DAOs, some of the procedural requirements already fit under existing legislation. For certain issues there may not be a need to develop de novo frameworks, but rather to address key issues such as the impact of joint and several liability on DAOs, which can stifle their development.

Calls to Action

There needs to be greater sensitization and discourse between DAO practitioners and governmental policy and law makers in order to remove misapprehensions about DAOs and to clarify how the technology works and its benefits. In addition, DAOs could and should be used as an effective tool in promoting global participatory democracy by institutions such as the IGF, and this should definitely be explored further.

Session Report

 

Session Report IGF 2023 DC-Blockchain Implementation of the DAO Model Law: Challenges & Way Forward


  1. Session Details

The DAO Model Law is a multistakeholder effort led by COALA (Coalition of Legal Automated Applications) to provide legal certainty for so-called ‘unregistered’ DAOs (i.e., DAOs that are not wrapped in a legal entity form) and their participants, and unlike other regulatory frameworks, accommodate flexibility for their unique features and further innovation. Since its development the Model Law has served as a precedential source in the development of legislation such as the DAO Acts in both Utah and New Hampshire, parliamentary discussions in Australia, and has also been referenced in the recent call for evidence by the UK Law Commission. The session seeks to take the discussion further from the session hosted at IGF 2022, to analyse how different legislators and policy makers are approaching the development of legal frameworks to govern DAOs and also outline lessons learnt as well as recommendations for the way forward as more jurisdictions express interest in regulating unregistered DAOs. The session will have great benefit for policy makers, governmental representatives, law makers, practitioners as well as DAOs in navigating the course of granting legal recognition and certainty and will address the critical aspects of inter alia governance, functional and regulatory equivalence, liability attribution and taxation of unregistered DAOs.

It is intended that the workshop will be conducted in hybrid format to accommodate onsite participation at IGF 2023 as well as online attendees within various jurisdictions who wish to contribute to the discussion on the implementation on the DAO Model Law. In this regard it is anticipated that the official IGF Online meeting platform will be utilized, and online participants will be able to post comments and also ask questions in relation to the content of the discussion.

  2. Panel Discussion

The Presentation made during the Panel Discussion and ensuing conversation centred around why is there a necessity to develop a DAO Model Law, the inherent advantages of DAOs, the primary principles of the DAO Model Law (viz. functional and regulatory equivalence) as well as the outline of the fundamental sections of the DAO Model Law.

The discussion then focussed on the next steps and the progression being made by various jurisdictions towards the implementation of regulatory frameworks. This involved taking a close look at jurisdictions that have instituted incorporation options, such as Wyoming, Vermont, and the Marshall Islands, as well as countries where the Model Law has been considered, reviewed, or (partially) transposed, such as Australia (Bragg report, Senate of Australia), the United Kingdom (UK Law Commission DAO consultations), St. Helena, New Hampshire, and Utah.

During the session the Panel then focussed on some of the challenges faced in garnering adoption by countries, which centred around the key sensitive issues of regulatory equivalence, privacy rights (incl. privacy of remuneration) recognised by law, as well as taxation.

  3. Next Steps/Way Ahead

It was identified that there is further work that can be undertaken to refine the DAO Model Law, based on developments within the global sphere. As such, new taskforces will be convened to work on the key areas of Identity and Limited Liability, Privacy/Transparency, Taxation, as well as Technical Guarantees for Functional & Regulatory Equivalence and Updates.

  4. Key Session Takeaways

DAOs are a global technology that knows no borders, and there should not be a rush to regulate this technology and stifle its growth. We are also starting to see the emergence of case law in relation to key issues such as liability, tort, fiduciary duties, etc. What is needed is the use of sandboxing to allow DAOs to grow and deliver on their promise.

When looking at the development of legal frameworks in relation to DAOs, some of the procedural requirements already fit under existing legislation. For certain issues there may not be a need to develop de novo frameworks, but rather to address key issues such as the impact of joint and several liability on DAOs, which can stifle their development.

There needs to be greater sensitization and discourse between DAO practitioners and governmental policy and law makers in order to remove misapprehensions about DAOs and to clarify how the technology works and its benefits. In addition, DAOs could and should be used as an effective tool in promoting global participatory democracy by institutions such as the IGF, and this should definitely be explored further.

---oOo---

IGF 2023 Lightning Talk #122 AI in the courts an opportunity for economic proceedings?

Updated:
AI & Emerging Technologies
Key Takeaways:

The use of AI in alternative dispute resolution will be of great benefit to business. Receiving from AI a predicted outcome and/or an assessment of the strength of a party's arguments and position, and thus being aware of the chances of winning a dispute, will reduce the burden on the courts. AI should be used to issue non-binding resolutions that guide a party on whether to take a case to court or, for example, to settle.

The implementation of AI in the judiciary is a universal and global issue. The differences between legal systems remain in the background. We should develop postulates and international legal and ethical standards for the use of AI in the judiciary.

Calls to Action

We expect local governments to support the judiciary in closing the technology gap between business needs and the justice system. We should aspire to cooperation between business and public authorities, but at the same time create clear and transparent rules for such cooperation. We must be aware of the temptation of private entities gaining access to citizens' data and attempting to manipulate court rulings using AI systems.

The implementation of AI in the courts should be progressive: in the first step we should use AI to perform routine, repetitive and time-consuming activities; as a second step, it would be good to implement solutions based on hybrid intelligence. While implementing AI-driven solutions we have to carefully review every activity processed in the court and analyze what can be replaced first.

Session Report

    The panel discussion titled "AI in the courts an opportunity for economic proceedings?" brought together industry experts who explored the implications, advantages, and challenges of integrating Artificial Intelligence (AI) into the judiciary. The session was moderated by Rafał Wieczerzak.

    Panelists and their Key Points:

    In her remarks, Anna Pietruszka primarily focused on how artificial intelligence can impact the efficiency of court proceedings, especially from a business perspective. She pointed out that introducing AI-based tools for straightforward, routine matters, such as making minor changes in business registers, could significantly speed up and simplify procedures. Anna also emphasized the need for modernizing communication within the judiciary. She suggested that while courts are an integral part of our system, their current communication methods are not aligned with modern realities. In her view, technologies like artificial intelligence can play a pivotal role in transforming these mechanisms to be more accessible and understandable to today's society.

    Gabriela Bar and Robert Sowiński highlighted the complexity of introducing AI into the judicial system. Gabriela focused on the ethical aspects of implementing AI. She underscored that trust in the system is crucial and that people need to believe that the technology is used fairly and transparently. Therefore, as she suggested, the optimal model would be Explainable Artificial Intelligence (XAI), which would be able to provide people with a logical justification for its decisions. Robert, on the other hand, cited the example of the Chinese judicial system where AI is already in use and pointed to the successes in the realm of alternative dispute resolution in the UK. However, he noted that this technology is not without risks, and we need to be aware of the potential consequences of its misuse.

    From a judge's perspective, Konrad Wasik shared his unique insights into the impact of artificial intelligence on the judiciary. He expressed concern over the burden of numerous administrative tasks that divert judges from their primary duty of adjudicating. In his opinion, artificial intelligence could significantly alleviate courts from these routine tasks, allowing them to concentrate on more complex cases that require human judgment. Konrad also identified potential areas of AI application, suggesting that its integration into the judiciary holds immense potential, as long as it's introduced with due caution and an understanding of its limitations.

    Post-panel Activities:

    The session was not just an opportunity to gain insights from the panelists but also a platform for attendees to ask questions. The face-to-face interaction allowed for lively debates and provided a chance for legal professionals from various countries and continents to network, exchange experiences, and establish valuable contacts.

    Conclusion:

    The panel successfully addressed the multidimensional aspects of integrating AI into the judiciary, from efficiency and modernization to ethical considerations. The consensus was that while AI offers great potential, its implementation needs to be done thoughtfully, ethically, and in a phased manner.

    The panel concluded with the following recommendations:

    The implementation of AI in the judiciary is a universal and global issue; the differences between legal systems are secondary. We should develop proposals and international legal and ethical standards for the use of AI in the judiciary.

    The use of AI in alternative dispute resolution will be of great benefit to business. Receiving a predicted outcome and/or an AI assessment of the strength of a party's arguments and position will reduce the burden on the courts. We should use AI to issue non-binding resolutions that guide a party on whether to take the case to court or, for example, to settle.

    The implementation of AI in the courts should be progressive: in the first step, we should start by using AI to perform routine, repetitive and time-consuming activities; as a second step, it would be good to implement solutions based on hybrid intelligence. While implementing AI-driven solutions, we have to review carefully every activity processed in the court and analyze what can be replaced first.

    We expect local governments to support the judiciary in closing the technology gap between business needs and the justice system. We should aspire to cooperation between business and public authorities, but at the same time create clear and transparent rules for such cooperation. We must be aware of the temptation of private entities gaining access to citizens' data and attempting to manipulate court rulings using AI systems.

     

    IGF 2023 WS #279 Sandboxes for Data Governance: Global Responsible Innovation

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    No sandbox will be the same, and depending on who you ask, the definition of a sandbox is different. This shouldn't alarm stakeholders but rather fuel openness and enable sandboxes to be used as an anchor for policy prototyping.


    Sandboxing is a spirit and can help actors share and understand a problem. This can clarify policy challenges or new tech applications and how to develop user safeguards.

    Calls to Action

    Regulators need to listen to different points of view. Building an effective sandbox is less about the skills and maturity of a regulator and more about regulators being allowed to engage purposefully with stakeholders.


    More experimentation and sharing of experiences need to be done in order to help unpack the opportunities and challenges of setting up sandboxes for data in a particular sector or regulatory environment.

    Session Report

    Mr. Axel Klaphake, GIZ Director, Economic and Social Development, Digitalisation, opened the panel by briefly introducing the topic, emphasizing the benefits of data for economic growth and social development, and then introducing the speakers present at the table as well as those who would be attending online. 

    The on-site moderator, Armando Guio, then gave a presentation on the current state of regulatory sandboxes to offer context for the upcoming conversation. He defined the regulatory sandbox as "a regulatory approach, typically summarized in writing and published, that allows live, time-bound testing of innovations under a regulator's oversight. Novel financial products, technologies, and business models can be tested under a set of rules, supervision requirements, and appropriate safeguards." This concept was attributed to the U.N. Secretary-General's Special Advocate for Inclusive Finance for Development. Mr. Guio also cited examples of use in countries such as Brazil, Colombia, Ethiopia, Germany, Kenya, and Lithuania. 

    As the first panelist's contribution, a video from the ANPD, the Brazilian Data Protection Authority, which co-organized the panel, was broadcast. In it, Thiago Moraes emphasized the importance of fostering a dynamic discussion among all relevant stakeholders in order to deliberate strategies that can pave the way for the development of sandbox initiatives. He also announced the opening of the call for contributions for the ANPD's regulatory sandbox on AI and data protection, a crucial step forward in Brazil's journey toward responsible innovation. 

    Agne Vaiciukeviciute, Vice Minister of Transport and Communication of the Republic of Lithuania, highlighted her country's experience with regulatory sandboxes. The outcome has been considered a success, and this has generated more interest and investments in this area. They are currently exploring 5G technology and its capabilities in depth. 

    Denise Wong, from the Singapore Data Protection Authority, IMDA, highlighted her agency's experience and spoke about unlocking the potential of data through policy mechanisms developed in collaboration with industry, as a way to support companies and help them discover suitable safeguards and protections. She cited, among other benefits, the ability of sandboxes to reduce the time and effort required to deploy technologies, allowing enterprises to securely experiment with cutting-edge technologies that give them a competitive advantage. 

    Lorrayne Porciuncula, from the DataSphere Initiative, addressed the fact that the steps governments must follow to successfully establish a regulatory sandbox vary depending on the national jurisdiction, the institutional framework, and the time frame, among other factors. Therefore, it is important to demystify what sandboxes are and to show that they are not exclusively for sophisticated regulators. In fact, a sandbox is a way of engaging purposefully with stakeholders from the design phase onward and building institutional trust with the private sector. 

    Kari Laumann, from the Norwegian DPA, presented the benefits of using sandboxes in her country. She cited as good practice the experience of bringing firms into the dialogue before setting up the sandbox, asking what they were interested in building when it comes to AI and data protection, algorithmic fairness, and data minimization. 

    Ololade Shyllon, from Meta, shared the private sector's perspective, saying that while the benefits of using sandboxes vary depending on the unique context of each project, in general, they help to reduce regulatory uncertainty, create a safe space for innovation, make adaptation faster, and build trust between regulators and the private sector. 

    The panel then proceeded with an online and in-person Q&A session. 

    Overall, the session brought out the following takeaways: 

    • It is critical to establish objective criteria and clear advantages for participants, such as certifications. Set highly specific use-case objectives as well. 

    • The sandbox is vital for mapping common problems that the public and the private sector would face when developing or deploying a technology. 

    • Bringing many stakeholders into the conversation can help to reduce regulatory capture. 

    • The resources needed to implement a sandbox may vary according to its goals and the skills and maturity of the regulator. 

    • Sharing experiences between countries is a great way to learn about the many models available. 

    • Sandboxes can promote responsible data governance and AI innovation, creating a space where innovative ideas can flourish while respecting human rights, such as privacy and data protection. 

    IGF 2023 Networking Session #168 Advancing Open Science Globally: Challenges and Opportunities

    Updated:
    Data Governance & Trust
    Key Takeaways:
    During the discussion, two distinct perspectives on open science emerged. One emphasized the need to enhance the organization and standardization of scientific production, aiming at maximizing the value that can be derived from it. The second perspective highlighted the importance of broadening access to scientific discoveries and derived products, and of involving a broader range of individuals in defining scientific processes.

    It's essential to outline specific actions that can drive progress toward these goals, and the appropriate actions vary depending on which perspective is adopted.

    Calls to Action

    To maximize the value derived from scientific research, there should be a concerted effort by the private sector to standardize data related to scientific research and make this data widely available on the internet.


    To enhance accessibility to scientific results and resources and enhance their social impact, it is crucial that governments reconsider existing intellectual property and patent models.

    Session Report

    Report on the Networking Session #168: "Advancing Open Science Globally: Challenges and Opportunities"

    The session was fascinating as it contrasted two different perspectives on the goals and paths of Open Science. While researchers and advocates from Latin America highlighted the importance of involving a broader range of individuals in the governance of science and of broadening free and open access to scientific discoveries and derived products in order to maximize its social impact, participants from the private sector and the global north emphasized the need to enhance the organization and standardization of scientific production, aiming at maximizing the value that can be derived from it.

    Henrique Xavier highlighted the persistent issue of paywalls to scientific publications. Moreover, while government and academic data are often open, data from private companies in areas like social media and artificial intelligence remain closed. Opening such data sources is essential for research on misinformation and AI governance, both discussed at the Internet Governance Forum.

    Sarita Albagli reinforced that paywalls hinder access to knowledge, particularly in the global south. She highlighted that Open Science is not only a more cost-effective model than closed science but also addresses the issue of knowledge access, preventing the loss of valuable resources. As a concrete example of a successful program, she mentioned the Brazilian bibliographic database SciELO.

    She raised the requirement for Open Science to address citizens' needs and the importance of involving citizens in research about issues that affect them. She also mentioned the risk of Open Washing, where companies direct Open Science to practices that allow them to profit, which could disproportionately affect the global south by making its research subordinated to private foreign interests.

    Carolina Botero emphasized that Open Science should grant access to publications and the knowledge generated by scientific research, such as vaccines during the pandemic. Rethinking patent laws is crucial to achieving this. Carolina emphasized the importance of addressing power imbalances, ensuring that all countries can utilize data for research purposes by adjusting legal frameworks to support global access.

    Kazuhiro Hayashi emphasized that Open Science goes beyond Open Access. It encompasses providing access to both data and research methods. He stressed the importance of international cooperation in making this data and knowledge accessible to everyone. He said Japan was implementing Open Access and Open Data policies for publicly funded research.

    Vint Cerf (present in the audience) mentioned Google Scholar and Schema.org as tools that help organize and standardize scientific knowledge. He raised the need to document experiment designs and the challenge of accessing old data, methods, and analyses after computer systems evolved. He questioned who should fund Open Science infrastructure and suggested we design a viable business model that could encourage companies to invest in these initiatives.

    Vint Cerf highlighted the importance of creating a document stating the desirable properties of an Open Science ecosystem. He suggested creating a vast database to ease data processing and analysis. Cerf emphasized the importance of its interoperability so the database could migrate in case of a lack of support from the host institution. He recommended organizations such as UNESCO and the International Science Council as potential allies in advancing Open Science.

    Two practical conclusions surfaced from the discussion. In order to maximize the value derived from scientific research, there should be a concerted effort by the global community, including the private sector, to standardize data and metadata related to scientific research and make this data widely available on the internet. To enhance accessibility to scientific results and resources and enhance their social impact, governments must reconsider existing intellectual property, copyright, and patent models.

    IGF 2023 Town Hall #170 Multistakeholder platform regulation and the Global South

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Multistakeholderism is still largely considered the best way to construct consensus, ensuring results that encompass different stakeholders. However, it was highlighted that the model needs improvements to guarantee meaningful participation from all stakeholders, especially civil society and the technical community, which often have difficulty participating in national or international forums due to lack of resources and time.

    Calls to Action

    Guarantee more resources to civil society and the technical community to increase participation in international governance forums. Adopt bottom-up regulation, especially in technical standards such as AI, ensuring the participation of global south countries and involving the technical community and the private sector in rule formulation. The private sector should ensure openness and access to data in order to enable meaningful participation from other sectors.

    Session Report

    Organized by the Brazilian Internet Steering Committee (CGI.br), the Town Hall focused on delving into different digital platform regulation governance models through the exchange of global south countries' practices, and on discussing the role of State and non-State stakeholders vis-à-vis the value of the Internet Governance multistakeholder model. The session was moderated by Henrique Faulhaber, counselor of the Brazilian Internet Steering Committee and representative of the private sector, who opened the session by outlining the role of multistakeholderism in Brazilian Internet governance, as well as the role it may play in platform regulation, highlighting the particularities of regulation and the institutional difficulties that may arise in global south countries. 

    Marielza Oliveira, from UNESCO, presented a more general view of the multistakeholder model, highlighting its importance for building consensus involving multiple stakeholders; however, the model must overcome challenges to be inclusive, diverse and human rights based, as well as to account for the power imbalances created by big tech companies.

    Sunil Abraham, from Facebook India, on the other hand, highlighted the importance of coordinating all forms of regulation – state regulation, co-regulation and self-regulation with standards-setting organizations. This could be leveraged in platform regulation by giving room to bottom-up knowledge and norm setting, especially with global south participation, in a way that would ensure future-proof regulation. 

    Miriam Wimmer, director of the Brazilian DPA, also agreed on the importance of co-regulation, highlighting the complex institutional setting in Brazil and the difficulties in defining the scope of regulation and which authorities would be involved in a theme as broad as platform regulation. The director also emphasized that multistakeholderism is not incompatible with multilateralism. 

    Joanne D Cunha, researcher at the Centre For Communication Governance at NLU Delhi, pointed out the challenges global south countries face in regulating platforms and in participating in global forums and international processes, especially due to limited resources. 

    Finally, Renata Ávila from the Open Knowledge Foundation stressed the inequalities between different realities, in particular considering small global south countries that may lack not only platform regulation laws but also data protection laws. She also highlighted the importance of platforms not taking advantage of that situation, and of ensuring transparency and a general framework that can be replicated. 

    The Q&A session stressed the interplay between the different regulation models that may be applied to platform regulation, and the challenges of cooperation between multiple authorities. It was also pointed out that platforms with transnational reach keep track of many jurisdictions and may replicate new mechanisms across countries. Finally, the speakers highlighted the importance of south-south cooperation, of holding platforms accountable, and of an expanded multistakeholder model with more diverse participation. 

    We can highlight two key takeaways. Multistakeholderism is still largely considered the best way to construct consensus, ensuring results that encompass different stakeholders. However, it was pointed out that the model needs improvements to guarantee meaningful participation from all stakeholders, especially civil society and the technical community, which often have difficulty participating in national or international forums due to, among other reasons, lack of resources and time. Therefore, the governance of platform regulation needs to consider the differences between institutional arrangements and the necessity of equalizing the power imbalances that large platforms may cause.  

    Call to actions mentioned: 

    • Guarantee more resources to civil society and the technical community to increase participation in international governance forums 
    • Adopt bottom-up regulation, especially in technical standards such as AI, ensuring the participation of global south countries. 
    • Ensure openness and access to data in order to enable meaningful participation. 
    IGF 2023 WS #311 Global Digital Value Chain: Africa’s Status and Way Forward

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    The session outlined that the GDVC has become increasingly complex and interconnected, with organizations and industries across the world collaborating and competing in the digital space, which has transformed the way businesses operate and how consumers access goods and services. Africa lags in the GDVC as a result of low capacity, limited technology to harness available resources, and a weak appetite for indigenous solutions.


    There is an issue with the availability of indigenous funds: the bulk of available funding comes from foreign venture capitalists with conditions and interests that keep Africa digitally dependent. Hence the need for indigenous funding for digital independence in African countries. In the same vein, speakers also commented on new approaches to digital infrastructure in the areas of electricity, telecommunications, and data centers.

    Calls to Action

    Governments, with the support of other stakeholders, should develop clear and supportive policies and regulations that prioritize local content and promote its integration into various sectors, such as energy, mining, manufacturing, and technology. African governments (the Nigerian Communications Commission - NCC, the National IT Development Agency - NITDA, and their counterparts across Africa) should explore massive investment in digital infrastructure.


    The private sector and other stakeholder groups should develop a crowdfunding mechanism to which indigenous investors and individuals could contribute. This would allow Africans to provide digital interventions that are controlled by and benefit Africa. A deliberate and decisive effort should be made to enhance capacity and positively engage the populace to invent solutions to Africa's unique problems.

    Session Report

    AfICTA- Africa ICT Alliance Workshop Report

    IGF 2023 WS #311 Global Digital Value Chain: Africa’s Status and Way Forward, Thursday, 12th October, 2023, KYOTO, JAPAN

    Organized by: AfICTA-Africa ICT Alliance

    Overview of the Session: The discussion underscored the intricate nature of the Global Digital Value Chain (GDVC), where global organizations collaborate and compete digitally, reshaping businesses and consumer access to goods and services. Africa's lag in GDVC was attributed to limited capacity, inadequate technology to utilize available resources, and a preference for non-indigenous solutions. Challenges regarding GDVC's impact on Africa were discussed, emphasizing the continent's rich mineral and human resources for internet infrastructure. However, concerns were raised about retaining value within Africa. The session questioned Africa's exclusion in the value chain, emphasizing the need for increased value, consensus building, policy development, and active engagement in Internet Governance Forums. It highlighted Africa's consumption-centric approach and stressed the urgency of transitioning to a production-based economy. Critical questions were posed about Africa's ability to achieve sustainable development goals, accompanied by strategies to shift from consumption to production. The session emphasized the importance of creating a roadmap for capacity development, establishing production facilities, and enabling active participation in the global digital value chain.

    The onsite moderator, Dr. Jimson Olufuye, Principal Consultant at Kontemporary Konsulting Ltd, Nigeria, Founder/Fmr. Chair and Chair of the Advisory Council, AfICTA, provided background information about AfICTA, an advocacy group for African ICT-driven businesses. AfICTA was founded in 2012 with six (6) member nations and has now grown to over 40 member African nations. Underscoring the importance of the theme of the workshop concerning Africa's participation in the global value chain, he introduced the panelists, the online moderator and facilitators, and the Chair of AfICTA, Mr. Thabo Mashegoane for opening remarks.

    Speakers

    1. Mr. Bimbo Abioye, President of the Institute of Software Practitioners of Nigeria (ISPON) and Group Managing Director of Fintrak Software, Nigeria (Private Sector, Africa)
    2. Dr. Kossi Amessinou, Chief of the World Bank Division, Benin Republic (Government, Africa)
    3. Dr. Melissa Sassi, representing the private sector in North Africa and serving as the Partner & Chief Evangelist, P3 Network (Private sector, North America)
    4. Mrs. Mary Uduma, West Africa IGF Coordinator (Civil society)
    5. Professor Joanna Kulesza from the University of Lodz, Poland (Academic community, Europe)
    6. Ms. Rachael Shitanda, AfICTA Vice-Chair, East Africa and Executive Member of Computer Society of Kenya (Private sector, Africa)
    7. Chief Toyin Oloniteru, CEO, DAPT - Data Analytics Privacy Technology; (Private sector, Africa)
    8. Dr. Chidi Diugwu, Deputy Director, New Media and Information Security, Nigeria Communications Commission (Government, Africa)
    9. Dr. Ben Ewah, Director of e-Government, NITDA - National IT Development Agency; (Government, Africa)

     Moderators

    1. Dr. Jimson Olufuye, Principal Consultant at Kontemporary Konsulting Ltd, Nigeria, and Founder/Fmr chair and chair of the advisory council, AfICTA. (Onsite Moderator)
    2. Mr. Inye Kemabonta, National Coordinator of AfICTA and CEO of Tech Law Development (Online Facilitator)

    Policy Questions to the Speakers

    The moderators posed the following questions to the speakers for their responses

    1. Considering that Africa is rated as the continent contributing least to the GDVC, as evidenced by the dilemma experienced at the advent of COVID-19: a. How inclusive is the GDVC, and as a concerned stakeholder, what initiatives or actions are required to reverse this trend? b. What are the soft areas through which Africa could penetrate the GDVC, and what benefits would the continent derive?
    2. Africa, though home to major raw materials of production, still makes little or no contribution to the GDVC. What could have gone wrong, and what are the remedies?

    Mr. Bimbo Abioye, President of the Institute of Software Practitioners of Nigeria, addressed the questions by highlighting the challenges faced by Africa in the Global Digital Value Chain (GDVC). He pointed out the lack of ownership and digital slavery in the continent's ecosystem. To address these issues, he emphasized the importance of enhancing policy frameworks, skills development, capacity development, research and development, and access to finance. Additionally, he stressed the need for infrastructural development and the creation of an enabling business environment across Africa. In his final submission, he envisaged the government leveraging existing solutions and existing capacity.

    Dr. Kossi Amessinou, Chief of the World Bank Division in Benin Republic highlighted the significant internet consumption from foreign countries but acknowledged a growing collective awareness in Africa, especially post-COVID. Despite this, challenges persist in the region. He proposed several solutions:

    Massive investment in digital infrastructure: Dr. Kossi emphasized the need for substantial investments in digital infrastructure, especially from the private sector. He stressed the importance of broadband expansion into rural areas and advocated for new approaches to infrastructural development, including discussions on establishing data centers in Africa.
    Internet exchange points: He suggested building Internet exchange points across Africa to enhance local networks.
    Regulation: Dr. Kossi stressed the necessity of regulating the digital sector in Africa to ensure its growth and stability.
    Digital literacy: Addressing the challenge of digital illiteracy, he recommended initiatives focused on enhancing digital literacy skills in the population.
    In his final submission, he envisaged capacity development and harnessing solar energy for Africa's own power.

    Dr. Ben Ewah, NITDA, emphasized the importance of understanding the existing structure of the labor market, especially the significant informal sector. He highlighted the need to identify specific areas where technology can address existing needs effectively. Focusing on interventions that cater for the majority of these needs will yield quick results for African markets. He stressed the government's role in recognizing the shift in resource utilization and harnessing of these changes for national development.

    Dr. Chidi Diugwu from NCC emphasized the vital role of Human Capacity Development, particularly concerning the inclusion of raw materials. He highlighted NCC's commitment to promoting research and development in the academic realm, with a focus on strengthening research grants for students in the field of artificial intelligence, given the transformative nature of the digital age. Dr. Chidi stressed the importance of identifying young talents, fostering their development, and increasing the number of skilled individuals to enhance the Human Development Index.

    Ms. Mary Uduma, West Africa IGF Coordinator representing the civil society emphasized the importance of Africa's grassroots participation in the Global Digital Value Chain (GDVC). She highlighted the discussions held at the IGF, both regionally and nationally, and stressed the need for Africa to be actively engaged in the value chain. Mary Uduma expressed concerns about Africa's dependence on the Western world during the COVID-19 pandemic and advocated for developing local businesses and voices within the continent. She praised Africa's achievements in the fin-tech sector, citing examples like Konga and Jumia. Mary Uduma called for the protection of human rights, advocating for standards and data safety. She questioned the location of data and emphasized the importance of housing data within Africa rather than relying solely on cloud services.

    Dr. Melissa Sassi from the Private Sector in North America highlighted the significance of tech entrepreneurship for Africa's economic growth. She emphasized the need to foster a culture of digital entrepreneurship, which plays a crucial role in Africa's capacity and economic development. Dr. Sassi stressed the importance of encouraging innovation, financial stability, practical skills, collaboration, and engagement. She advocated for integrating entrepreneurship culture into tertiary education and scaling up capacity-development efforts.

    Chief Toyin Oloniteru, CEO D.A.P.T, highlighted the importance of unbiased self-appraisal regarding Africa's strengths and progress. He emphasized the need to build on existing strengths and advance further. Chief Toyin pointed out the significant business expansions in Africa, citing examples like MTN and the banking sector, which have expanded beyond the continent. He stressed the need for behavioral modification, advocating for crowdfunding and crowdsourcing within Africa's resources. Chief Toyin emphasized the value of funding initiatives through crowdsourcing, promoting self-reliance and reducing dependency on external sources. The younger generation needs to be structured and guided to be focused on diverse opportunities available for skills development towards sustainable growth and development in Africa.

    Ms. Rachael Shitanda, Executive Member of Computer Society of Kenya, highlighted the need for Africa to leverage its resources for economic development and internet inclusivity. She emphasized the importance of developing local content, focusing on government initiatives. She shared perspectives with Mr. Bimbo Abioye on finance, creating enabling environments, local networks, and policy regulation. Ms. Shitanda stressed the importance of breaking silos, merging skills, and strengthening capital investment. She urged the continent to safeguard its data and collaborate effectively for growth and development.
     
    Prof. Joanna Kulesza, representing the Academia, emphasized the need for comprehensive and well-aligned regulations, coordinated and reliable capacity development, addressing policy challenges in Africa's global value chain, and aligning policies with sustainable development goals. She stressed the importance of civil society engagement, consistent policy development, raising awareness about broadband satellite, and resolving data-related questions. Prof. Kulesza highlighted the role of governments in ensuring increased African participation in the digital chain.

    She further emphasized the need to address policy challenges within the digital value chain, particularly in the African region. She highlighted the importance of aligning with the sustainable development goals through secure and stable internet access, enabling the development of technology based on accessible opportunities. Prof. Kulesza stressed the importance of awareness and recommended strengthening civil society engagement. She advocated for policy development through a multistakeholder approach, emphasizing that Internet access is a human right, and urged governments to consider jurisdiction, equipment ownership, and internet shutdown protocols during crises. Regarding data collection processes, she underscored the necessity for government involvement to enhance Africa's participation in the global value chain.

    Summary Recommendations 

    1. Governments, with support from various stakeholders, should formulate clear and supportive policies prioritizing local content integration in sectors like energy, mining, manufacturing, and technology. African governments, including entities like the Nigerian Communications Commission (NCC) and the National Information Technology Development Agency (NITDA), should invest significantly in digital infrastructure.
    2. The private sector and other stakeholders should establish a crowdfunding mechanism where indigenous investors and individuals can contribute. This approach enables Africans to create digital interventions that are locally controlled and beneficial to the continent. A deliberate effort should be made to enhance capacity and engage the public in inventing solutions for our unique challenges.
    3. Africa needs a holistic approach to enhance its participation in the Global Digital Value Chain (GDVC). This includes investing in digital infrastructure, promoting indigenous solutions, and fostering digital entrepreneurship. Governments and private sectors should collaborate to develop clear policies, encourage local content integration, and invest in digital infrastructure. Additionally, there should be a focus on human capacity development, especially in emerging technologies like artificial intelligence. Identifying and nurturing talents among the youth is crucial for long-term sustainable growth.
    4. It's essential to mentor and empower the younger generation in the rapidly evolving digital landscape.
    5. African nations must enhance capacity development comprehensively across various sectors.
    IGF 2023 WS #495 Next-Gen Education: Harnessing Generative AI

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Digital empowerment is a priority, and GenAI in particular has great potential in academic curricula for young minds. By enabling access via audio inputs, translation tools, and similar features, GenAI can amplify an individual's potential and improve learning outcomes. But there are academic concerns, such as the accepted levels of plagiarism and the impact on critical thinking.


    There is a need for strong cybersecurity measures: use of GenAI by youth and school students will require strong security and data privacy safeguards, as it is prone to misuse. Privacy is a quintessential concern for a young person. By setting standards and sharing global best practices, we can successfully integrate GenAI in education. It is a multifaceted challenge, but the benefits outweigh it.

    Calls to Action

    Policymakers need to take an inclusive approach that makes the use of GenAI more globally diverse and inclusive of ethnicities, races, and local contexts. Diverse datasets and user-centric approaches that go beyond Euro-centric models, with privacy by design, are welcome.


    Educators need to collaborate with the technical community, app developers, cybersecurity experts, and others to ideate on more inclusive GenAI.

    Session Report

    Link to the report (PDF Version): https://drive.google.com/file/d/16QC9suOkn4ZBNzpkta8xZZl-Gg5KW5dM/view?…

    IGF 2023 WS #495 Next-Gen Education: Harnessing Generative AI

    Ihita G. welcomed everyone and set the context by highlighting the relevance of Generative AI (GenAI) in education, underlining its use in personalized learning. She added that the use of GenAI further increases the importance of critical thinking and digital literacy, and invited interventions from the audience, which primarily concerned plagiarism in academic work.

    She introduced the speakers and invited Ms. Dunola Oladapo, a representative of an intergovernmental organization, to explore GenAI’s role in education. Ms. Oladapo argued that digital empowerment is a priority for youth, and that COVID-19 was a defining moment in this history. Digital access is not uniform: about 55% of youth in Africa do not have access to the Internet. The impact is multifold; the lack of affordable devices, high internet costs, and other challenges restrict young people from participating in a connected future with others.

    She shared the work of ITU’s Generation Connect platform on AI for Good: it focuses on how young people are connecting with AI and explores different ways the power of technology can be harnessed for a connected digital future.

    Ihita asked Connie Man Hei Siu and Osei Manu Kagyah (civil society) for their opinions on the responsible and ethical use of generative AI technologies in educational settings, including algorithms, and the gaps that need to be addressed. Osei said it is an important conversation that is long overdue, given that the industry is racing ahead of academia. He emphasized the need for a human-centric approach and a mutual platform to address issues of accountability, bias, and security of generative AI in formal education.

    Connie shared her insights and highlighted the importance of exploring GenAI, as it can knock down long-standing barriers like language and make learning more inclusive via translations, audio inputs, and similar features. GenAI can also help students manage schedules, increase learning outcomes, connect with peers, and reduce the stress of multitasking. She then explored the challenges, emphasizing the potential for misuse of the technology:

    • higher degrees of reliance could hinder students’ critical thinking skills
    • because it requires a lot of data, it can compromise users’ privacy
    • AI systems can inherit biases

    She underlined the need to promote responsible usage and vigilance since technology isn’t inherently good or bad.

    Ihita invited the audience for interventions, which featured concerns about the right kind of regulation and the need for an academic dialogue between PhD scholars and mentors on the extent of GenAI use. Educators in the audience cited the calculator as a technology once feared to hinder critical abilities that instead amplified mathematical learning.

    Ihita posed another question to the speakers - How can policymakers collaborate with relevant stakeholders to ensure that teaching and learning processes are enhanced while sustaining creativity, critical thinking, and problem-solving?

    Connie responded that policymakers require a thorough understanding of the technology, especially of how to leverage GenAI’s power while safeguarding it. She suggested that it is important for policymakers to collaborate with stakeholders such as students, teachers, and academic institutions to understand the challenges. Further, to address challenges of data protection and security infrastructure, educators can team up with teacher training institutes and tech companies. She highlighted that setting standards and sharing best practices globally can lead to successfully integrating GenAI in education. It is a multifaceted challenge, but the benefits outweigh it.

    Online moderator Adisa agreed with Connie and added that the curriculum needs to evolve to address real-world challenges. Ihita said more assessment of the models is needed and asked Osei to respond. He argued that there is a need to decolonize the designs, since the deployment of AI tools has reflected bias.

    Ihita posed the final policy question- How can policymakers ensure that the use of generative AI by youth in education is inclusive, age-appropriate and aligned with their developmental needs and abilities? She invited audience interventions on how can it be approached as a concern.

    A teacher from Finland expressed concern about who will create an inclusive model for children, as the approach of educators differs from that of a profit-earning company, and the goals of inclusivity and protection need to be aligned with learning. Another teacher, from Japan, added that GenAI models are US-centric and there is a need to explore local contexts. Another audience member added that it is not just about access to technology but also about knowledge; the domestic context, for example, becomes important for understanding what kind of data pool is being referenced. He referred to UNESCO’s open data report, whose open science recommendation underlines knowledge sharing in the sense of global commons.

    Ihita approached the speakers for their final comments. Osei emphasized the need for more interventions in different languages to move away from Euro-centric approaches. Connie suggested the need for stronger data protection laws and added that with critical digital literacy skills, young people will be better equipped to navigate digital spaces. Policymakers need to take an inclusivity-driven approach, considering personalized learning experiences, linguistic diversity, and similar factors. Ihita concluded that young people need to take a stand and contribute to decision-making processes themselves to make the best of GenAI. She thanked everyone for joining.


    IGF 2023 Launch / Award Event #46 The State of Global Internet Freedom, Thirteen Years On

    Updated:
    Human Rights & Freedoms
    Key Takeaways:

    • The multistakeholder model for internet governance is a crucial part of combating cyber threats, strengthening human rights and democracy online, and maintaining a global, open, free, and secure internet.


    • Laws governing the digital space that are developed in democracies can have drastically different and unintended consequences for people’s rights when imposed in less free contexts.

    Calls to Action

    • The Freedom Online Coalition should be more inclusive in its efforts to engage with civil society around the world.


    • Democracies should ensure that they are modeling rights-respecting legislation and regulatory approaches that will not restrict human rights online in less free spaces.

    Session Report

    Moderator Allie Funk began the session with an overview of findings from Freedom House’s Freedom on the Net 2023 report, which examined how artificial intelligence is deepening the crisis of internet freedom. She noted that AI drives intrusive surveillance, empowers precise and subtle censorship, and amplifies disinformation campaigns as generative AI lowers the barriers to entry for the disinformation market. She shared that if AI is designed and deployed safely, it can be used to bolster internet freedom. She closed by noting that as AI augments digital repression, there is an urgent need to regulate it, drawing on the lessons learned over the past 15 years of internet governance, namely: not overly relying on companies to self-regulate, centering human rights standards in good governance of the internet from governments, and the importance of involving civil society, particularly from the global majority. 

    Olga Kyryliuk discussed how the internet freedom space has changed in the last ten years. She described how initial hopes were that the multi-stakeholder model would make it easy to reach consensus on a way to regulate technology, and that ten years ago, many also felt that legal regulation would be able to catch up with technological advancement. She noted that, looking back, regulation has still lagged behind, but there is now a greater recognition of the importance of digital rights. She shared that innovations in AI and other technologies have brought new risks and opportunities, particularly when it comes to governments balancing their safety and security interests with protecting rights online. She closed by noting that continued multistakeholder collaboration is positive, but many people want more than venues for discussion: actionable results, such as initiatives or partnerships, that will lead to change.

    Guus Van Zwoll discussed walking the tightrope of the “Brussels effect” and trying to ensure that regulations adopted by other countries with lower rule of law standards will not have adverse human rights impacts. He touched on the difficulty of balancing between fighting censorship and fighting disinformation. He described work done in the Netherlands to ensure that regulation incorporates strong requirements for transparency and references to the guiding principles on business and human rights, so that if other countries copy EU regulations, these considerations that were reached through a long multistakeholder process will already be baked into the laws. He noted that when the Netherlands has bilateral discussions, Dutch policymakers urge other governments to adopt human rights and democratic clauses in their regulations.

    Emilie Pradichit discussed the proliferation of harmful cyber laws throughout Southeast Asia that target dissenting voices in the name of security, and cases in which people in Thailand and Laos have been imprisoned for speaking the truth or sharing criticism on Facebook. She identified the lack of clear definitions for terms like national security as a problematic part of such regulation, and that voluntary commitments from tech companies do not do enough to counter such problems. She expressed that companies should have meaningful engagement with other stakeholders, both on how to prevent harm and to provide remediation after the fact, not just to tick the box of consulting civil society with no follow-up. She noted that digital rights organizations are small and cannot combat the misuse of platforms by governments on their own, but end up being told that companies cannot do anything either. She called for decisions about how tech companies and AI should be regulated to come from those who have been most impacted, through meaningful engagement that holds the powerful to account. 

    On multistakeholder engagement, Guus discussed efforts through the Freedom Online Coalition (FOC) and other initiatives to incorporate and mainstream the Dutch Cyber Strategy among civil society groups, ensuring that while digital security remains high, governments have principles for balancing it with human rights and for developing governance structures that protect against a surveillance and censorship apparatus.

    Olga commented on the desire among many in civil society for greater clarity about engaging in the FOC and other initiatives. She called for greater opportunities, in addition to the FOC advisory network, such as bringing back the Freedom Online Conference, as a venue for civil society to consult with FOC member governments on issues including AI. 

    Emilie emphasized that the FOC has not yet made itself accessible among civil society groups in Southeast Asia or other contexts across the majority world, where rights defenders are most under threat from digital authoritarianism and struggling under repressive governments. She pointed out the role that FOC governments could play in pressuring less democratic governments or companies that are operating in repressive contexts, particularly in cases where those still in-country are unable to speak out safely. 

    Olga added that getting access to government stakeholders at regional level IGFs and other meetings can be a challenge for civil society. She suggested that FOC governments should work to incentivize governments to engage with local and regional communities outside the global IGF, in order to develop partnerships and work together in a meaningful multistakeholder way. 

    Throughout the Q&A, panelists discussed the challenges for civil society in engaging with other global efforts, including the UN’s Global Digital Compact. Panelists also discussed the difficulty of ensuring that laws built on models from the EU, whether the DSA, DMA, or EU AI Act, still include the positive protections for human rights defenders without imposing regulations that are overly burdensome and unresponsive to local needs and realities.

    Olga highlighted the importance of dialogue and conversations happening early on, before a law is drafted and adopted, to ensure that it is responsive to the local context, which sometimes requires advance capacity building as well. Emilie shared the frustration that civil society in Southeast Asia often feels with government-led regulation efforts, as there are few to no opportunities to engage. She noted that governments will say they are adopting global standards as a way to receive diplomatic applause, while still refusing to engage with human rights defenders or other stakeholders. 

    Guus noted that the Brussels effect was not always intended, and that although EU governments developed these laws, the way they have had global impacts was not something that was planned, which makes civil society feedback a crucial part of the learning process to improve the implementation of future regulations. 

    No feedback was received from remote participants during or after the session. 

    IGF 2023 Town Hall #61 Beyond development: connectivity as human rights enabler

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    It was highlighted by Robert Pepper that there has been a shift in the nature of the lack of connectivity, from a coverage gap to a usage gap. This means that Internet coverage has recently improved, and the main issue now lies in Internet use by people who live in regions that already have coverage


    Promises of universalizing Internet access through 5G have not yet materialized, and some sectors are already discussing 6G technology. Internet fees, such as the fair share proposal, may lead to fragmentation, since only a few companies would be able to provide a globally connected infrastructure. Zero rating agreements give an unfair advantage to large companies

    Calls to Action

    We call governments and intergovernmental agencies to reinforce the relevance of universal and meaningful connectivity as a fundamental enabler of human rights and elaborate on this relevance for the protection, promotion, and enjoyment of civil and political rights, in addition to economic and social development

    We ask policymakers and governments to stand against the imposition of direct payment obligations for the benefit of a few telecom operators. The current system has proven its resilience and ability to evolve alongside the Internet. Considering the roles of small, community, and nonprofit operators in providing complementary connectivity for rural areas and minorities, beyond sole reliance on incumbent infrastructure providers, will sustainably address the digital divide
    Session Report

    Beyond development: connectivity as human rights enabler

    October 2023

    by the session organizers: Raquel Rennó, Lucs Teixeira, and Nathan Paschoalini

    Introduction

    The 2030 Agenda for Sustainable Development explicitly recognises that the spread of information and communication technologies has the power to bridge the digital divide; as such, governments are increasingly addressing connectivity expansion as part of their efforts to meet the Sustainable Development Goals. However, framing connectivity solely as a facilitator for social and economic growth is limiting. These approaches ultimately privilege the most powerful telecommunication industries that can afford international agreements; if all connectivity is provided by the same few global incumbent telecommunication operators, there will be very little diversity in technologies, content, and little space for dissident voices.

    To expand on this issue and bring in different views, ARTICLE 19 organized a Town Hall session during the 18th edition of the Internet Governance Forum (IGF2023) in Kyoto, Japan. It brought together regulators, members from the private sector, the technical community and civil society to discuss the following questions:

    • Would it be possible to re-center connectivity as a human rights enabler, moving away from the development-only approach?
    • How can public-private partnerships (PPPs) and cross-national agreements help solve the digital divide while allowing diversity in ISP technologies and improving innovative policies and techniques for spectrum management, instead of just promoting one specific industry?

    Moderated by ARTICLE 19 Program Officer Raquel Renno Nunes, the session included Jane Coffin (civil society), Thomas Lohninger (epicenter.works, civil society), Robert Pepper (Meta, private sector) and Nathalia Lobo (Ministry of Communication of Brazil, public sector). As online moderator, Lucs Teixeira (ARTICLE 19 Internet of Rights fellow, civil society) coordinated participants in the Zoom room; Nathan Paschoalini (Data Privacy Brazil, civil society) was the rapporteur.

    The full recording of the Town Hall session, with captions, is available at IGF’s YouTube channel: https://www.youtube.com/watch?v=MwlgWVXYFuo

    Discussion

    Before the discussion, the on-site moderator, Raquel Renno, stated that this Town Hall should be a space for an open discussion of connectivity issues, one that enables different views on the subject, given its importance as a human rights enabler. Invited speakers then presented their views on the questions raised above, with the opportunity for participation extended both to the on-site audience and to remote participants.

    After the panellists’ interventions, there was an open mic round in which members of the audience and the panellists could debate the topics covered at the beginning of the panel.

    We split the points raised between three interrelated main problems.

    Problem 1: Building infrastructure

    Robert Pepper highlighted that in the last few years it has become possible to identify a shift from a “coverage gap” to a “usage gap”: more than 2 billion people could be online, but aren’t. He mentioned a project they conducted in sub-Saharan countries to understand why the majority of the population in the region is not online. The study identified three main reasons: a) affordability of devices and monthly services; b) lack of digital literacy; and c) lack of locally relevant content online. Another issue identified was the lack of electricity. He questioned how to bring people online, considering that Internet access should be recognized as a human right and a human rights enabler.

    Jane Coffin, in her turn, recounted how difficult it was to run fiber from Zambia to South Africa, mentioning negotiations over country borders, a historical bridge in the way, and a swarm of bees as obstacles during the more-than-one-year deployment. The example highlights the difficulties of building Internet infrastructure in a cross-border region. According to Coffin, it takes a multistakeholder approach to improve Internet access and to strengthen dialogue with governments, so they can understand what has to be done to speed up Internet connectivity.

    She also mentioned that community networks bring a diversification of perspectives to last-mile connectivity. Such networks can provide a type of Internet connection different from those provided by bigger ISPs, which don’t always have the economic incentive to connect people in remote or otherwise impractical places. She stated that building network infrastructure is usually very expensive, but there are alternative ways to build it, especially for smaller networks, and different organizations can work together to achieve and improve Internet connectivity for underserved populations.

    Thomas Lohninger acknowledged that the promises related to 5G, especially regarding connectivity, have not yet materialized; despite this, discussions about 6G are already underway.

    Nathalia Lobo presented the Brazilian context on issues related to the universalization of Internet access, given the country’s continental dimensions. She mentioned that the Brazilian 5G auction was an opportunity to impose universal-access obligations on the companies that won the process.

    She also presented a Brazilian public policy named Connected North, designed to strengthen connectivity in the northern region of Brazil through eight information highways composed of twelve thousand kilometers of optical fiber laid in the bed of the Amazon River. Lobo also mentioned that public-private partnerships play a key role in the accomplishment and maintenance of the Connected North project.

    Problem 2: Fair share proposals

    Thomas Lohninger addressed issues related to network fees, such as the fair share debate, which is not new, dating back to the telephony era. According to Thomas, small ISPs have revealed that they fear for their ability to compete and to connect to other networks if such a proposal is approved, due to economic barriers. This, Thomas said, might lead to a fragmented Internet, where only large ISPs would have the financial resources to remain connected to the global network.

    Robert Pepper reinforced this critical view on network fees, explaining that the whole rationale behind them is based on the architecture and economics of the “telecom termination monopoly”. With past network architectures, the distance and duration of connections increased costs substantially; after 4G arrived, with “essentially flat IP networks even in mobile”, the cost for connection is a