IGF 2023 #24 IRPC Human Rights Law and the Global Digital Compact

Human Rights & Freedoms
Session Report

The Dynamic Coalition session of the Internet Rights and Principles Coalition (IRPC) took place in two parts: a first half of remarks by speakers and a second half of interactive group workshops. Counting in-person and online attendance, up to 50 people participated in the session.

Panel Phase

Helani Galpaya

The Global Digital Compact (GDC) is praised for its focus on human rights but lacks clear guidelines on implementing business and human rights principles within its framework. The voluntary nature of these principles and ineffective stakeholder integration slow progress, with separate discussions among businesses and civil society instead of meaningful interactions. The draft document does not support effective collaboration among stakeholders, and the GDC struggles to hold nations accountable for human rights violations by private companies and states. Although the GDC acknowledges the link between human rights and socio-economic rights, addressing inequality, there is a significant gap between its vision and real-world application. Proposed regulations often restrict rights, and the haste with which they are enacted compromises the GDC’s credibility. Additionally, the GDC underscores the importance of teaching online civic responsibility, drawing parallels with environmental protection efforts to highlight the role of individual actions in creating a safe digital space. Despite efforts to make its consultation process inclusive, it remains imperfect and dominated by privileged voices. Effective ground-level actions are essential after the GDC’s establishment to ensure national policies reflect its principles and objectives. In short, while the GDC commits to human rights and socio-economic issues, it faces challenges in clarity, stakeholder engagement, and enforcement that undermine its impact; emphasizing individual online responsibilities and improving the inclusivity of its consultations are crucial for meaningful outcomes.

Raashi Saxena

The GDC is a joint effort by the UN, governments, and civil society to integrate technology with the Sustainable Development Goals (SDGs) through a multi-stakeholder approach. The Internet Rights and Principles Coalition plays a key role in the GDC by promoting digital inclusion and connectivity for marginalized groups like women, migrants, and refugees, supporting SDG 10 – Reduced Inequalities. Ensuring that human rights in digital spaces match those offline, particularly in areas like freedom of expression and net neutrality, is vital for SDG 16 – Peace, Justice, and Strong Institutions. The influence of Artificial Intelligence (AI) in sectors like finance and health, and its challenges, including privacy and the rise of deep fakes, impact SDG 3 – Good Health and Wellbeing, and SDG 8 – Decent Work and Economic Growth. The IRPC emphasizes partnerships and youth involvement, aligning with SDG 17 – Partnerships for the Goals and SDG 9 – Industry, Innovation, and Infrastructure. During IRPC events, interactive group activities foster engagement and idea exchange, promoting a collaborative digital future and advancing the SDGs.

Santosh Sigdel

The Dynamic Coalition on Internet Rights and Principles is committed to upholding human rights online, creating the multilingual Charter of Human Rights and Principles for the Internet, now available in 28 languages, to facilitate global and regional stakeholder engagement. The IRPC promotes a rights-based approach to Internet frameworks and has participated in various global forums, including EuroDIG and UNESCO conferences, raising awareness about digital rights and fostering collaborations. Highlighting the importance of accurate and culturally relevant translations, the IRPC engages local stakeholders in the translation process to ensure integrity and build local understanding of the charter's principles. This not only aids in language translation but also inculcates a deep understanding of human rights among local communities, empowering them to advocate and enforce these principles. However, balancing the regulation of online misinformation with freedom of speech presents challenges, particularly as governments may use regulatory measures to suppress free expression. This issue is prevalent in South Asia, for example, where Internet regulations often threaten freedom of speech. Moreover, awareness of the United Nations' Global Digital Compact remains low in South Asia and other developing regions, impacting its effectiveness. Ensuring broad stakeholder involvement from diverse regions is crucial for the GDC’s success. In summary, the IRPC's work in translating and promoting human rights online, while building local capacities and raising awareness, is vital. Nonetheless, ongoing efforts to maintain freedom of speech in the face of regulatory challenges and to enhance global engagement with initiatives like the GDC are essential for a rights-centric digital world.

Wolfgang Benedek

Wolfgang Benedek criticizes the Global Digital Compact, as far as its contents are known, for its limited progress in advancing human rights. He points out two main flaws: the compact's lack of enforcement mechanisms and the difficulty of achieving consensus among stakeholders. These weaknesses, Benedek argues, hinder the GDC's ability to effectively promote and protect digital human rights. He emphasizes the need for stronger enforcement and more collaborative decision-making to enhance the GDC’s impact on digital human rights.

Dennis Redeker

As far as the work of the Coalition is concerned, key developments included the translation of the 10 Principles document into Japanese to engage more local stakeholders, and there are concrete plans to translate the entire Charter into Japanese by IGF 2024. To support this, a task force is seeking experts in Internet governance or international law. Dennis also pointed out that the Platform Governance Survey, conducted at the University of Bremen, highlighted a discrepancy between expected and actual influence in shaping the GDC, with technical experts viewed as ideal leaders but businesses seen as overly dominant. In addition, the general population across the 41 countries surveyed does not appear to be aware of the important role of governments in the negotiation of the GDC. The results underscore the need for broader public consultation, involving citizens, NGOs, and academics, to create a more inclusive digital governance framework. The Internet Rights and Principles Coalition is advancing this inclusivity by collaborating on translations with universities and student groups, enriching students' understanding of digital rights.

Group Phase

Dennis Redeker led a group discussion as part of the workshop, encouraging participants to analyze and discuss future challenges related to specific IRPC Charter articles. This activity aimed to deepen understanding and disseminate knowledge about the charter's relevance.

Audience

The audience discussed and emphasized the following topics:

1. Youth and diversity in Internet governance: emphasizing the importance of involving young people in updating and translating governance documents to reflect diverse perspectives.

2. Freedom of expression: discussing the need to balance regulation and protection of free speech, emphasizing principles like legitimacy, necessity, and proportionality to prevent government overreach.

3. Responsibilities in the digital space: stressing the importance of clearly defining roles for states, businesses, and stakeholders in upholding human rights when regulating online content and protecting against harmful information.

4. Inclusivity and accessibility: acknowledging advancements and ongoing challenges in making technology accessible for individuals with disabilities, including variations in regional sign languages and Internet accessibility.

5. Protection of children and their rights: addressing the need for careful regulation to protect children online, the potential impacts of digital certificates on human rights, and the necessity of strategic litigation to safeguard digital rights against overreaching government actions.

Vint Cerf (in the audience) emphasizes the need for users and providers in the digital space to understand and fulfill their responsibilities alongside rights. He links this to Rousseau's social contract concept, which balances individual freedoms with societal obligations. This approach, including the role of social norms, aims to foster a responsible, secure online environment. Ultimately, recognizing and upholding our duties can enhance harmony both online and in broader society.

IGF 2023 DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News

Human Rights & Freedoms
Key Takeaways:

Data, access, and transparency are fundamental to the sustainability of news and internet governance. However, data access discrepancies around the world, especially in Global South regions, limit the capacity for research, analysis and reporting on the impact that digital platforms have on news and journalism sustainability, as well as on society as a whole.


The global reach of supranational policies might require regional and local parties to comply with rules that originated elsewhere. The session acknowledged the interconnection of local issues with global ramifications and vice versa, and stressed the importance of ensuring representation and access to digital policy discussions at all levels for the communities and sectors that will be most affected by these initiatives.

Calls to Action

To Intergovernmental Organizations: Allocate resources and initiatives to enhance participation and access for underrepresented communities, ensuring their voices are heard in global internet policy discussions, including on data privacy, news sustainability, and generative AI, and that their perspectives are taken into account when drafting resolutions, policies, and guidelines.


To Private Sector: Ensure that the implementation of internal policies created in compliance with international or supranational frameworks takes into account the diversity of local contexts. Engage with local stakeholders, media organizations, journalists, and their communities to address the local implications of global digital policy frameworks.

Session Report

Introduction

The DC-Sustainability coordinators, Daniel O’Maley, Waqas Naeem and Courtney Radsch opened the session by underscoring the significance of balancing technology innovation governance with the sustainability of journalism and news media. The key highlight for the year was the dynamic coalition's focus on data transparency and access as vital elements for media sustainability. The coalition's annual report was launched during the session, a collaborative endeavor that offers a snapshot of the critical issues facing the news media industry. The report spotlighted topics like the power imbalances between media and tech giants, the dual nature of government regulations impacting media, and the challenges and opportunities presented by technological innovations, such as generative AI.

In the first section of the session, authors of the report presented their chapters: Prue Clarke (New Narratives - Australia), Mike Harris (Exonym - United Kingdom), Juliana Harsianti (Independent Journalist and Researcher - Indonesia) and Juliet Nanfuka (CIPESA - Uganda). Following the presentations of each chapter, members of the DC-Sustainability took the floor to present their work: Michael Markovitz (GIBS Media Leadership Think Tank - South Africa), Ana Cristina Ruelas (UNESCO - Mexico), Julius Endert (DW Academy - Germany), Michael Bak (Forum on Information and Democracy - France), Sabhanaz Rashid Diya (Tech Global Institute - Bangladesh) and Ramiro Alvarez (CELE - Argentina). The session concluded with an open discussion with the audience.

Global influence of EU/US policies

A key topic was the overarching effect of policies and tech companies from powerhouses like the EU and the US on the global digital space. Despite being localized, their ripple effect transcends borders, impacting organizations working in so-called “Global South” countries. These organisations often find themselves grappling with the daunting task of compliance, struggling to decipher a logic they didn't create and can't control. Notably, these policies (both from companies and governments) play a pivotal role in shaping how journalists and media outlets operate, offering them limited avenues to challenge the tech giants. Courtney Radsch elaborated on these techno-legal systems, emphasizing the major influence of US and EU-based tech platforms on global media. These platforms determine how content rules and policies, such as the DMCA and GDPR, are implemented. Tying into the conversation on how centralized internet governance has impacted media visibility and sustainability, Mike Harris spoke about the importance of decentralized rulebook systems to empower news media, especially in the face of challenges from large online platforms. Juliana Harsianti shed light on the evolution of digital technology in Indonesia, emphasizing the implications of regulations intended for e-commerce now being used to restrict journalistic freedom. 

Digital Equity: Paving the Way for Sustainable Journalism

Data stands as the backbone of informed decision-making in today's digital realm. Gathering the right data is the first hurdle. With tech platforms influencing the visibility and viability of content, there's an undeniable need for a coordinated approach to collect and utilize data. Such data can aid in understanding audience behaviors, advertising strategies, and the effectiveness of content distribution methods. Ensuring a fair compensation model, bolstered by clear data-driven strategies, can pave the way for the sustainability of quality journalism. In that regard, via a written statement, Michael Markovitz presented the conference held in July, “Big Tech and Journalism - Building a Sustainable Future for the Global South”, which culminated in the adoption of the Principles for Fair Compensation, intended as a framework for the design of policy mechanisms that address media sustainability through competition or regulatory approaches.

Prue Clarke spotlighted the disparity faced by countries like Liberia in the digital age. The challenges faced by media in such countries, from a lack of digital monetization knowledge to reliance on government support, are evident. Juliet Nanfuka offered a parallel from Uganda, emphasizing the hesitancy in the media's approach to AI, despite the challenges they face. Both Clarke and Nanfuka highlighted the struggles and gaps in media adaptation and digital training in lower-income countries.

Daniel O’Maley emphasized the transformative power of data sharing, stressing the importance of understanding which data is essential for different sectors. He spoke about the implications of data transparency policies, especially considering their global impact.

While Nanfuka highlighted the challenges of integrating new technology into media spaces that are already grappling with other significant issues, Julius Endert dived into the transformative power of AI in media, emphasizing the importance of AI literacy. Both Endert and Nanfuka conveyed the urgency for media sectors, especially in developing countries, to understand and adapt to AI's growing influence.

Regional Focus vs. Global Perspective

During the Members’ Spotlight, Sabhanaz Rashid Diya offered insight into the mission of the Tech Global Institute to bridge the equity gap between the global South and dominant tech platforms. Ramiro Alvarez provided a deep dive into the media landscape of Latin America, emphasizing the influence of state-driven media and the need for more open dialogue. This regional focus complements the broader global themes discussed, reinforcing the idea that global digital governance challenges often manifest in unique regional ways. Despite the fact that the media landscape varies by region and country, there are common threads of challenge and opportunity related to digital governance, sustainability, and the integration of new technologies.

Conclusion and next steps

Overall, the session emphasized the value of global collaboration grounded in local insights. It's not just about dissecting EU or US policies, but also diving deep into what's happening in places like Uganda and Liberia. The local challenges faced in these regions have global implications, reinforcing the need for an inclusive approach in policy discussions.

While the EU and US might often take center stage due to their significant influence, the collective effort should focus on ensuring that voices from all corners of the world are heard: Global strategies must be informed by local knowledge and experiences. 

In the coming months, DC-Sustainability members will meet again to shape the priorities for the year ahead, especially when it comes to envisioning AI governance and its impact on the media. The goal is to ensure that as the world of journalism evolves, it remains rooted in authenticity, inclusivity, and the pursuit of truth.

IGF 2023 Day 0 Event #21 Under the Hood: Approaches to Algorithmic Transparency

Human Rights & Freedoms
Key Takeaways:

"Algorithmic Transparency" is difficult to define. It means different things to different people. In most cases it boils down to distilling down what information you actually want when you call for algorithmic transparency.

Calls to Action

After walking through a demonstration of the "life of a query", participants were asked to provide feedback to help fine-tune the presentation. Many participants thought repeating this exercise in the future would be beneficial.

IGF 2023 Day 0 Event #177 Transforming technology frameworks for the planet

Sustainability & Environment
Key Takeaways:

Cooperative models and approaches to technology have created pathways for communities and movements to address their needs, including for digital inclusion and decent work.


It is critical that technological responses to planetary crises do not adopt a single model or approach, but rather support diverse community-led and cooperative models that centre care and solidarity.

Calls to Action

Governments must ensure that the precautionary principle is upheld in digital governance norms and standards, including policy responses to the role of technology corporations in carbon offsetting, and geoengineering.


All stakeholders must work to support models of technology that centre care and solidarity.

Session Report

On 7 October 2023, the Association for Progressive Communications (APC), Sula Batsu, Nodo TAU and May First Movement Technology convened a pre-event discussion ahead of the global IGF, focusing on cooperative models and approaches to transforming technology frameworks for the planet.

During the discussion, speakers from Sula Batsu, Nodo TAU and May First Movement Technology shared experiences from their work, emphasizing the critical importance of participation and accountability in cooperative models and approaches to technology.

Kemly Camacho reflected on the experiences of Sula Batsu in learning how to put care at the center of their business models using approaches that are rooted in feminism, solidarity, and collective care.

Speaking from the experiences of May First Movement Technology, Jaime Villareal shared his perspective on the importance of members of May First being able to collectively own, govern and maintain autonomous infrastructure.

From Nodo TAU, Florencia Roveri described the processes and challenges of transforming their e-waste management and recycling plant into a cooperative, and the value of working with existing cooperatives. Florencia reflected on the need to extend responsibility for electronic waste, and shift perspectives on the dangers of discarded technology.

Yilmaz Akkoyun, Senior Policy Officer of the German Federal Ministry for Economic Cooperation and Development (BMZ), reflected on the discussion from the perspective of the BMZ priorities for digitalisation, emphasizing that cooperation is essential in a holistic approach to address the root causes of the complex problems facing the world today.

Becky Kazansky, a postdoctoral researcher at the University of Amsterdam, framed the discussion of cooperative approaches to technology by reflecting on recent policy developments, and the importance for all stakeholders not to get distracted by technologies and tools that on the surface seem quite promising for mitigating and adapting to climate change, but have proven to be quite harmful for communities around the world.

On-site participants in the event shared questions and reflections on how transforming technology frameworks can be supported in practice, including through amplifying the work of cooperatives like Sula Batsu, Nodo TAU and May First Movement Technology.

Speakers emphasized the need for robust and community-led accountability mechanisms, support for environmental defenders, and a shift in perspectives and narratives towards technology frameworks that prioritize collective care.

IGF 2023 Networking Session #186 Surveillance technology: Different levels of accountability

Human Rights & Freedoms
Key Takeaways:

• Surveillance is a burgeoning industry in the Middle East/West Asia and North Africa (MENA) region. The technologies are widely used to target and intimidate journalists, human rights activists, civil society organisations, and lawyers. Accountability for consequent harms is layered and can be examined at different levels.


• Documentation is key to holding the implicated actors accountable. This includes analysis of export controls at the supply level, human rights due diligence at the investment and development levels, and finally, the types of human rights violations and harms at the individual and social levels.

Calls to Action

• Documentation and advocacy need to be bolstered by binding actions and enforcement to remedy the compounding harms.


• The international community needs to seriously consider the prospects of universal jurisdiction to hold accountable the actors involved in the sale, development, deployment, and use of surveillance technologies when human rights violations are committed.

Session Report

Surveillance Technology in MENA: Different Levels of Accountability

11 October 2023

Organiser and rapporteur: Gulf Centre for Human Rights (GCHR)

Chair: Khalid Ibrahim (GCHR)

Speakers:

  • Marwa Fatafta (Access Now)
  • Asia Abdulkareem (INSM)
  • Samuel Jones (Heartland Initiative)


This networking session was organised by the MENA Surveillance Coalition (MCCS), co-led by the Gulf Centre for Human Rights (GCHR) and Access Now, to introduce participants to the work of MCCS and other collaborators in the space of human rights and surveillance technology governance. Collaborators who joined this session included the Iraqi Network for Social Media (INSM) and the Heartland Initiative. The aim of this session was to create a connecting space for groups and experts working on advocacy for human rights due diligence in surveillance technology development, funding and investment, export, or import.

The following is a summary of key points raised by the speakers about their work to date, next steps, and potential future collaborations:

Khalid Ibrahim (GCHR) explained that the ultimate goal of MCCS is to end the sale of digital surveillance tools to repressive governments in the region, fight for a safe and open internet, defend human rights, and protect human rights defenders, journalists, and internet users from governments’ prying eyes. Ibrahim illustrated the extent of the harms arising from spyware targeting with the case of his colleague, prominent human rights defender Ahmed Mansoor, who was the first victim of Pegasus spyware (developed by NSO Group) back in 2015, the same year in which he received the Martin Ennals Award for Human Rights Defenders. Ibrahim noted that Mansoor was arrested on 20 March 2017, tortured, and sentenced to 10 years in prison, which he is serving in solitary confinement.

Others targeted by spyware in the region have likewise obtained no justice through local mechanisms and institutions. Ibrahim therefore argued for universal jurisdiction as a means of redress for human rights violations. GCHR filed a complaint in France on 28 July 2021 against the Israeli software company NSO Group. Future collaborations lie in observing developments in other cases filed against NSO Group.

Marwa Fatafta (Access Now) also explained the context for the creation of the MCCS: an urgent response to the proliferating use of commercial spyware and digital surveillance tools in the MENA region. Access Now has been investigating and exposing the depth and spread of spyware and other surveillance tools, produced by NSO Group among others, that are systematically used to target, monitor, and surveil human rights defenders, journalists, lawyers, and civil society from Bahrain to Morocco, Saudi Arabia, the UAE, Jordan, and Egypt. Ongoing collaborations include a new lawsuit that both GCHR and Access Now are working on, filed against the UAE surveillance company Dark Matter, which hacked the device of prominent woman human rights defender Loujain Al-Hathloul. The court can exercise its jurisdiction to uphold human rights and, most importantly, send a message to the surveillance industry that it can indeed be held accountable. Ultimately, Fatafta concluded, the advocacy effort of MCCS aims to expose where export controls have been lacking and to add pressure on governments to regulate this industry.

Asia Abdulkareem (INSM) emphasised the role of civil society organisations in documentation of numerous cases of surveillance and digital attacks against Iraqi activists. This documentation has helped to raise awareness of the problem and has put pressure on the Iraqi government to take action. Future work involves observation and scrutiny of enforcement by the Iraqi government which is often slow to implement reforms.

Samuel Jones (Heartland Initiative) approached accountability at the level of investors, arguing that investors ought to extend the exclusionary screens applied to “controversial weapons”, which are fundamentally incompatible with international humanitarian and human rights law, to surveillance technologies and spyware. Heartland Initiative is currently working with some of its investor partners, along with colleagues at Access Now, the Business & Human Rights Resource Centre, and other experts, to develop similar criteria for spyware – meaning it would be excluded from investment portfolios – given the emerging discourse suggesting that it, too, is fundamentally incompatible with international law. Their future work, Jones added, involves mapping the surveillance technologies and spyware being used in the MENA region, including the companies, their investors, and their corporate structures, as well as the human rights abuses facilitated by the use of their technology.

 

IGF 2023 Networking Session #80 Radical Imaginings-Fellowships for NextGen digital activists

Human Rights & Freedoms
Session Report

Radical Imaginings-Fellowships for NextGen digital activists (Day 1, 16:30-17:30 UTC+9)

Young people need to be at the forefront of shaping the digital institutions and economies of today. But how do we create enduring pathways to effectively support this goal? How can we bring the voices of the most vulnerable into youth-led action? What models and approaches can we look to that are already in play? What are their successes and where are they falling short?

This networking session aimed to kick-start a community dialogue around re-imagining a model for fellowships that can facilitate early-career scholars and activists to be engaged in truly transformative work on the digital economy, and pioneer visions for feminist, sustainable, equitable and just alternative futures. It focussed on:

Understanding the needs and challenges of young activists working in CSOs, research organizations, academia and trade unions

Identifying key areas and types of work that remain significantly under-resourced and overlooked towards digital justice

Determining short to mid term priorities for action and the fora for advocacy

The networking session brought together young activists and professionals, organizations and grant makers to draw from their experiences and debate and deliberate upon the challenges for and possible solutions and good practices concerning fellowships in the field of digital governance/digital activism.

Challenges discussed

  • Highly competitive nature of fellowships (competition between fellows of similar background/field)

  • Online-only character of some fellowships that might limit accessibility

  • Lack of continuity between the fellowship period and after, where former fellows join a very competitive field afterwards without further support

  • Limited funding avenues apart from Big Tech funders

  • Lack of involvement of fellows in the design of projects they work on

  • Defining fellowships too narrowly as “digital” (e.g. not considering social or environmental challenges)

Solutions/good practices discussed

  • Inviting fellows from different fields and places to lower the internal competition

  • Involving fellows in the governance of the fellowship programs, e.g. through electing future cohorts of fellows or by co-designing the projects

  • Being flexible with the goal of the fellowship as people’s lives change and new opportunities or limitations arise

  • Involving fellows in all activities of the hosting organization rather than treating them as a distant add-on

  • Fostering networking among fellows with internal and external partners

  • Extending greater trust to fellows in balance with guidance and mentorship

  • Tapping into alumni networks as a way to support fellows post tenure

  • Centering respect and trust in funding, having limited rules on what to spend the money on and allowing fellows to prioritize resources

     

IGF 2023 Town Hall #105 Resilient and Responsible AI

Sustainability & Environment
Key Takeaways:

Considering situations, including crises, where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes, we need to establish a discussion of resilient and responsible AI. We propose that a large, complex system should be capable of maintaining or improving the value enjoyed by humans through the system in response to various changes inside and outside the system.


In order to achieve system resilience in a human-centric way by letting humans make and embody their own value judgements, an interorganizational and agile governance mechanism is needed.

Calls to Action

The points presented above require urgent discussion and action under an international and comprehensive framework.


Broad outreach to people, including the general public, is also needed.

Session Report

At the beginning of the session, Dr. Arisa Ema (The University of Tokyo), one of the organizers, explained its purpose: to expand the concept of “Responsible AI”, an important topic in AI governance, to “Resilient and Responsible AI” by considering the possibility of situations, including crises, where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes.

First, Carly and Yui, pilots (operators) of the avatar robot OriHime, talked about their experiences from the user's viewpoint. Both use wheelchairs and feel the value of participating in society through the avatar robots. On the other hand, they have encountered irregular situations they could not handle because of overreliance on technology. Carly shared an experience in which a lightning strike caused a power failure while he was working at home, leaving him unable to switch the power back on at the switchboard by himself and cutting off his communication with the outside world. Yui talked about the anxiety and unnecessary apologies that people who need assistance face in a social system that is becoming increasingly automated. In a technology-driven society, where manuals are available but not always put into practice, she realized that such assumptions can break down not only in ordinary times but also in times of disaster, and that she would still have to rely on people. The common conclusion of both stories is suggestive: the balance between technology and human support is important, and the possibility that technology sometimes does not work must be taken into account. Furthermore, the stories made clear that in a diverse society, crises themselves can take diverse forms. Next, Dr. Hiroaki Kitano (Sony), a researcher and technology company executive currently working on an AI project for scientific discovery, pointed out that such AI brings positive effects for human beings but also carries risks of misuse. He also highlighted the possibility of future large-scale earthquakes in Japan and the importance of avoiding excessive reliance on AI: as society's dependency on AI increases, there is a risk that AI will be unavailable in events such as large-scale power outages, when communication networks, stable power, and PC/mobile devices cannot be counted on.

The organizers and three panelists, Dr. Inma Martinez (Global Partnership on AI), Ms. Rebecca Finlay (Partnership on AI), and Dr. David Leslie (The Alan Turing Institute), led the discussion based on the issues raised by the OriHime pilots and Dr. Kitano. Dr. Martinez mentioned the necessity of defining resilience and emphasized that the power of technology should be rooted in the values we have learned from our families and national cultures; by doing so, empowerment can create resilience. Ms. Finlay pointed out that while pre-launch assessments of AI systems are widely discussed, little attention is paid to how the systems affect different communities after they are released. Resilience and control methods are required throughout the life cycle of AI, i.e., during the research phase and both before and after launch. Focusing on machine learning, which has been the mainstream of AI in recent years, Dr. Leslie pointed out that data-driven systems may become vulnerable in a dynamic environment: as society and culture gradually change, machine-learning systems driven by past data face limitations. He emphasized the importance of considering resilience because excessive reliance on data-driven systems may lead to stagnation in human creativity. In response to these discussions, Dr. Ema pointed out that we need to consider how technological and social perspectives on current topics such as generative AI will change. The audience raised the following three points:

  • The need for society to provide people with options for solutions.
  • The need for a more comprehensive impact assessment (technology, ethics, human rights, etc.) 
  • The risk of forgetting skills due to dependence on technology.

A participant then asked about AI as critical infrastructure. In response, Dr. Martinez said that AI is an infrastructure-based service and that it creates unknown territory for society. She mentioned the resilience of communication infrastructure she had been involved with, introducing an example in which a specific band continues to operate even if the whole network goes down in a disaster. She also pointed out the need to consider self-repair mechanisms for AI in the event of an infrastructure outage, and how to build not only system resilience but also human resilience. Ms. Finlay, responding to Dr. Martinez, touched on the possibility that AI can be introduced in various ways with various implications, and pointed out that systems need multiple layers of resilience; the way to understand how AI interacts within a system is to map the system and understand its effects. Dr. Leslie pointed out that AI is rapidly becoming an infrastructure and general-purpose technology, and that it functions as a substitute for human thought and action. AI is becoming a kind of utility, but if it becomes infrastructure, the question is who should control it. Dr. Ema said that it is difficult to hold individual companies accountable when AI becomes infrastructural and goes beyond the scope of a single company, and that governmental and global discussions will be required.

As a summary of the discussion, the panelists highlighted the need for AI to be safe and to provide a solid foundation for society. They also emphasized the importance of defining and monitoring resilience to support society. In addition, they agreed on the need for international research institutions to discuss AI from scientific and technological perspectives in the face of its rapid commercialization. In response to these comments, Dr. Ema concluded the discussion with the hope that all of us will work together to realize resilient and responsible AI. The session received a variety of comments: a participant from the public sector appreciated the uniqueness of the theme and the importance of the discussion, while another participant raised practical questions, such as how to handle large, complex systems composed of multiple AI systems. It is important to continue the discussion on this topic.

 

IGF 2023 Town Hall #39 Elections and the Internet: free, fair and open?

Human Rights & Freedoms
Key Takeaways:

Importance of a multistakeholder approach, but recognition of the lack of government and private sector engagement, in the Africa region in particular, which leads to isolation and an inability to moderate content effectively. This can lead to the common use of Internet shutdowns as a means of addressing content issues such as hate speech, which is not the solution.


Whilst some governments may lack the tools, knowledge, digital literacy and access to the wider multi-stakeholder community to address issues of concern through effective content moderation, shutting down the internet does not address the root causes and only creates more problems, including undermining rights and the prosperity of a society. Internet shutdowns are also widely used as a deliberate tool for controlling the free flow of information.

Calls to Action

Call on governments to cease use of the blunt tool of internet shutdowns which impedes the free flow of information during electoral periods, and threatens human rights and the democratic process as a whole.


Reinforce the importance of planning ahead through narrative and risk forecasting to pre-empt and mitigate shutdowns, with a view to developing knowledge and literacy around other means for addressing the issues governments state they are addressing by shutting down the internet (e.g. hate speech). Addressing one problem by creating another is not the answer, and the multistakeholder community must continue to challenge the narrative.

Session Report

This session was facilitated by the FOC Task Force on Internet Shutdowns (TFIS), co-Chaired by the U.K. and Freedom Online Coalition-Advisory Network members Access Now and the Global Network Initiative. The session examined causes, trends and impacts of Internet shutdowns and disruptions, and explored how the multistakeholder community can work together to anticipate, prepare for, and where possible prevent Internet shutdowns before they occur, with a focus on identifying practical steps that can be taken ahead of ‘high risk’ elections in 2024.

Kanbar Hossein-Bor, Deputy Director of Democratic Governance & Media Freedom at the U.K. Foreign, Commonwealth & Development Office, provided opening remarks, noting that Internet shutdowns pose a significant threat to the free flow of information and are a fundamental impediment to the ability to exercise human rights, underscoring the importance of a multistakeholder approach to addressing these challenges. Mr. Hossein-Bor highlighted the Freedom Online Coalition (FOC) Joint Statement on Internet Shutdowns and Elections, launched during the session, which calls on States to refrain from shutting down the Internet and digital communications platforms amid electoral periods, as aligned with States’ international human rights obligations and commitments.

Speakers underlined the critical role access to the Internet and digital media platforms play in promoting free, transparent, and fair electoral processes. Panellists spoke on the negative reality of Internet shutdowns and their impact, noting their destructive consequences for economic prosperity and access to health care, as well as their role in obscuring human rights violations. Panellists highlighted how Internet disruptions and preventing access to platforms during election periods are often justified by governments as a means to ensure national security and to mitigate disinformation, even though shutdowns and disruptions have proven to further exacerbate security risks, especially among already vulnerable groups. Speakers also highlighted big tech companies’ lack of engagement and product oversight in local contexts (e.g. hate speech moderation in local languages). Additionally, when examining government use of Internet shutdowns, panellists flagged governments’ lack of knowledge and experience regarding alternative tools to address security concerns amid elections in contexts of violence. In these contexts, full and partial shutdowns were used as a form of resistance and expression of sovereignty by governments in response to companies and systems they felt powerless against and did not know how to engage with. In addition to underlining the need for a multistakeholder approach and calling on telecommunications and digital media companies to ensure people have access to a secure, open, free, and inclusive Internet throughout electoral processes, panellists also recognised the role of disinformation as a risk cited by governments to justify Internet shutdowns and disruptions during elections. In order to address this challenge, speakers noted the following recommendations:

● Narrative forecasting: Anticipating the types of narratives that may be deployed at different points in the electoral process, and preparing a response;

● Overcoming selection bias: Finding ways to bring fact-based information into the right spaces;

● Preemptive responses to disinformation: Drafting preemptive responses to disinformation in order to reduce response time and minimise the spread of disinformation;

● Collaboration between civil society and Big Tech: Encouraging collaboration between local civil society organisations and big tech companies to address online content moderation in local contexts.

During the Q&A session, audience members enquired about government and civil society strategies to address and prevent Internet shutdowns, emphasising additional considerations to take into account when seeking to promote fair and open elections.

The U.K. closed the session by reiterating the importance of 2024 as a key election year, and also highlighted the publication of the Oxford Statement on the Universal Access to Information and Digital connectivity, developed following the Global Conference for the International Day for Universal Access to Information 2023.

IGF 2023 Open Forum #133 Accelerating an Inclusive Energy Transition

Sustainability & Environment
Key Takeaways:

1. Artificial Intelligence can play a crucial role in the energy transition, but ethical considerations must be taken into account. This requires a multistakeholder, cross-border dialogue to make sure various perspectives are incorporated.


2. Leverage the power of youth in accelerating the energy transition and enable them to make a meaningful change on a local level.

Calls to Action

1. The IGF should strengthen international multistakeholder collaboration and exchange of knowledge on the topic of environment and sustainability.


2. Take values into account when utilising technologies such as Artificial Intelligence for accelerating the energy transition.

Session Report

This Open Forum focused on the role of AI in the Energy transition. With various perspectives at the table, the speakers discussed both the possibilities of AI and possible implications of the use of emerging technologies in the energy transition. Alisa Heaver, Senior Policy Officer at the Dutch Ministry of Economic Affairs and Climate Policy, opened the session by emphasizing the relevance of the topic and introducing the speakers. She highlighted the lack of references to sustainability in the policy brief presented by the tech envoy and advocated for a more pronounced emphasis on this crucial issue. Additionally, she underscored the historical relevance of the venue, where the Kyoto Protocol was signed, as a symbolic cue to prioritize sustainability in the context of digitalization and global collaboration. 

Before starting with the presentations and discussion on the role of AI in the energy transition, Hannah Boute, secretary of the Dutch Coalition for Sustainable Digitization (NCDD) and expert on the Guidance Ethics approach, first elaborated on the work of the coalition and then explained the format of the session, which drew on the Guidance Ethics approach.

The NCDD is a Dutch initiative that, together with government, industry, universities and civil society, works on enabling sustainable digitization and on exploring the possibilities of technology to accelerate sustainability in specific verticals, for example the potential of AI for the energy transition. The NCDD works on enabling the twin transition and, among other subjects, on the implementation of green software, on enabling organizations to make their own IT green, and on standards for sustainability in IT.

 
The Guidance Ethics approach is a bottom-up ethical method developed by Professor Peter Paul Verbeek, Rector Magnificus of the University of Amsterdam and Professor of Philosophy and Ethics of Science and Technology, and Daniël Tijink, responsible for ethics and strategy with the management team of ECP, platform for the information society. 

The approach brings together people, or actors, from four perspectives who are involved with a technology in a specific context: people who develop the technology (ICT), the professionals who will work with the technology, people from policy or management, and people who will be confronted with the technology, such as citizens. Together with these people, the possible positive and negative effects of a technology within a specific context are explored. Behind these effects, values can be identified. During a workshop with the Guidance Ethics approach, participants explore ways to incorporate the identified values into the design of new technology, the implementation of the technology, and the way we use the technology within a specific context.

The session did not include a full Guidance Ethics workshop, but the approach was used to explore the ethical dimensions of the application of AI in the energy transition. Neil Yorke-Smith explained how AI might be applied to the energy transition; after his explanation, Tim Vermeulen from Alliander elaborated on the application of AI in the energy transition from a European perspective, and Peach from the innovation hub in Cambodia did so from the South East Asian perspective.

Hannah invited participants to join the process of recognising values during the session by identifying them and submitting them via Mentimeter. During the panel discussion, these values would be used to discuss possible options for action to sustain them in the design of AI, the application of AI to the energy grid, and the way we work with the technology in this context.

 

Neil Yorke-Smith, Associate Professor at Delft University of Technology and part of the working group Energy & Sustainability of the NL AI Coalition, then further explained the role of AI in the energy transition. Neil emphasized that Artificial intelligence (AI) has the potential to revolutionize the energy sector, but its implementation must be carefully considered to ensure that it is ethical, sustainable, and beneficial for all stakeholders.   

One of the key benefits of AI in the energy system is the potential for enhancing the overall efficiency and effectiveness of energy utilization. The utilization of AI extends to various areas within the energy sector, including forecasting, system design, real-time balancing, demand response, and flexible pricing. These diverse applications of AI serve to significantly augment the efficiency and efficacy of the energy system, ultimately facilitating a shift away from reliance on fossil-based fuels. 

In the pursuit of incorporating AI into the energy sector, several crucial ethical, legal, social, and economic considerations need to be taken into account. While AI offers many opportunities, the societal impact is not yet fully understood. To ensure that AI is implemented in the energy sector in a responsible and sustainable manner, it is essential to thoroughly examine the ethical, legal, social, and economic implications.   

Furthermore, it is imperative to factor in societal values when designing AI systems, considering the preferences and values of potential consumers and society as a whole. Value-sensitive design emphasises the incorporation of values in the design. In conjunction with this, there is a call for a focus on code efficiency, aiming to reduce the size and resource consumption of AI algorithms, ultimately fostering sustainable practices. Long-term decision-making is also critical, prompting consideration of the lasting consequences of present decisions, particularly concerning infrastructure that will endure for decades. 

This also leads to the importance of accountability. It is essential to hold those responsible for developing AI systems accountable for their actions and the (environmental) impact of their technology. Furthermore, global cooperation is necessary, promoting the sharing of knowledge and experiences to facilitate the successful integration and advancement of AI within the energy system. 

Neil concluded by stating that the implementation of AI in the energy system must be done in a manner that is both trustworthy and just, with a focus on fairness and sustainability. It is crucial to consider the ethical, legal, social, and economic aspects of AI implementation from the very beginning and incorporate societal values into the design process of AI technology. Furthermore, attention should be given to enhancing code efficiency and making long-term decisions that account for potential shifts in societal values. Holding developers accountable for their technology's impact and fostering global cooperation and learning are also integral to the successful integration of AI in the energy system.

 

European perspective 

Tim Vermeulen, Digital Strategy & Architecture at Alliander and member of the strategy board of the Dutch National Coalition on Sustainable Digitalisation, shared his perspective on the role of AI in the energy transition from a European perspective. He stated that the energy landscape in Europe is undergoing rapid transformation, primarily propelled by technological advancements such as AI and open-source technologies. While these developments offer promising opportunities, they also pose challenges, notably the potential introduction of biases in energy distribution and access. 

In this dynamic context, fairness emerges as a critical consideration for the energy transition, encompassing not only equitable energy distribution but also the reduction of CO2 emissions. Given the diverse energy mixes and unique challenges within different regions, it becomes crucial to approach the transition with a modular perspective, accounting for the specific circumstances and requirements of each country. 

Tim explained how the key factors shaping Europe's energy landscape include transparency, modularity, and technology. The adoption of a modular technology system enables collaborative resource-sharing among nations, fostering an open and cooperative approach to the development of a more sustainable energy sector. Sustainability, fairness, and integrity represent core values in Europe's approach to energy. Access to energy is regarded as a universal right, with the preservation of the energy system's integrity being indispensable for achieving sustainability and fairness. 

Efficiency and awareness play vital roles in the development of applications that drive the energy transition. Tim further emphasized the creation of clean and efficient code across all sectors as it directly influences the shift toward cleaner energy sources. Recognizing that every job has the potential to contribute to a clean and affordable energy future, the significance of a comprehensive approach across various sectors is highlighted. This underscores the importance of acknowledging the influence of jobs in shaping the energy transition. 

Furthermore, technology remains a significant catalyst, unlocking new possibilities and advancements across different segments within the energy sector. Experts acknowledge the potential of openness and complexity in technology, underscoring the need for continuous innovation and development. To foster a successful global energy transition, the sharing of knowledge and values on a global scale is imperative. Effectively managing the knowledge-based landscape on a global level serves as a fundamental driver of progress and collaboration within the energy sector. 

In conclusion, Europe's energy landscape is undergoing rapid evolution, accompanied by various advancements and challenges. Tim explained that fairness, transparency, modularity, technology, efficiency, awareness, and global knowledge sharing emerge as pivotal factors shaping the transition toward a more sustainable, fair, and affordable energy future. 

 

ASEAN Perspective 

Chantarapeach UT, Space and Sustainable Operation Officer at Impact Hub Phnom Penh, underscored that youth-driven innovation and entrepreneurship in green technology serve as crucial drivers in expediting the energy transition and combating climate change. Chantarapeach emphasised the necessity of backing the initiatives of young individuals in this domain through financial and technical support, awareness campaigns, and the incorporation of inclusive decision-making processes.

The scope of youth-led technological breakthroughs in green technology encompasses a diverse array of fields, ranging from green energy engineering and smart agriculture to optimizations in renewable energy, air quality monitoring, green building solutions, climate modeling, and eco-friendly transportation. Empowering young individuals to pioneer innovative responses to environmental challenges represents a significant stride toward the attainment of Sustainable Development Goals (SDGs) 7 (Affordable and Clean Energy) and 13 (Climate Action). 

An essential aspect of facilitating youth involvement in the energy transition involves raising awareness and fostering exposure to green jobs. Green jobs contribute to the advancement of sustainable energy practices, providing a multitude of opportunities for young individuals to pursue careers in areas such as green AI research, sustainability data analysis, renewable energy engineering, and clean tech research. By disseminating information and inspiring young people about these prospects, we can encourage their active participation in careers that contribute to a sustainable energy future, aligning with SDGs 7 and 8 (Decent Work and Economic Growth). 

Furthermore, the inclusion of young voices in decision-making processes concerning digital policy and climate change proves critical in ensuring an all-encompassing and fair energy transition. Platforms like the Cambodian Youth Internet Governance Forums and the Local Conference of Youth under UNGO serve as crucial spaces for young people to engage and express their viewpoints on these pivotal matters. By integrating the perspectives of the youth into the decision-making process, we can establish more efficient and inclusive energy systems, thus aligning with SDGs 7 and 13. 

She also emphasized the role of harnessing renewable energy more effectively in Asia, where many countries heavily rely on fossil fuels. By concentrating on renewable energy technologies and enhancing energy sharing arrangements, Asia can lessen its dependence on non-renewable resources and promote sustainability, thereby aligning with SDG 7.

In conclusion, Chantarapeach advocated for a comprehensive approach in supporting youth-led innovation and entrepreneurship in green technology. By providing both financial and technical assistance, promoting awareness of green jobs, involving young voices in decision-making processes, and maximizing the potential of renewable energy in Asia, we can empower and mobilize the youth to expedite the development of a sustainable energy future and combat climate change. Collaboration between the youth and adults remains pivotal in accelerating an inclusive energy transition. 

 

Panel discussion 

Collaboration and dialogue between policymakers, industry leaders, and civil society were essential themes throughout the panel discussion, emphasizing the importance of engaging multiple stakeholders from different sectors and regions to foster inclusive decision-making and equitable distribution of the benefits of an energy transition. The discussion also highlighted the immense potential of AI in driving an inclusive energy transition. However, technological advancements alone are insufficient for achieving an inclusive energy transition, necessitating a holistic approach encompassing social, economic, and environmental dimensions. Additionally, the panel discussion brought up the urgency of incorporating values in the design of technologies, addressing equity and social justice issues to prevent the perpetuation of existing disparities in energy access and affordability. 

IGF 2023 Open Forum #37 Planetary Limits of AI: Governance for Just Digitalisation?

Updated:
Sustainability & Environment
Key Takeaways:

The interaction between climate change and AI should be of central importance to all countries – taking potentials as well as risks into account. Particularly countries where AI and emerging technologies are developed should shine a light on this nexus and develop frameworks and approaches to mitigate adverse impacts of AI on climate. Capacity building, information sharing and support for sustainable, local AI ecosystems should be promoted.

,

We need to ensure that AI does not create more problems than it solves, but rather serves people and planet. Therefore, efficiency of AI should be carefully and transparently evaluated. Environmental and climate considerations need to be incorporated into the development of AI.

IGF 2023 Open Forum #139 Non-regulatory approaches to the digital public debate

Updated:
Human Rights & Freedoms
Key Takeaways:

The current dynamics of freedom of expression on the Internet are characterized by: i) the deterioration of public debate; ii) the need to make the processes, criteria and mechanisms for internet content governance compatible with democratic and human rights standards; and iii) the lack of access, including connectivity and digital literacy. Diverse and reliable information and free, independent and diverse media are effective antidotes to these harms.

Calls to Action

Reinforcing state obligations to promote a diverse and safe public debate, including proposing codes of conduct to political parties and public officials; and fostering media sustainability.

Session Report

 

The current dynamics of freedom of expression on the Internet are characterized by at least three aspects: i) the deterioration of public debate; ii) the need to make the processes, criteria and mechanisms for internet content governance compatible with democratic and human rights standards; and iii) the lack of access, including connectivity and digital literacy to enhance civic skills. These dynamics are closely related to violence, disinformation, inequalities in the opportunities to participate in the public debate, and the viralization of extremist content.

Diverse and reliable information and free, independent and diverse media are effective antidotes to these dynamics. Tackling disinformation, violence and human rights violations requires multidimensional, multi-stakeholder responses that are well-grounded in the full range of human rights and that bring together regulatory measures, self-regulatory approaches, soft law norms, and policy initiatives.

As people worldwide increasingly rely on the Internet to connect, learn, and consume news, it is imperative to develop connectivity. Access to the Internet is an indispensable enabler of a broad range of human rights, including access to information. An open, free, global, interoperable, reliable and secure Internet for all, which facilitates individuals’ enjoyment of their rights, including the freedoms of expression, opinion, and peaceful assembly, is only possible if more people are able to access and share information online.

Additionally, in the informational scenario of media and digital communication, citizens and consumers should “be given new tools to help them assess the origin and likely veracity of news stories they read online”, since information can be accessed and spread relatively easily in this environment and malicious actors exploit this to manipulate the public debate. In this sense, critical digital literacy aims to empower users to consume content critically, as a prerequisite for online engagement, by identifying issues of bias, prejudice, misrepresentation and, indeed, trustworthiness. Critical digital literacy, however, should also be about understanding the position of digital media technologies in society. This goes beyond understanding digital media content to include knowledge of the wider socio-economic structures within which digital technologies are embedded: how are social media platforms funded, for instance? What is the role of advertising? To what extent is content free or regulated?

Given their importance for the exercise of rights in the digital age, digital, media and information literacy programmes should be considered an integral part of education efforts. The promotion of digital, media and information literacy must form part of broader commitments by States to respect, protect and fulfil human rights, and by business entities. 

Likewise, initiatives to promote journalism are key in facing informational manipulation and distortion, which requires States and private actors to promote the diversity of digital and non-digital media.

On the other hand, the role of public officials in the digital public debate is highlighted. It is recalled that state actors must preserve the balance and conditions of the exercise of the right of access to information and freedom of expression; therefore, such actors should not use public resources to finance content on sites, applications or platforms that spread illicit and violent content and should not promote or encourage stigmatizing discourses. States must observe obligations to promote human rights, which includes promoting the protection of users against online violence. The State has a positive role in creating an enabling environment for freedom of expression and equality, while recognising that this brings potential for abuse.

In this sense, it is worth noting a recent example in Colombia of a decision by the constitutional court that urged political parties and movements to adopt guidelines in their codes of ethics to sanction acts or incitement to online violence. 

In this paradigmatic decision, the court recalled the obligation of the State to educate about the seriousness of online violence, including gender-based online violence, and to implement measures to prevent, investigate, punish and redress it. It also insisted that political actors, parties and movements, given their importance in a democratic regime, are obliged to promote respect for and defend human rights, a duty that must be reflected in their actions and in their statutes. Additionally, the court determined that the State should adopt the necessary measures to establish a training plan for members and affiliates of political parties and movements on the gender perspective and online violence against women.

Considering that unlawful and violent narratives are propelled by state actors on the internet through paid advertisement, it is worth noting that these actors should follow specific criteria in the ad market. Any paid contracting of content by state actors or candidates should be reported, through active transparency on government or political party portals, covering the value of the contract, the contracted company and the form of contracting, the content itself, the resource distribution mechanisms, the audience segmentation criteria and the number of impressions (exhibitions).
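
Purely as an illustration, the sketch below shows how such a disclosure could be published as a machine-readable record; all field names, values and the serialisation format are hypothetical assumptions, not requirements drawn from any existing regulation.

  from dataclasses import dataclass, field, asdict
  from datetime import date
  import json

  @dataclass
  class PaidContentDisclosure:
      # Hypothetical fields mirroring the elements listed in the paragraph above.
      contracting_entity: str        # state body, party or candidate placing the content
      contracted_company: str        # platform or agency receiving the payment
      contract_value: float          # value of the contract (local currency)
      contracting_form: str          # e.g. direct award or public tender
      content_reference: str         # link or identifier for the promoted content
      distribution_mechanism: str    # how resources/placements were allocated
      audience_segmentation: list[str] = field(default_factory=list)
      impressions: int = 0           # number of times the content was shown
      publication_date: date = field(default_factory=date.today)

      def to_json(self) -> str:
          """Serialise the record for publication on a transparency portal."""
          record = asdict(self)
          record["publication_date"] = self.publication_date.isoformat()
          return json.dumps(record, ensure_ascii=False, indent=2)

  # Example: publishing one hypothetical disclosure record.
  disclosure = PaidContentDisclosure(
      contracting_entity="Example Ministry",
      contracted_company="Example Platform Ltd.",
      contract_value=150000.0,
      contracting_form="direct award",
      content_reference="https://example.org/campaign/123",
      distribution_mechanism="programmatic placement",
      audience_segmentation=["age 18-35", "capital region"],
      impressions=250000,
  )
  print(disclosure.to_json())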

On the other hand, to make business activity compatible with human rights, the Office of the Special Rapporteur reiterates that Internet intermediaries have the responsibility of respecting the human rights of users. In this sense, they should:

  • Refrain from infringing human rights and address negative consequences on such rights in which they have some participation, which implies taking appropriate measures to prevent, mitigate and, where appropriate, remedy them. (Principle 11).
  • Try to prevent or mitigate negative consequences on human rights directly related to operations, products or services provided by their business relationships, even when they have not contributed to generating them. (Principle 13).
  • Adopt a public commitment at the highest level regarding respect for the human rights of users and that is duly reflected in operational policies and procedures (Principle 16).
  • Carry out due diligence activities that identify and explain the actual and potential impacts of their activities on human rights (impact assessments), in particular by periodically carrying out analyses of the risks and effects of their operations.

 

In conclusion, the challenges facing the digital public debate necessitate a multidimensional approach. Soft law principles, education, self-regulation, and legal mechanisms can together create a robust framework to mitigate the harms we face online.

 

IGF 2023 Open Forum #132 The Digital Town Square Problem: public interest info online

Updated:
Human Rights & Freedoms
Key Takeaways:

- Without independent quality media we (as individuals and societies) cannot understand the world we live in, we cannot create a shared reality, and we cannot form opinions and make decisions –> independent news media is essential to democracy, to enable a system where everyone has a say in how they’re governed and to hold this system to account.

,

- Online platforms are occupying the public information sphere – this power must come with responsibilities. Regulation centering on human rights, public interest, and rule of law is essential in this regard.

Calls to Action

- Whole-of-society approach needed to safeguard quality information and to consistently uphold public interest amidst the transitional media landscape.

,

- We need to develop and implement a public interest framework to build a healthier online information space.

Session Report
  • Key takeaways
    • Without independent quality media we (as individuals and societies) cannot understand the world we live in, we cannot create a shared reality, and we cannot form opinions and make decisions –> independent news media is essential to democracy, to enable a system where everyone has a say in how they’re governed and to hold this system to account.
    • We find ourselves in a moment of prolonged crisis for the media, and for democracy – but while this is a moment of crisis it is also a great moment of opportunity.
    • We have to keep it human, especially in a time when AI threatens democracy – the news media has to double-down on humanity and integrity to overcome the challenges of the pollution in our current information ecosystem.
    • It is necessary to ensure access to quality news globally – currently there is a divergence between where online platforms have the majority of their users, where they make their money, and where regulation is developed.
    • Everyone needs access to quality information; it should not be limited to those who can afford it, otherwise others are at risk of exploitation and control.
    • Online platforms are occupying the public information sphere – this power must come with responsibilities. Regulation centering on human rights, public interest, and rule of law is essential in this regard.
    • Both democracy and quality news media are about empowering people, to create and sustain an active citizenry and vibrant communities, which is a prerequisite of public, political, and democratic participation.
    • News media has been transforming significantly over the last decade, but its identifying characteristics remain that it is built on principles and codes, is accountable, is based on quality, and is transparent about its processes and itself (to avoid any undue influence, be it commercial or political). This is how it can serve the public interest.
    • The 2023 Joint Declaration on Media Freedom and Democracy coined what is needed for the media to take on its fundamental role in democracy: https://www.osce.org/files/f/documents/3/2/542676.pdf.

 

  • Call-to-action points
    • Whole-of-society approach needed to safeguard quality information and to consistently uphold public interest amidst the transitional media landscape.
    • We need to develop and implement a public interest framework to build a healthier online information space.
    • Need to create and ensure technology for democracy and the opportunity to harness technology’s potential (especially by thinking at the local level rather than at scale).
    • Need to recalibrate the relationship between news media and online platforms, but also with democratic institutions.
    • Online platforms have to be more accountable and play a role in overcoming global digital information inequalities.
    • As the challenges are highly complex, any response requires an intersectional and multi-stakeholder approach.
IGF 2023 WS #570 Climate change and Technology implementation

Updated:
Sustainability & Environment
Calls to Action

Enhancing legal compliance and accountability in implementing environmental laws requires global efforts from governments, the private sector, and international organizations.

,

Sustainable digital transformation, involving transparent policies, sustainable design, and accessible technology solutions, is crucial to address climate challenges, requiring global collaboration and immediate action from all stakeholders.

Session Report

 

The intersection of sustainability, digitalization, and climate change has become a crucial topic in today's global concerns. This report synthesizes the key points discussed by the speakers from the session. These experts provided insights into how the digital age can both exacerbate and alleviate climate challenges, and their recommendations to address this complex issue. The Key Takeaways of the session were:

  • Digitalization and Its Environmental Impact: The speakers began by highlighting the growing significance of electric and autonomous mobility, emphasizing that digital technologies, especially electric vehicles (EVs) and autonomous mobility, place significant demands on energy production and computational power. This shift creates new challenges, such as the allocation of electricity from the national grid to EV users and the need for updated policies to accommodate this transition.
  • Insights into the European Union’s twin transition strategy, combining the green and digital transformations, were also shared, with emphasis on ambitious climate goals such as a 50% reduction in emissions by 2030 and climate neutrality by 2050. To align sustainability with digitalization, the speaker proposed enhanced transparency regarding the environmental impact of digital devices, promoting entrepreneurial thinking for sustainability, and embedding ecological sustainability into design processes.
  • The importance of affordable and accessible technology solutions: There were concerns about the lack of necessary infrastructure to implement expensive technologies in many countries, as well as legal disputes and accountability related to environmental protection laws, emphasizing the need for effective enforcement and compliance mechanisms.
  • AI in Climate Mitigation and Adaptation: In mitigation, AI can optimize electricity supply and demand by considering weather conditions and electricity usage patterns. For instance, building energy management systems using AI can significantly reduce energy consumption during peak times (a toy illustration of this peak-shaving logic follows this list). AI also contributes to climate adaptation by enabling the development of early warning systems and improving climate forecasting. These technologies allow us to take early countermeasures and ensure a stable food supply.
  • Negative Environmental Impacts of Technology: While technology offers solutions for climate change, it also presents environmental challenges, such as the energy consumption associated with electronic devices, data centers, and communication networks primarily powered by fossil fuels. The entire life cycle of electronic devices, from manufacturing to disposal, contributes to energy consumption and carbon emissions. Hazardous chemicals and e-waste pose environmental risks when not managed properly, especially in developing countries.
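
The following toy sketch, not drawn from the session, illustrates the kind of peak-shaving logic a building energy management system might apply: deferring flexible load out of forecast peak hours. All figures, thresholds and variable names are hypothetical.

  # Toy peak-shaving sketch: shift flexible load out of forecast peak hours.
  # All figures are hypothetical and for illustration only.
  hourly_forecast_kw = [40, 38, 35, 34, 36, 45, 60, 80, 95, 90, 85, 88,
                        92, 96, 99, 97, 93, 98, 100, 85, 70, 60, 50, 45]
  flexible_load_kw = 15     # load that can be deferred (e.g. pre-cooling, EV charging)
  peak_threshold_kw = 90    # above this level, peak demand charges apply

  # Identify forecast peak hours and an equal number of low-demand hours to absorb the shifted load.
  peak_hours = [h for h, kw in enumerate(hourly_forecast_kw) if kw >= peak_threshold_kw]
  off_peak_hours = sorted(range(24), key=lambda h: hourly_forecast_kw[h])[:len(peak_hours)]

  adjusted = list(hourly_forecast_kw)
  for peak, trough in zip(peak_hours, off_peak_hours):
      adjusted[peak] -= flexible_load_kw    # defer the flexible load away from the peak hour
      adjusted[trough] += flexible_load_kw  # run it in a low-demand hour instead

  print("Forecast peak demand:", max(hourly_forecast_kw), "kW")  # 100 kW
  print("Adjusted peak demand:", max(adjusted), "kW")            # lower after shifting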

The discussions by various speakers highlighted the following unified actions:

  • Ensure that digital technology contributes to sustainability goals and consider the environmental impact of digital devices;
  • Invest in research and development to create green and energy-efficient technologies, especially for regions with increasing energy demands;
  • Advocate for effective enforcement mechanisms and accountability in environmental protection laws globally;
  • Encourage responsible consumption by extending the life cycle of electronic devices, reducing e-waste generation, and adopting sustainable practices in manufacturing; and
  • Encourage collaboration between governments, businesses, research institutions, and individuals to harness the full potential of technology in combating climate change.

The global discussion on the intersection of sustainability, digitalization, and climate change is multi-faceted and addresses various challenges and opportunities, and needs more action from governments, civil society and the private sector. Through these unified calls to action, the digital age can be harnessed to mitigate climate change and transition toward a more sustainable future.

 

IGF 2023 WS #147 Green and digital transitions: towards a sustainable future

Updated:
Sustainability & Environment
Key Takeaways:

On October 11th, IGF 2023 workshop #147 was held. Seven experts from different fields and countries introduced their views on green and digital transformation. The key takeaways: we must realize the urgency of the dual transformations, take practical action to use digital technologies to implement the UN SDGs, and cooperate in interdisciplinary, intersectoral and international ways.

Calls to Action

On October 11th, IGF 2023 workshop #147 was held. Seven experts introduced their views on green and digital transformation. In the future, we plan to evaluate the opportunities and governance challenges encountered in the development of the dual transition, propose a policy framework for global digital and green governance, and formulate follow-up action plans.

Session Report

This workshop focuses on the deep integration of digital technology with sustainable and digital dual transformations, open science, wildlife management, urban management, knowledge management, and other fields, which can help achieve sustainable development.

TAO Xiaofeng, Professor at Beijing University of Posts and Telecommunications (BUPT) and vice chair of the Consultative Committee on UN Information Technology (CCIT), China Association for Science and Technology (CAST), chaired the workshop onsite. Zhou Xiang, Researcher at the Institute of Electrical Engineering of the Chinese Academy of Sciences and member of the Consultative Committee on UN Information Technology (CCIT), China Association for Science and Technology (CAST), chaired the workshop online.

In this workshop, seven speakers presented their views on the topic "Green and digital transitions: towards a sustainable future", and the details are below.

Professor Gong Ke, Immediate Past President of the World Federation of Engineering Organizations (WFEO), Foundational Fellow of the International Science Council, and chair of CCIT/CAST, made a presentation on the three “Musts” for accelerating the sustainable and digital dual transformations. He emphasized that we must realize the urgency of the dual transformations, take practical action to use digital technologies to implement the UN SDGs, and cooperate in interdisciplinary, intersectoral and international ways.

LIU Chuang, Director of the Global Change Research Data Publishing & Repository (GCdataPR), World Data System (WDS), Vice Chair of the FAO OCOP Regional Organizing Group in Asia and the Pacific, and Professor at the Institute of Geographical Sciences and Natural Resources Research, Chinese Academy of Sciences (IGSNRR/CAS), presented on open science for the green and digital transition. She took GIES (Geographical Indications Environment & Sustainability) as an example of action, indicating the importance of open science and showing how partners cooperate to accelerate the SDGs in mountain areas, small islands and regions critical to climate change.

 

Tomoko Doko, president and CEO of Nature & Science Consulting Co. Ltd., Japan, explained wildlife management in Japan for a sustainable future. She noted that Japan has established a sound legal system for the protection and control of wildlife and uses ICT to monitor the movement of wildlife and implement capture, contributing to sustainable development.

 

Horst Kremers, General Secretary of CODATA-Germany, shared his ideas and thoughts about digital twins in action. He pointed out the challenges in process models and techniques for urban digital twin management, and put forward suggestions for action and the achievement of objectives.

 

Ricardo Israel Robles Pelayo, Professor at Universidad Anahuac online, UNIR México and EBC, Mexico, addressed the challenges and commitments in digital technology and environmental sustainability. He proposed a path for digital transformation based on the United States-Mexico-Canada Agreement (USMCA), including reducing greenhouse gas emissions, maintaining regional and global cooperation, ensuring fairness and justice in the transformation, and coordinating relevant standards and norms.

 

Daisy Selematsela from the University of the Witwatersrand and Lazarus Matizirofa from the South African Department of Science and Technology and the South African Science Foundation introduced open access repositories as an accelerator for open journal systems and an enhancer of the SA-SDGs HUB. They outlined the background of the sustainable development goals in knowledge management and analyzed the institutional policies, practices, and initiatives that affect research productivity in South Africa.

 

After all the speakers finished their presentations, the experts started an open discussion. Several key questions were discussed, such as the key challenges and governance issues, how to strengthen cooperation among multiple stakeholders, including governments and science and technology groups, and how to establish a scientific policy framework and provide policy guidance and regulation. The experts agreed that the most important thing is to accelerate the sustainable and digital dual transformations.

 

Finally, we can draw some conclusions from this workshop. Firstly, it is now widely acknowledged that a green transition cannot proceed without digital technology. Across the globe, ambitious targets are being set to achieve carbon neutrality and reduce carbon emissions by the end of this decade. The international community is united in its commitment to drive the green transformation of the global economy. Secondly, today's speeches and discussions have given us fresh insights into the future of the green and digital transitions. We understand that different stakeholders from various fields, such as Prof. Liu and Prof. Selematsela from the technical community and Ms. Doko from the private sector, face distinct challenges and governance issues, requiring approaches customized to their unique circumstances. Lastly, as Prof. Gong and the other three speakers stressed, we must realize the urgency of the dual transformations, take practical action to use digital technologies to implement the UN SDGs, and cooperate in interdisciplinary, intersectoral and international ways. Our combined efforts are key to overcoming the significant challenges and governance issues we face.

IGF 2023 WS #49 Cooperation for a Green Digital Future

Updated:
Sustainability & Environment
Key Takeaways:

DFI contains a clear commitment by the states to “cooperate to maximize the enabling effects of technology for combatting climate change and protecting the environment whilst reducing as much as possible the environmental footprint of the Internet and digital technologies”. This commitment complements other principles contained in the DFI.

,

The implementation of the DFI commitments requires cooperation between different communities and stakeholder groups, including the governments, civil society organisations, private sector and academia. It is particularly important to explore the possibilities for engagement with youth organisations and bringing the conversation about the impact of digital technologies to other communities.

Calls to Action

To mainstream issues linked to the impact of digital policies, products and services on the climate change and environment into other policies such as education, consumer protection, industrial policy. This also includes raising awareness among different stakeholder groups and end users.

,

To put in place data collection and analysis mechanisms to better understand both how the digital domain impacts the environment and climate change but also to better assess the positive contribution of new technologies to addressing the climate change, environmental degradation, biodiversity loss, etc.

IGF 2023 WS #501 Children’s digital rights: overcoming regional inequalities

Updated:
Human Rights & Freedoms
Key Takeaways:

The digital environment can represent both opportunities and risks for children: the more opportunities and skills they have, the more risks they are subjected to. Historical, social and economic contexts can interfere with the development of skills and with the security available to users, which can leave them more vulnerable to economic exploitation online.

,

Governments and businesses need to commit to ensuring children's digital rights. Regulations cannot fail to consider their potential impact on children's lives.

Calls to Action

Regulations for the digital environment can be inspired by legislation in other countries, but they must not fail to take into account the local context and the local structures that can generate bottom-up innovation. Intergovernmental networks can be essential for standardizing understandings and general parameters for legislation for an open Internet.

,

Platforms have an obligation to treat Global South children with the same safeguards and the highest level of regard as they treat Global North children. Different treatment and less protection for Global Majority children impacts the right to non-discrimination. It is essential that global platforms develop products that are safe by default, meaning that these products recognize and apply children's best interests and are transparent and accountable.

Session Report

Summary of Session:

Maria Mello initiated the debate by discussing recent school attacks in Brazil and the influence of the digital environment's business model, which exploits personal data, incentivizes hateful content, and prejudices Global South children. The Global South faces unique challenges rooted in historical socioeconomic contexts, requiring tailored legislation and public policies for child rights. Two videos were shown about Brazilian children's perceptions, illustrating how they see the challenges and potential of the digital environment in their lives and in those of their peers.

Then began the speaker’s presentations:

Mikiko Otani (Attorney, International Human Rights Lawyer and immediate past chair of the UN Committee on the Rights of the Child): The Committee on the Rights of the Child, made up of 18 experts, ensures that the Convention on the Rights of the Child, ratified by 196 countries, is enforced. They review country reports and create "General Comments" that focus on children's rights. In 2019, they initiated a new General Comment addressing children's rights in the digital world, adopted in February 2021. This decision aims to guide the protection of children's rights, define state and business responsibilities, and empower children and parents in the digital era, enabling children’s education and preventing online risks like bullying, exploitation, and health issues. Post-COVID-19, online activity has increased, expanding discrimination. Children without digital access are excluded from essential services. The right to non-discrimination requires States to ensure that all children have equal and meaningful access to the digital environment. This involves providing free and safe access in public places and investing in policies and programs that support affordable and knowledgeable use of digital technologies by children. Discrimination can arise when automated processes make biased decisions based on unfair data collected related to a child.

Recommendations: The UN Child Committee country reviews can address specific recommendations about digital inequalities. International cooperation is needed to provide financial and technical resources for capacity building through multilateral approaches. UN Global policy must involve stakeholders, especially in the global South, whose rights are affected. Ensuring child participation in the Global South involves addressing issues like cost, access, and language barriers. Additionally, child rights impact assessments are important for businesses to safeguard child rights.

Lily Edinam Botsyoe (PhD student in information technology): From the context of Ghana, it is important to actively raise awareness with and for children about online harms. Digital literacy is tied to the fact that not everyone is connected, and few of those who are online use online spaces in a way that keeps their rights from being trampled. Exploitation, misinformation and cyberbullying are just a few of the problems faced by Ghanaian children. It is important to think about caregivers, ensuring that families and educators have enough support. Ghana already has a national online child protection policy but lacks meaningful implementation.

Recommendations: International support is needed and can strengthen local organizations. 

Sonia Livingstone (Full professor in the Department of Media and Communications of the London School of Economics): In a world marked by divisions and inequalities, it's essential for researchers and policymakers to understand how these inequalities persist from offline to online. Global Kids Online shows that as children gain more internet access, they encounter both opportunities and risks. The challenge is to maximize opportunities while mitigating risks. Risks don't necessarily lead to harm; vulnerability, protection, geographic location, and living circumstances must be considered in regulatory processes for careful interventions. Europe's experience that can be useful to the Global South includes personal data safeguarding and addressing risks for consumer protection (e.g. the European Digital Services Act). Concepts like Privacy by Design, Safety by Design, and Security by Design are essential for a holistic approach to children's rights.

Recommendations: Age-appropriate design codes can protect and empower children in different digital contexts, enhancing data protection and children's rights as data subjects. Risk assessments can emphasize transparency, accountability, and the provision of remedies. 

Waldemar Ortunho (Brazilian National Data Protection Authority President-Director): The Brazilian National Data Protection Authority treats the protection of children's personal data as an absolute priority. The Authority has been investigating TikTok's data processing, particularly regarding children and adolescents, examining the effectiveness of age verification and the proper processing of data for those under 13 and 18. The results led to recommendations for the platform. Additionally, the Authority is investigating seven digital education platforms that share children's data with online advertising companies. These regulatory processes consider other global experiences but are adapted to Brazil's unique context, such as its large population and recent data protection legislation. The Authority recognizes major challenges in protecting children and adolescents, such as consent and the verification of users' ages, the impact of digital platforms and games, and the potential risks in the digital environment.

Questions: The Panel questions revolved around the following topics: a) different treatment by global platforms guidelines for children in the North and South and corporate responsibility; b) the clashes of LGBTQIA+ rights and Kids Online Safety regulation; c) the use of technology to emancipate the identities of Global South children; d) the role of the Brazilian National DPA in age verification and platform regulation; e) do we need increased attention in the field of regulation? f) does CG 25 provide international regulatory models? and g) the roles of international organizations in supporting child rights implementation in the digital environment.

Lily: Technology can provide support and awareness raising. Hotlines can be useful and new hotlines can be trained with international support. We need to balance regulation with innovation and make it useful for people. How can we replicate some of these best practices in other countries for the global South and provide further implementation of policies? 

Mikiko: We also need to understand children's rights in a holistic way, including non-discrimination against children who are LGBTQIA+. When the Committee drafted General Comment 25 there was a lot of discussion about age restriction and age verification. The Convention on the Rights of the Child is very much dependent on age cut-offs. What is an appropriate age? Safety by design and other protections by design require efforts from business. In addition to protection against risks, children also need to be supported in acquiring online navigation skills. How can we use existing platforms and instruments to protect children?

Waldemar: Brazil has been working with other actors, including internationally, to exchange information and experiences, for example, about age verification. The regulation of AI in Brazil passes through the National Congress and the Authority is present. A study is being made about AI and data protection. 

Emanuela Halfeld: A holistic approach to the environment, thinking about children’s rights outside the digital world, can be helpful to resolve inequalities and child online harms. In that sense, it would be useful to think about General Comment 25 and General Comment 26 in an integral approach.  

 

IGF 2023 WS #460 Internet standards and human rights

Updated:
Human Rights & Freedoms
Key Takeaways:
There are structural challenges in bringing the human rights community/civil society together with the technical community/industry in a meaningful and continued way, and in enabling them to participate in standard-setting processes. The issue of exclusion of civil society and the need to align technical standards with international human rights seems to be gaining traction; IGOs and Member-States are better aware of this issue and the urgent need to address it.
Calls to Action

Emerging technologies are quickly evolving - an opportunity to center human rights and include CSOs from the beginning in AI standards, recognizing barriers continue to make this difficult. Regional perspectives will continue to play an important role in standard setting. It'll be important for standard setting bodies to consider issues of internet fragmentation and contribute to elevating and reinforcing human rights at global level.

Session Report

Take-aways

  1. There are structural challenges in bringing the human rights community/civil society together with the technical community/industry in a meaningful and continued way, and in enabling them to participate in standard-setting processes.
  2. The issue of exclusion of civil society and the need to align technical standards with international human rights seems to be gaining traction; international organizations and Member-States are better aware of this issue and the urgent need to address it.

Next steps:

  1. Emerging technologies, especially AI, are quickly evolving and this can be an opportunity to center human rights and include civil society from the beginning in AI standards, recognizing that historical barriers continue to make this difficult and that specific challenges related to AI may require a different approach. 
  2. Regional perspectives will continue to play an important role in international standard setting, especially as national and regional regulation on technologies, including emerging technologies, develop in the coming years. It'll be important for standard setting bodies to consider issues of internet fragmentation and contribute to elevating and reinforcing human rights at the global level. 

 

IGF 2023 WS #457 Balancing act: advocacy with big tech in restrictive regimes

Updated:
Human Rights & Freedoms
Key Takeaways:

Increasingly authoritarian states are introducing legislation and tactics of online censorship, including internet shutdowns, particularly during politically sensitive periods. There is an urgent need for civil society and big tech to coordinate in mitigating risks to online free expression posed by sweeping legislative changes and practices empowering authoritarian states.

,

Lack of transparency in big tech's decision-making process, in particular regarding authorities’ user data and takedown requests, exacerbates mistrust and hinders effective collaboration between big tech and civil society, especially under authoritarian regimes. At minimum, platforms should develop comprehensive reports with case studies and examples on their responses in order to keep the civil society groups informed and in the conversation.

Calls to Action

Civil society and big tech should initiate structured dialogues to create a unified framework for responding to legislation and practices that threaten online free expression, including internet shutdowns, at the national, regional and global levels, including through multi-stakeholder fora such as the GNI.

,

Big tech companies must commit to radical transparency by publishing detailed policies and data on content moderation and government requests. The companies should establish a dedicated team that engages directly with local civil society, sharing information openly to address nuanced challenges faced in specific geopolitical contexts.

Session Report

Session report:

The session brought together a diverse group of stakeholders, including representatives from civil society, big tech companies, and policy experts, to discuss the pressing challenges of online censorship, data privacy, and the role of big tech and civil society in authoritarian states. The session also highlighted the importance of multi-stakeholder dialogues and offered actionable recommendations for all parties involved.

The session highlighted that any meaningful progress on ensuring access to the internet and combating censorship online in restrictive regimes can only be achieved in a broader context, in conjunction with addressing the lack of rule of law, lack of independent judiciary, crackdown on civil society and absence of international accountability. 

Key discussions:

  • Legislative challenges: Participants highlighted the rise in authoritarian states introducing legislation aimed at online censorship, often under the guise of national security or cybercrime laws. These laws not only enable content censorship but also force platforms to share user data, posing significant human rights risks and creating a chilling effect on online expression.
  • Big tech’s responsibility: There was a general consensus that big tech companies have a significant role to play in this landscape. There was also a strong sentiment that platforms need to step up their efforts in countries like Vietnam, where civil society has limited power to effect change due to authoritarian rule.
  • Lack of transparency, especially in big tech’s decision-making processes in particular regarding authorities’ user data and content takedown requests, was a recurring theme. This lack of transparency exacerbates mistrust and hinders effective collaboration between big tech and civil society. Additionally, it allows authoritarian governments to apply informal pressure on platforms.
  • Other barriers that hinder collaboration between big tech and civil society, flagged by civil society, included issues with the current mechanisms available for civil society to engage with big tech: long reaction times, little progress, no consistent follow-up, concealed results of bilateral meetings between the government and the platforms, and the fact that country focal points are often in contact with the government, especially in oppressive regimes, which puts activists at risk.
  • Civil society's role: Civil society organisations emphasised their ongoing efforts to hold big tech accountable. They also highlighted the need for more structured dialogues with tech companies to address these challenges effectively.
  • Multi-stakeholder approach: Both civil society and big tech representatives agreed on the need for a multi-stakeholder approach to tackle the issues. There was a call for more coordinated efforts, including monitoring legislative changes particularly in the face of rapid changes in the online space.
  • Remote participants: Feedback from remote participants underscored the urgency of the issues discussed, particularly the need for transparency and multi-stakeholder dialogues.

Turkey and Vietnam as case studies

Turkey and Vietnam were discussed as case studies to illustrate the increasing challenges of online censorship and government repression in authoritarian states. Both countries have seen a surge in legislation aimed at controlling online content, particularly during politically sensitive times, and both grapple with the complex role of big tech in their unique geopolitical contexts. Big tech in both countries face a difficult choice: comply with local laws and risk aiding in censorship, or resist and face being blocked or penalised.

The civil society representative from Vietnam highlighted that Facebook has a list of Vietnamese officials who cannot be criticised on the platform, illustrating the extent of government influence. Facebook and Google have been complying with the overwhelming majority (up to 95%) of content removal requests. Activists also denounce big tech's inaction in the face of the growing problem of state-backed online trolls.

Some concrete examples showcasing successful advocacy and collaboration between big tech and civil society groups were discussed. For example, in 2022 the government in Vietnam turned the hard requirement of storing data locally into a soft requirement after civil society activism mobilised platforms to lobby the government.

In the case of Turkey, an amendment package passed in October 2022 introduced up to three years of imprisonment for "spreading disinformation" and imposed hefty fines on big tech companies, as well as up to 90% bandwidth throttling and advertising bans for non-compliance with a single content take-down order, further complicating the operating environment for big tech companies. Companies are now also required to provide user data upon request of prosecutors and courts in relation to certain crimes.

The panel highlighted that this set of laws and the lack of transparency allow authoritarian governments to place big tech under significant formal and informal pressure. The threat of throttling in the event of non-compliance with government requests creates a particularly heightened chilling effect on platform decisions and on their responsibility to respect human rights.

On the eve of the general and presidential elections on 14 May 2023, YouTube, Twitter and Facebook restricted access to certain content that involved videos critical of the government and various allegations of crime and corruption against the ruling AKP. While YouTube did not issue any public statement about the censorship on their platform, both Twitter and Meta noted in their public statements that Turkish authorities had made clear to them that failure to comply with its content removal request would lead to both platforms being blocked or throttled in Turkey.

In its transparency report, Meta explained that their top priority was to secure access of civil society to their platforms before and in the aftermath of the elections; and they made the decision to comply with government requests to remove the content because, although critical of the government, the content was not directly linked to election integrity. 

The panel also discussed that GNI principles state that ICT companies should avoid, minimise or otherwise address the impact of government demands if national laws do not conform to international human rights standards. The initiative also focuses on capacity-building within civil society to engage effectively with tech companies. The representative from GNI also mentioned a tool called “Human Rights Due Diligence Across the Technology Ecosystem” which was designed to formulate constructive asks to the relevant stakeholders depending on whether this is a social media platform, telecom company or a cloud provider. 

Recommendations for big tech:

  • Develop contingency plans to protect access to platforms during sensitive periods
  • Conduct human rights due diligence before taking any compliance steps
  • Actively engage with local NGOs and invite them for consultations
  • Full disclosure of government requests and compliance actions (Twitter’s publication of the government’s communication on censorship ahead of the Turkish elections was a step in the right direction)
  • Tackle the rise of internet trolls 
  • Protect civil society groups from false mass reporting and illegitimate account suspensions 
  • Expand end-to-end encryption for users' data privacy 

Recommendations for civil society:

  • Closer coordination on how to advocate for digital rights to avoid fragmented, unimpactful calls and align strategies to create a stronger stand against the government’s actions
  • Work together with platforms to formulate a multi-pronged strategy envisaging both private sector and civil society perspectives
  • Work towards increasing public literacy on digital rights 
  • Bring international attention to these critical issues

Recommendations for states:

  • Diplomatic efforts must extend to digital rights e.g. make them a proviso in trade agreements 
  • Financial and logistic support for NGOs

 

IGF 2023 WS #386 Safeguarding the free flow of information amidst conflict

Updated:
Human Rights & Freedoms
Key Takeaways:
The information space has become an extension of the conflict domain. Digital risks and restrictions on the free flow of information can harm civilians affected by conflicts. Digital companies have become important actors in conflict and often find themselves in extremely challenging circumstances, including having to ensure safety of staff and dealing with conflicting demands by the belligerent parties. Beyond their responsibility to respect human rights and humanitarian law they should also be guided by the principle to minimise harm during conflict.
Calls to Action

International law, and in particular international humanitarian law, does need to be clarified, as it does not always provide detailed answers on how to deal with the new trends in how armed conflicts are fought. For example, digital companies would benefit from greater guidance on what it means for them to respect international humanitarian law.

,

There is a need for a multi-stakeholder approach, including international organisations, humanitarian actors, digital companies, and human rights organisations, to fill these gaps and issue guidance on how these challenges can be addressed.

IGF 2023 WS #356 Encryption's Critical Role in Safeguarding Human Rights

Updated:
Human Rights & Freedoms
Key Takeaways:

The technology landscape does not adhere to geographic boundaries. If you break encryption for one, you break encryption for all, undermining national security and potentially harming the groups you seek to protect.

,

To avoid negative policy outcomes, laws governing the use of encryption cannot supersede or overrule established international standards such as the right to privacy, freedom of expression, due process and access to information. Policy frameworks need to outlaw the use of spyware.

Calls to Action

Policymakers need to drastically improve their understanding of internet technologies, the infrastructure underpinning them, their built-in protective mechanisms, and the internet's business model to draft safer regulatory frameworks and make more informed policy decisions. To that end, policymakers should consider proposing laws that enshrine fundamental tech education and compel tech organizations to be more transparent about their practices.

,

Internet users and the 'average' consumer need to take power over their online data and demand the mainstreaming of encryption within their daily tools, making the case for why online privacy is vital to their digital practices. Whether it is to adhere to professional standards such as attorney-client privilege, to safeguard patient data, or to maintain a competitive advantage, the use cases are endless and everybody has "something to hide."

Session Report

This panel brought together professionals from the technology, policy, human rights and advocacy spaces to discuss international standards and policy considerations for human-rights-forward governance of encryption. The debate reflected on policymakers' need to balance the demands of national security with the protection of individual privacy and international human rights laws. The panelists discussed a number of measures that could help shape a comprehensive regulatory framework for encryption. 

 

  1. International Framework for Encryption:
  • Encourage international collaboration and adherence to human rights principles in addressing encryption and surveillance challenges. Countries should be accountable for adhering to international standards and guidelines.
  • Promote the global adoption of encryption best practices across nations. Provide technical assistance and capacity-building programs to countries and stakeholders that may lack the expertise to make informed decisions about encryption policies. This includes sharing knowledge about cybersecurity, encryption technology, potential risks, and global norms for safeguarding digital communications.
  • Involve stakeholders such as civil society organizations, technology companies, human rights advocates, and privacy experts in the development of international encryption standards. Ensure that diverse perspectives are considered.

 

  2. Education and Public Awareness:
  • Emphasize digital literacy from an early age, ensuring access to education that includes understanding the implications of surveillance, risks, and individual rights.
  • Promote awareness of the importance of encryption in protecting privacy and security.

 

  3. Normalize Encryption:
  • Normalize strong encryption as a global standard, much like HTTPS, for all online communications. 
  • Encourage all users and regulators to expect encryption by default for secure messaging and data protection.

 

  4. Reject Mass Weakening of Encryption:
  • Reject proposals for mass weakening of encryption that infringe on individual privacy. 
  • Consider the potential consequences of weakening encryption, for example, examine instances where lawful intercept ports have been misused previously.

 

  5. Protect User Privacy and Security:
  • Uphold strong customer and user protections to protect user data against unauthorized access.
  • Advocate for the adoption of distributed approaches and end-to-end encryption to return power to the hands of users, granting them control over their data, communications, and online activities.
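
To make the idea of end-to-end encryption concrete, here is a minimal illustrative sketch using the PyNaCl library (the choice of library and the example message are assumptions, not panel recommendations): only the sender and the intended recipient hold the private keys, so any intermediary relaying the ciphertext cannot read it.

  # Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
  # Only Alice and Bob hold private keys; a platform relaying the ciphertext
  # cannot decrypt it.
  from nacl.public import PrivateKey, Box

  alice_key = PrivateKey.generate()
  bob_key = PrivateKey.generate()

  # Alice encrypts for Bob with her private key and Bob's public key.
  sender_box = Box(alice_key, bob_key.public_key)
  ciphertext = sender_box.encrypt(b"meet at the usual place")

  # Bob decrypts with his private key and Alice's public key.
  receiver_box = Box(bob_key, alice_key.public_key)
  plaintext = receiver_box.decrypt(ciphertext)
  assert plaintext == b"meet at the usual place"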

 

  6. Strengthen Legal Protections Against Surveillance:
  • Strengthen legal protections against surveillance, with an emphasis on the invasive nature of surveillance technologies.
  • Enshrine in international frameworks what surveillance and encryption mean, inspired by robust international standards that ensure fair trials, freedom of expression, and limits on invasive surveillance.

 

  7. Differentiate Encryption from Content Moderation:
  • Distinguish between encryption as a safeguard for user privacy and content moderation as a separate issue.
  • Emphasize that encryption is not the problem, but rather a means of protecting personal data.

 

  8. Recognize the Limits of AI in Content Moderation:
  • Acknowledge the limitations of artificial intelligence (AI) in making perfect content moderation decisions. Highlight the potential for false positives in AI-based moderation systems, which can have severe consequences for individuals.
  • Address concerns about child sexual abuse material proliferating on private messaging apps by implementing effective content moderation measures such as safe and responsive reporting channels for users that do not compromise encryption.
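
As a purely hypothetical back-of-the-envelope illustration of the false-positive concern, the base-rate calculation below shows why even a highly accurate classifier can produce mostly wrong flags when the targeted content is rare; all figures are assumptions chosen only to make the arithmetic visible.

  # Hypothetical base-rate illustration: an accurate classifier can still yield
  # mostly false positives when the targeted content is very rare.
  messages_scanned = 1_000_000
  prevalence = 0.0001           # assume only 0.01% of messages are actually abusive
  true_positive_rate = 0.99     # classifier catches 99% of abusive messages
  false_positive_rate = 0.01    # and wrongly flags 1% of innocent messages

  abusive = messages_scanned * prevalence
  innocent = messages_scanned - abusive

  true_positives = abusive * true_positive_rate       # 99 correct flags
  false_positives = innocent * false_positive_rate    # ~10,000 wrong flags

  precision = true_positives / (true_positives + false_positives)
  print(f"Flagged messages: {true_positives + false_positives:,.0f}")
  print(f"Share of flags that are correct: {precision:.1%}")  # roughly 1%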

 

  9. Ban Spyware Vendors and Technologies:
  • Implement a ban on spyware vendors and technologies that have been used to enable human rights abuses.
  • Address the proliferation of spyware by regulating and restricting its use.

 

  10. Regulate Vulnerability Exploits:
  • Regulate the sale and use of software vulnerability exploits, treating them as equivalent to 'small arms dealing' in the digital realm.
IGF 2023 WS #255 Digital Me: Being youth, women, and/or gender-diverse online

Updated:
Human Rights & Freedoms
Key Takeaways:

1. When it comes to young people on the Internet, it is necessary not only to include them but also to hear them in a way that is meaningful and can be reflected in the final decision-making stage. 2. In the case of women and gender-diverse people, it is necessary to understand that access is not enough: there are variables and intersectionalities in offline spaces that translate into online spaces, reproducing these inequalities and the affectation of their rights.

Calls to Action

1. Technology design, development, implementation, and use, as well as the design of policies, must include young people, women, and gender-diverse people at all stages to ensure not only inclusion and representation, but also diversity and real impact in the technologies we want. 2. Multistakeholderism is key to addressing the rights of young people, women and gender-diverse people online. Only through this can we envision an Internet that continues to

Session Report

In addressing that, the speakers indicated that women face a great deal of cyberbullying online. For example, one speaker shared that Indonesia has seen a surge in online harassment and bullying of women and gender-diverse people, compounded by the fact that some of those abuses come from people in different jurisdictions. To give context, the speaker described the situation in Indonesia: since the COVID-19 lockdown in 2020, the country has witnessed a staggering 300 percent surge in online abuse cases, leaving victims without access to essential legal and mental health support. The speakers also indicated the need for a global policy to fight such abuse and for platform regulators to play their role. The key takeaways from the policy questions are:

    1. There should be enactment of global policy that fights against online abuse. Thus, facilitating international cooperation and forging global cooperation amongst law enforcement to address cross-border jurisdiction issues.

    2. It's pivotal to strengthen global partnerships to tackle online harassment. Policies must not only respond but must actively protect our digital citizens.

    3. Ethical AI practices and rules need to ensure that technology becomes a helper, not an enemy in our fight against online harassment.

    4. Governments need to advocate for and enforce policies that foster a secure legal environment across law enforcement. Social media and tech companies must incorporate safety-by-design principles, focusing not only on enabling reporting mechanisms but also on effectively addressing post-report issues and preventing further perpetration.

In terms of how youth and gender-diverse groups are currently present on the Internet, the speakers mentioned that these groups are increasingly and actively participating in Internet-related initiatives across various fields, largely because of an increase in STEM education and other online training courses. Social media activism has fostered many online groups and communities that provide a safe space for individuals and members of diverse groups from different backgrounds and tech-related industries to connect, share experiences, and support each other. The speakers mentioned that the digital space can really be a minefield, especially for youth, women, and gender-diverse people: while it offers a platform for advocacy and change, the backlash, especially from misogynistic and alt-right groups, is real, intense, and highly organized. Some key takeaways are:

    1. To give youth a platform where they can share ideas and engage with diverse people from different backgrounds and countries.

    2. To discuss challenges and some innovative solutions that can help increase advocacy and amplify voices on these interrelated issues.

    3. Platforms must invest in and reinforce rigorous content moderation to protect marginalized voices and dismantle hate-driven narratives.

The panel ended by encouraging a multi-stakeholder approach to solving the issue of online abuse. 

IGF 2023 WS #85 Internet Human Rights: Mapping the UDHR to Cyberspace

Updated:
Human Rights & Freedoms
Key Takeaways:

The vital role of corporations in the definition, regulation, and enforcement of digital human rights.

,

The importance of corporate cooperation in digital human rights as a way to bolster and defend the multistakeholder paradigm for Internet governance.

Calls to Action

Urge corporate involvement in development of digital human rights.

,

Watch for an imminent paradigm shift in digital human rights management, from algorithmic approaches to AI-based ones.

IGF 2023 WS #64 Decolonise Digital Rights: For a Globally Inclusive Future

Updated:
Human Rights & Freedoms
Key Takeaways:

Colonial legacies continue to shape the internet and technology, perpetuating biases and reinforcing historical prejudices. Technology is not neutral, and the internet can replicate patterns of oppression, creating a greater need for decolonization.

,

To truly decolonize the internet and digital space, it is essential to empower marginalized voices and communities. All stakeholders should actively seek out and amplify the voices of marginalized communities in discussions about technology and digital rights.

Calls to Action

Tech companies must actively recruit from marginalized communities. Governments should establish policies and regulations that require tech companies to prioritize diversity and inclusion in their technology development teams. Civil society organizations must raise awareness about diversity's benefits. Marginalized communities should actively engage in training and education programs that provide the necessary skills for technology development.

,

Tech companies, governments, and civil society organizations should collaborate to create platforms and spaces that value underrepresented voices. Governments should allocate funding for training and education programs that empower marginalized communities to navigate the digital space safely and effectively. Marginalized communities should actively share their experiences & collaborate with stakeholders to promote their digital rights

Session Report

Digital/Platform-based Labor: Precarious Conditions and Unequal Opportunities

The session brought to the forefront a critical issue: the pivotal role of human labor in the development of AI/digital technologies. It revealed that digital development relies heavily on the human labor located predominantly in countries of the global South, where thousands of workers are engaged in activities such as data collection, curation, annotation, and validation. Companies benefit from these platforms while also often bypassing labor rights and protections such as minimum wage and freedom of association. Further, the precarious labor conditions that plague these workers include low pay, excessive overwork, short-term contracts, unfair management practices, and a lack of collective bargaining power, leading to social, economic, and physical/mental health issues among these workers in the long run. One notable initiative in this regard is the Fair Work Project, which seeks to address these labor conditions in nearly 40 countries. The project assesses digital labor platforms based on a set of fair work principles, including factors such as pay conditions, contract management, and representation. The Fair Work Project aims to drive positive change within the digital labor market and create a fairer and more equitable working environment. Speakers asserted that technology production should not merely be a process undertaken by technologists and coders but should include local and marginalized communities who possess a deep understanding of cultural nuances and the specific issues they face.

Data Colonialism: Protecting ‘Self’ Online 

The speaker discussed the exploitation of personal data without consent and argued that personal data is often used for profit without people's knowledge or permission, highlighting the need for more transparency and accountability in handling personal data. The speakers emphasized how the terms of service on online platforms are often unclear and full of jargon, leading to misunderstandings and uninformed consent. One of the main concerns raised was the concept of data colonialism, which, the speaker explained, aims to capture and control human life and behavior for profit. She urged individuals to question data-intensive corporate ideologies that incentivise the collection of personal data, which perpetuate existing inequalities, lead to biases in algorithms, and result in unfair targeting, exclusion, and discrimination. The speaker then suggested that individuals should take steps to minimise the amount of personal data they share online or with technology platforms, emphasising the importance of thinking twice before agreeing to terms and conditions that may require sharing personal data. She also proposed the idea of digital minimalism, which involves limiting one's social media presence as a way to minimise data. She highlighted that the debate on data colonialism in fact has a silver lining, as it provides an opportunity to create systems rooted in ethics. The speaker went on to advocate for the concept of ownership by design, which includes minimisation and anonymisation of personal data. However, she cautioned against an entitled attitude towards data use, arguing that data use and reuse should be based on permissions rather than entitlements or rights. She called for more transparency, accountability, and individual action in minimising data sharing, and also emphasised the need for critical digital literacy programmes.

Digital Sovereignty and Intellectual Property

The speakers explored the intricate web of internet regulation and its impact on digital sovereignty and decolonisation. Multinational companies were found to subtly impose their home country's laws on a global scale, disregarding national legal systems. Intellectual property legislation, such as the Digital Millennium Copyright Act (DMCA), was cited as an example of this behavior. 

Gender-Based Disinformation

Addressing the challenge of gender-based disinformation in South Asia is a central concern highlighted in the session. Women, trans individuals, and non-binary people are often targeted by disinformation campaigns that aim to silence and marginalize them. The session emphasizes the importance of documenting and combating gender disinformation and advocating for collaborative approaches that involve diverse stakeholders.

Digital Literacy and Bridging the Digital Divide

One key aspect illuminated by the session is the need for digital literacy programs and skills training to empower marginalized communities. The speakers advocated for democratizing access to digital education and ensuring that training is contextualized and relevant. This inclusive approach recognizes the diverse needs and cultural specificities of different communities, enabling them to harness the power of digital tools effectively. 

Decolonizing the Internet and Digital Technology

The concept of decolonizing the internet and digital technology production is evoked as a process that involves not only the use of digital technologies but also the transformation of the production process itself. By incorporating diverse perspectives and local context into technology creation, the aim is to avoid biases and discrimination. The speakers advocated for adapting platform policies to respect cultural differences and acknowledge human rights, rather than solely adhering to external legislation.

 

Conclusion:

The journey towards a decolonized internet and technology landscape is ongoing. It requires
continuous reflection, dialogue, and action. We can all strive for a digital space that respects
and empowers all individuals, regardless of their background or geographic location. By
working together, we can create a future where the internet truly becomes a force for
equality, justice, and liberation.

IGF 2023 Launch / Award Event #159 Digital apologism and civic space: the peruvian case

Updated:
Human Rights & Freedoms
Key Takeaways:

1. The legislation surrounding the crime of apologia to terrorism in Peru, particularly its stricter penalties for online expressions, raises significant questions about the balance between security and freedom of expression in the digital realm.

,

2. The lack of differentiation between various online platforms and the broad definition of technology used in the law makes it essential to critically assess the potential consequences of the legislation on freedom of expression. The implementation of these laws may affect individuals' rights and raise concerns about discrimination based on the medium of expression.

Calls to Action

1. Stay Informed: Stay updated on the evolving laws and regulations regarding freedom of expression in the digital age.

,

2. Demand accountability: it is important to hold governments accountable for how they prosecute apology for terrorism. We need to know about the cases, their results, and how they are being handled by the police, the attorneys, and the judges.

IGF 2023 Launch / Award Event #46 The State of Global Internet Freedom, Thirteen Years On

Updated:
Human Rights & Freedoms
Key Takeaways:

• The multistakeholder model for internet governance is a crucial part of combating cyber threats, strengthening human rights and democracy online, and maintaining a global, open, free, and secure internet.

,

• Laws governing the digital space that are developed in democracies can have drastically different and unintended consequences for people’s rights when imposed in less free contexts.

Calls to Action

• The Freedom Online Coalition should be more inclusive in its efforts to engage with civil society around the world.

,

• Democracies should ensure that they are modeling rights-respecting legislation and regulatory approaches that will not restrict human rights online in less free spaces.

Session Report

Moderator Allie Funk began the session with an overview of findings from Freedom House’s Freedom on the Net 2023 report, which examined how artificial intelligence is deepening the crisis of internet freedom. She noted that AI drives intrusive surveillance, empowers precise and subtle censorship, and amplifies disinformation campaigns as generative AI lowers the barriers to entry for the disinformation market. She shared that if AI is designed and deployed safely, it can be used to bolster internet freedom. She closed by noting that as AI augments digital repression, there is an urgent need to regulate it, drawing on the lessons learned over the past 15 years of internet governance, namely: not overly relying on companies to self-regulate, centering human rights standards in good governance of the internet from governments, and the importance of involving civil society, particularly from the global majority. 

Olga Kyryliuk discussed how the internet freedom space has changed in the last ten years. She described how initial hopes were that the multi-stakeholder model would make it easy to reach consensus on a way to regulate technology, and that ten years ago, many also felt that legal regulation would be able to catch up with technological advancement. She noted that, looking back, regulation has still lagged behind, but there is now a greater recognition of the importance of digital rights. She shared that innovations in AI and other technologies have brought new risks and opportunities, particularly when it comes to governments balancing their safety and security interests with protecting rights online. She closed by noting that continued multistakeholder collaboration is positive, but many people want more than venues for discussion: they want actionable results, such as initiatives or partnerships that will lead to change. 

Guus Van Zwoll discussed walking the tightrope of the “Brussels effect” and trying to ensure that regulations adapted by other countries with lower rule of law standards will not have adverse human rights impacts. He touched on the difficulty of balancing between fighting censorship and fighting disinformation. He described work done in the Netherlands to ensure that regulation incorporates strong requirements for transparency and references to the guiding principles on business and human rights, so that if other countries copy EU regulations, these considerations that were reached through a long multistakeholder process will already be baked into the laws. He noted that when the Netherlands has bilateral discussions, Dutch policymakers urge other governments to adopt human rights and democratic clauses in their regulations.  

Emilie Pradichit discussed the proliferation of harmful cyber laws throughout Southeast Asia that target dissenting voices in the name of security, and cases in which people in Thailand and Laos have been imprisoned for speaking the truth or sharing criticism on Facebook. She identified the lack of clear definitions for terms like national security as a problematic part of such regulation, and that voluntary commitments from tech companies do not do enough to counter such problems. She expressed that companies should have meaningful engagement with other stakeholders, both on how to prevent harm and to provide remediation after the fact, not just to tick the box of consulting civil society with no follow-up. She noted that digital rights organizations are small and cannot combat the misuse of platforms by governments on their own, but end up being told that companies cannot do anything either. She called for decisions about how tech companies and AI should be regulated to come from those who have been most impacted, through meaningful engagement that holds the powerful to account. 

On multistakeholder engagement, Guus discussed efforts through the Freedom Online Coalition (FOC) and other initiatives, to incorporate and mainstream the Dutch Cyber Strategy among civil society groups, to ensure that while digital security remains high, there are principles for governments seeking to balance this with human rights, developing the governance structures to protect against a surveillance and censorship apparatus. 

Olga commented on the desire among many in civil society for greater clarity about engaging in the FOC and other initiatives. She called for greater opportunities, in addition to the FOC advisory network, such as bringing back the Freedom Online Conference, as a venue for civil society to consult with FOC member governments on issues including AI. 

Emilie emphasized that the FOC has not yet made itself accessible among civil society groups in Southeast Asia or other contexts across the majority world, where rights defenders are most under threat from digital authoritarianism and struggling under repressive governments. She pointed out the role that FOC governments could play in pressuring less democratic governments or companies that are operating in repressive contexts, particularly in cases where those still in-country are unable to speak out safely. 

Olga added that getting access to government stakeholders at regional level IGFs and other meetings can be a challenge for civil society. She suggested that FOC governments should work to incentivize governments to engage with local and regional communities outside the global IGF, in order to develop partnerships and work together in a meaningful multistakeholder way. 

Throughout the Q&A, panelists discussed the challenges for civil society in engaging with other global efforts, including the UN’s Global Digital Compact. Panelists also discussed the difficulty of ensuring that laws built on models from the EU, whether the DSA, DMA, or EU AI Act, still include positive protections for human rights defenders without imposing regulations that are overly burdensome and unresponsive to local needs and realities.  

Olga highlighted the importance of dialogue and conversations happening early on, before a law is drafted and adopted, to ensure that it is responsive to the local context, which sometimes requires advance capacity building as well. Emilie shared the frustration that civil society in Southeast Asia often feels with government-led regulation efforts, as there are few to no opportunities to engage. She noted that governments will say they are adopting global standards as a way to receive diplomatic applause, while still refusing to engage with human rights defenders or other stakeholders. 

Guus noted that the Brussels effect was not always intended, and that although EU governments developed these laws, the way they have had global impacts was not something that was planned, which makes civil society feedback a crucial part of the learning process to improve the implementation of future regulations. 

No feedback was received from remote participants during or after the session. 

IGF 2023 Lightning Talk #94 The technopolitics of face recognition technology

Updated:
Human Rights & Freedoms
Key Takeaways:

To strengthen the public debate on the potential risks of using face-recognition technologies for public security and education.

Calls to Action

The public sector in general, mainly in the fields of education and safety, as well as law-makers, are called to action to critically and transparently involve civil society in the debate on the risks and repercussions of implementing high-risk technologies such as face recognition.

Session Report

This Lightning Talk focused on the project to mobilize researchers, social movements and sectors of civil society for a public debate on the use of face-recognition (FR) by State agents in public spaces in Brazil.

It was acknowledged that the use of FR as a mechanism for the prevention and repression of crime satisfies demands for safety fed by the fear of urban violence. However, it was argued that the use of this technology leads to expanded surveillance with a discriminatory algorithmic bias in public spaces and can create hostile environments that favor the violation of fundamental rights and guarantees such as freedom, privacy, personal data protection, assembly and association. Speakers highlighted how, in the course of this debate, legislation has been developed with members of congress to ban the use of FR in public spaces, and how the use of FR has become more common in Brazilian schools, with a focus on the case of the state of Paraná, in the South of Brazil. 

The session covered the following points:

  • The use of face-recognition (FR) technology in Brazil, where speakers gave some context and explained how resistance to FR has been built up in the past few years.
  • Arguments for banning FR.
  • The naturalization of FR, and the escalation of its use in the state of Paraná, with massive usage of FR in schools.
  • A conclusion arguing that this is a battle still being waged.

The main presentation ended with the statement that the debate on FR in Brazil has been a milestone in the inclusion of racial, socioeconomic and gender issues on the sociotechnical agenda. It also represents, for the same reasons, progress in the debate on the penal system and mass incarceration in a country which has the third largest prison population in the world. With more than 900,000 prisoners, of whom 45% are temporary detainees, Brazil lies behind only the USA and China. Today, the debate about these issues can be found in the press, traditional and independent media and parliaments as well as in the, albeit very often hypocritical, discourse of agents of the State and the private sector.

Finally, it was said that this whole debate is not a question of merely banning the use of a technology that has technical failings, but of questioning its technopolitical dimensions and challenging the unfair, racist structure of surveillance, monitoring and repression systems in Brazil.

Therefore, it is fair to say that the main aim of this session was to raise awareness about the debate on the potential risks of using face recognition technologies for public security and education in countries with a history of repression and discrimination against minority populations.

After both speakers presented the Brazilian case within 20 minutes, the discussion focused on two main issues. The first centred on the problem of naturalising the use and acceptance of surveillance technologies in schools through low-profile implementations without proper critical assessment or public control. The second focused on the ways in which these projects are socially, politically and economically constructed with regard to the actors and interests involved in the spread of face-recognition technologies in many sectors of society.

The session closed on time.

Speakers: Fernanda Bruno and Rodrigo Firmino. Moderator: Rodrigo Firmino.

IGF 2023 Lightning Talk #60 Rights by Design: Privacy Engineering for the Rights of All

Updated:
Human Rights & Freedoms
Key Takeaways:

Conducting a human rights impact assessment is crucial when developing a new product, feature, or tech policy. As privacy is an enabling right for many other rights, potential privacy harms for individuals are likely to also have human rights implications. These privacy harms can be assessed using established frameworks and taxonomies (Microsoft Harms Modeling Framework, LINDDUN, PLOT4AI, Citron & Solove's privacy harms taxonomy, etc.).

Calls to Action

We call on both technology companies and policymakers to introduce ethics and threat modeling training into their organizations. It is crucial that every individual involved in the product or policymaking process reflects on their own moral compass, the privacy harms experienced by others, and their "red lines" of what they will refuse to build or legislate in order to build the moral imagination necessary to assess their work's ethical impact.

Session Report

The aim of this Lightning Talk session was to introduce privacy as an enabling right and how rights protections can be built into technology products and policymaking (“Rights by Design”). An additional goal was to present privacy from a range of perspectives to encourage attendees to challenge their preconceptions about privacy. Based on the feedback received from attendees, the session successfully achieved both aims for the small but highly engaged audience on-site. We were delighted to hear from attendees with experience in privacy and data protection that they had gained new insights from the session.

The session consisted of a 20-minute presentation followed by a Q&A and discussion. We will briefly recap the presentation content here; for the complete slides, please see the link in the session description. To begin, the speakers asked attendees whether they thought privacy mattered, discussed examples of why privacy is important, and defined privacy to ensure everyone was aligned on the concept for a productive discussion. They next outlined the privacy rights granted to individuals under data protection law and discussed how digital privacy is a crucial enabler for other fundamental rights, including examples of how these rights contribute toward achieving the Sustainable Development Goals (SDGs). This message was reinforced by a case study of how biometric privacy - or lack thereof - impacts individuals’ rights. After defining privacy engineering and privacy by design with examples from architecture and software, the speakers introduced Rights by Design as a necessary extension of the Privacy by Design principles. Finally, the presentation concluded with proposals of how Rights by Design could be applied to technology, both in the corporate world and in policymaking.

These proposals prompted a lively discussion after the presentation. As this was the last session of the day in Speaker’s Corner, it was possible to continue the discussion well beyond the planned 10 minutes. Attendees’ contributions focused on the presentation’s two calls to action: to introduce ethics and threat modeling training into organizations working with technology, and to ensure that human rights impact assessments are conducted when developing new technology products, features, or policies. Attendees raised concerns about how feasible ethics training would be in a multicultural organization, given that beliefs about what is right and wrong vary widely, and whether this should (or even could) be implemented via regulatory efforts. The speakers clarified that the goal of such ethical training is not to instill specific values but rather to raise individuals’ ethical awareness, encouraging them to develop their own moral compass and establish personal ethical red lines, for example which product use cases they would refuse to develop software for on moral grounds. Regarding making such training a regulatory requirement, the discussion reached a consensus that this would likely be less effective than multi-stakeholder engagement to make such training a market norm. Establishing a market norm would, however, only be effective within companies, whereas such training is also essential in policymaking fora. 
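As a purely illustrative sketch of how such a human rights impact assessment might be recorded in practice, the structure below (in Python; the field names, rating scales, and the biometric-login example are hypothetical, loosely inspired by the harm taxonomies mentioned in the takeaways rather than material presented in the session) ties each identified privacy harm to the rights it may affect, its likelihood and severity, and a mitigation or personal "red line" decision:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class HarmAssessment:
        """One entry in a hypothetical Rights by Design impact assessment."""
        feature: str                 # product feature or policy under review
        harm_category: str           # e.g. surveillance, exclusion, discrimination
        affected_rights: List[str]   # human rights potentially impacted
        likelihood: str              # low / medium / high (assumed scale)
        severity: str                # low / medium / high (assumed scale)
        mitigation: str              # design change, safeguard, or "red line: do not build"

    assessment = [
        HarmAssessment(
            feature="biometric login",
            harm_category="surveillance / function creep",
            affected_rights=["privacy", "freedom of assembly"],
            likelihood="medium",
            severity="high",
            mitigation="store templates on-device only; offer a non-biometric fallback",
        ),
    ]

    for entry in assessment:
        print(f"{entry.feature}: {entry.harm_category} -> affects {', '.join(entry.affected_rights)}")

Such a record is deliberately simple; the point is that every feature receives an explicit, reviewable decision before it is built.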

There was considerable interest in the session topic before the session, which led to insightful preliminary discussions in Days 1 and 2 of the conference. We would like to thank all involved - both session attendees and those who joined for discussions beforehand - for their valuable contributions. In particular, we would like to thank the Design Beyond Deception project team from the Pranava Institute, who introduced their research initiative investigating deceptive UI/UX design and educating designers in ethical, human-centered design practices. The team contributed training materials to share with session attendees and with the speakers’ communities after the conference, and described their research into how deceptive design practices differ in the Global South. Together we discussed how such deceptive designs target the highly vulnerable, such as impoverished communities without access to traditional financial services, and how such exploitation might be prevented through legislative and educational efforts.

It was unfortunately not possible to conduct the session in a hybrid format as planned, as the room had no camera or microphone to stream the session. Due to confusion about the assigned room (the talk temporarily had no room assigned in the schedule) we found out too late to make our own recording arrangements. We therefore have no feedback from remote participants to report. However, we have shared the slides online and will open discussions within our communities about the topic. If organizing a session in future, we would confirm the room assignment and available equipment on the first day of the conference to ensure sufficient time to make arrangements to offer the session in a hybrid format. This was a key learning for us as session organizers, as we would like to ensure that stakeholders unable to attend the IGF in person are still able to make a full contribution.

IGF 2023 Lightning Talk #99 Technological innovations and sustainable development: how can internet of things & AI help solve global challenges?

Updated:
Sustainability & Environment
Key Takeaways:

Acceleration programs enabling & financing close collaboration between startups and corporations, cities and public institutions can be a very effective tool in innovation absorption, diffusion and implementation in the fight against global problems and challenges

,

More testing areas and labs for innovative solutions with global problem-solving potential will be very helpful in spreading their impact

Calls to Action

It's vital to spread the idea of acceleration programs enabling & financing close collaboration between startups and corporations, cities, NGOs and public institutions

,

We need more testing areas and labs for innovative solutions with global problem-solving potential

Session Report

INTRODUCTION

The session started with introductions of the speakers and the represented entities.

Mr. Jacek Bukowicki introduced himself as a representative of the Polish Agency for Enterprise Development, Department of Startups Development. The Agency’s mission is to support the development of SMEs (small and medium-sized enterprises) in Poland. Since its foundation in 2000, a great deal of the Agency’s effort has gone into distributing European Funds through a variety of evolving programs, projects and calls for proposals. Apart from financial support, the Agency also offers educational services as well as research, evaluation and analysis reports. Mr. Bukowicki described the Agency’s flagship incubation and acceleration programs, aimed at helping innovators, budding entrepreneurs and startup teams at the pre-seed, seed and early stages of development, including first proof-of-concept / solution-testing collaborations within the real business and industrial infrastructure of key market players (corporations, state-owned companies, public institutions, etc.). One of the very successful initiatives, called Poland Prize, is an internationally renowned soft-landing and acceleration program meant to attract foreign startups and innovative solutions to Poland from all over the world.

Mr. Przemysław Nowakowski introduced himself as a representative of and expert on behalf of the Łódź Special Economic Zone. The main mission of the Zone is to bring the best startups and solutions to Poland. The Zone has been recognized as one of the top 10 special economic zones worldwide. It has been selected as a key operator of multiple PAED acceleration programs, including the last edition of Poland Prize.  

MAIN SUBJECT

Both speakers agreed that many global challenges are already being addressed and can be solved with brave and revolutionary ideas from innovators and small, agile startup teams. One of the barriers to their implementation is the financial gap, as well as stakeholders’ and market players’ reluctance to take risks. By providing direct proof-of-concept (PoC) grants, acceleration programs take over responsibility for the possible failure of the solution testing and validation process, which is one of the benefits encouraging all engaged parties to take an active part.

EXAMPLES

Next, the discussion concentrated on showing examples and cases of how technological innovations engaging internet connectivity and resources (e.g. connected devices, internet of things, artificial intelligence) can help solve global challenges in various areas. The first discussed area calling for action was climate change and its consequences.

Connected weather sensors together with machine learning / AI software can help not only predict but also mitigate the consequences of the growing number of weather hazards and disasters such as fires, floods, earthquakes, hurricanes and droughts, saving human lives as well as resources.
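As a minimal sketch of the kind of sensor-plus-software pipeline described above (in Python, with invented rainfall readings and thresholds rather than figures from any of the deployed solutions), a connected rain gauge feeding a simple rolling-average check could trigger an early flood response:

    from collections import deque

    # Hypothetical hourly rainfall readings (mm) from a connected sensor
    readings = [2, 3, 1, 4, 12, 18, 25, 30]

    WINDOW = 3             # hours in the rolling window (assumed)
    ALERT_THRESHOLD = 20   # average rainfall in mm that triggers a warning (assumed)

    window = deque(maxlen=WINDOW)
    for hour, mm in enumerate(readings):
        window.append(mm)
        average = sum(window) / len(window)
        if len(window) == WINDOW and average >= ALERT_THRESHOLD:
            # In a real deployment this could open retention gates or alert an operator
            print(f"Hour {hour}: rolling average {average:.1f} mm -> open retention gates / send alert")

Real systems would of course use richer models and forecasts, but the principle of acting on live sensor data is the same.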

Mr. Bukowicki described a few solutions addressing the problem of recent heavy rains and flooding intertwined with severe droughts affecting big cities as well as farming areas. One of the mentioned solutions is based on small connected devices installed in crucial gates of a sewage system, allowing safer management of rainwater flow and its retention within a city’s infrastructure. Other solutions also enabled local rainwater retention and local electricity production, and then relieved droughts with the retained water. In turn, Mr. Nowakowski mentioned solutions for smart cities and a new circular-economy approach to developing a city’s power grid and communications.

Climate change also directly affects farming and food production capabilities. There are already many solutions addressing the problems of food waste, food production shortages and the zero hunger challenge. Connected weather and ecosystem field sensors can help farmers predict and act at the right time so as to secure the best vegetation conditions for crops in these increasingly unpredictable times. New innovative technologies also enable more effective vertical farming as well as extended light and temperature control in greenhouses, offices, plants and other facilities.

Regarding the need for sustainable cities, climate change is just one of the factors affecting the quality of life in urban areas. Mr. Bukowicki mentioned innovative solutions addressing the problem of city air pollution monitoring and warning systems as well as solutions aiming at its prevention and reduction.

New solutions already support companies in transitioning to electric vehicles and implementing EV charging infrastructure. By means of telematics data and advanced predictive algorithms, it is easier to provide recommendations for selecting suitable electric vehicles, considering the individual needs and preferences of each organization. New systems are entirely data-driven, which allows companies as well as individual citizens to effectively manage costs and efficiently implement electromobility. Mr. Nowakowski mentioned a startup with a solution enabling more efficient and economical use of energy for street lighting. Another challenge, the common shortage of parking spaces in city centers, is already being addressed by a platform that enables sharing of existing parking facilities.

The quality of life cannot be secured and improved without good health, advances in and better accessibility of medicine, prevention and therapies, as well as an overall environment of wellbeing. Artificial intelligence, machine learning solutions, wearable devices and similar technologies, combined with huge amounts of digitalized medical data, can help implement better and more effective disease prevention as well as provide more accurate therapies and diagnoses for both patients and doctors. Mr. Bukowicki named a few already incubated and commercialized solutions that would not have come about without the support of incubation programs and engaged professionals.

Many global challenges can also be addressed through innovations supporting Industry 4.0 and modern infrastructure. Mr. Nowakowski mentioned AI-powered solutions for heavy industry, drones and 5G connectivity as answers to the problems addressed.

SUMMARY

The discussion could have continued for another half an hour if the session time had been longer. Conclusions included recommendations on what is needed next and in the years to come to allow more technological innovations to solve more global challenges.

The speakers encouraged the audience to ask questions and provided contact details to those wishing to keep in touch for future reference and potential collaboration. The discussion with some of the listeners continued after the session finished, outside of the room.

IGF 2023 Lightning Talk #124 Youth for Digital Inclusion & Environmental Sustainability

Updated:
Sustainability & Environment
Key Takeaways:

OnePile’s youth initiatives aim to encourage and provide green digital solutions that cultivate new interests and knowledge by developing young people’s reading habits alongside the idea of sustainability.

,

Youth-driven initiatives can start with small-scale Sustainable Development Goals (SDGs) projects and employ creative solutions to leverage digital platforms in promoting sustainability.

Calls to Action

Harness the youth’s creativity, and together we can accomplish the Sustainable Development Goals (SDGs) through small-scale digital initiatives! Dare to dream big and initiate SDGs actions!

,

Facilitate the engagement of young individuals with various stakeholders on platforms aimed at fostering their growth and progress towards a sustainable future.

Session Report

The session started by addressing the importance of promoting youth in digital inclusion, focusing on sustainability education as well as the Sustainable Development Goals (SDGs) that matter in today’s communities. The factors that influence access to digital education revolve around barriers related to infrastructure and the divide between urban and rural areas.  

OnePile is a technology-driven initiative that focuses on leveraging technology to solve problems of sustainability and education in Hong Kong. The organization aims to upskill people of all ages with the right competencies to access new knowledge through books, making use of digital education through its Smart Book Crossing Cabinet and encouraging the circulation of books. Books can be donated and taken for free, reducing the need to print new books and saving the carbon emitted in their production. This also benefits the underprivileged in Hong Kong by giving them free access to books. 

The key metric of the concept of sustainability in education is implemented through circulating books: the organization has circulated over 78,000 books, equivalent to an estimated carbon cost of 3,600 tons. Conducting STEM workshops with local schools in Hong Kong allows the organization to introduce the latest technologies and the process behind the design of the Smart Book Crossing Cabinet project, teaching basic STEM skills and showing how this is a step towards education for sustainability. The motto of OnePile is that “Books are agents of change, providing valuable knowledge, reducing carbon emissions and, at the same time, reducing impact on the environment.” 

The second part of the session introduced the SDG Book Club, a youth-led initiative. The aim of the project is to increase awareness of reading habits and a sustainable culture; it was inspired by the SDNS survey 2023 conducted in Hong Kong, which showed that only 20% of the city’s population is aware of the concept of the SDGs. Through supporting youth development, the SDGs were introduced through different events (i.e. the Plastic Reduction Charity Run and the Potted Plants Combat Climate Change event), leading youth to focus on inclusion and environmental sustainability. 

Furthermore, OnePile also supports innovative ideas by allowing the youth team to create online applications (i.e. a website and apps). The application highlights a feature for classifying books related to the Sustainable Development Goals, allowing users to find books based on their SDG interests. It also allows readers and users to share their perspectives through its comment section, creating an experience of engaging with the culture the organization is trying to promote and its dedication to SDG 11, developing sustainable cities and communities.  

The guest speaker showcased the efforts undertaken in Hong Kong to promote the Sustainable Development Goals (SDGs), emphasizing how these initiatives fostered community cohesion and unity. By actively encouraging and engaging youth, the speaker demonstrated how they played a pivotal role in working towards achieving the SDGs. Their involvement not only empowered and inspired young individuals but also reinforced the idea of collective responsibility in driving positive change for a sustainable future.

During the Q&A session, a participant suggested that exploring and nurturing collaborative relationships can leverage the experience and expertise of existing coordinators while harnessing the energy and innovative ideas of passionate young individuals. By fostering such partnerships, the collective impact can be enhanced, creating a powerful synergy in advancing the SDG agenda in the community.

IGF 2023 Lightning Talk #3 The Internet’s Carbon Curse: Can We Break It?

Updated:
Sustainability & Environment
Key Takeaways:

A reset and reinvention of the processes, methodologies and mechanisms of internet-based innovation should be modeled on inclusive, human-oriented use that is eco-friendly by design, from policy through to the operation of internet innovations and infrastructure.

,

Users, specifically the youth demographic, should be empowered to consume technology sustainably and to provide thought leadership on a digital reset at the intersection of communication, transportation and energy, with the help of emerging technologies, in order to reverse and close the gaps of internet-based emissions

Calls to Action

A digital reset for the social, economic and political structure of society, geared towards sustainable innovation

,

The digital ecosystem shouldn’t omit its contribution to the current state of rapid climate change, but rather counteract it with policy frameworks and accountability structures modeled on an equal-footed multistakeholder approach

Session Report

Breaking the Internet's Carbon Curse: A Report by the Emerging Youth Initiative

 

Introduction

The Internet is often seen as a virtual world that transcends physical boundaries and limitations. However, the Internet is not immaterial or invisible. It relies on a complex and vast infrastructure of cables, servers, routers, devices, and applications that consume energy and emit carbon dioxide. The Internet is also not neutral or universal. It reflects and reproduces the existing inequalities and injustices in the distribution and access of energy and digital resources. The Internet is, therefore, both a challenge and an opportunity for achieving environmental sustainability and social justice.

 

How can we harness the power of the Internet to create a more sustainable and inclusive world? How can we reduce the environmental impact of the Internet while expanding its benefits for society? How can we balance the need for digital inclusion with the need for environmental protection? These are some of the questions that motivated and guided this report, which presents the findings and recommendations from the Emerging Youth Initiative (EYI) network, a group of young digital thought leaders who participated in an interactive session at the Internet Governance Forum (IGF) 2023.

 

The Emerging Youth Initiative believes that youth have a vital role to play in shaping the future of the Internet and its governance. We believe that the Internet can be a catalyst for innovation and transformation that supports green policy and consumer choices. Our vision is to empower meaningful digital transformation for youth by collaborating on projects, events, and advocacy campaigns that address the issues of Internet governance and sustainability.

 

This report is based on the IGF 2023 session titled “The Internet’s Carbon Curse: Can We Break It?” and the workshop “How to Achieve a Sustainable Internet”. The sessions were designed to engage the IGF community in a dialogue about the challenges and opportunities of reducing the digital carbon footprint. The session also featured a walkthrough from a panelist illustrating the life cycle of ICT and Internet technology, from sourcing materials to end use and disposal.

 

The report aims to share the insights and outcomes of the session with a wider audience, as well as to provide concrete actions and recommendations for different stakeholder groups to achieve a sustainable Internet.

Background

The Emerging Youth Initiative is a network of young professionals, activists, and researchers who are interested in the issues of Internet governance and sustainability. The network’s mission is to empower meaningful digital transformation for youth by collaborating on projects, events, and advocacy campaigns.

 

We organized a session at the IGF 2023 titled “The Internet’s Carbon Curse: Can We Break It?”. The session was held on Day 0, October 8, 2023, and attracted participants from different stakeholder groups, such as governments, civil society, academia, and the private sector.

 

 Our session used an open-ended question methodology to engage the participants in a dialogue about the challenges and opportunities of reducing the digital carbon footprint. 

Methodology

The inclusive approach to moderation featured a dynamic discussion among participants, organized around the following thematic statement: 

 

“Digitization is here to stay; cutting back energy use when there’s a digital divide is unrealistic. How can we balance the need for digital inclusion with the need for environmental protection?”

 

 Findings

Participants recognized that digitization is inevitable and beneficial for society, but also that it has a negative environmental impact. Concern remains about the unequal distribution of energy and digital resources among different regions and communities, which creates a digital divide and exacerbates social inequalities.

 

Several factors contribute to the digital carbon footprint, such as inefficient energy use, reliance on fossil fuels, lack of recycling and reuse, and consumer behavior. Possible solutions, such as improving energy efficiency, increasing renewable energy sources, extending device lifespans, and promoting user awareness, are potent measures to reduce it.

 

ICT and Internet technologies play an enabling role for other interlinked sectors, helping them reduce their carbon emissions by providing solutions for smart transportation, smart buildings, smart agriculture, and smart manufacturing. Emphasis should be placed on the careful design and implementation of these solutions to avoid rebound effects or unintended environmental consequences.



 

Recommendations: 

  •    Governments should adopt policies and regulations that incentivize green ICT practices, such as carbon taxes, subsidies, standards, and labels.
  •   Private sector should innovate on better climate offsets through collaborative spaces on green tech, open grid, and climate data insight solutions. They should also invest in renewable energy sources and adopt circular economy principles.
  •   Civil society should advocate for digital sustainability issues and hold governments and the private sector accountable for their actions. They should also educate and empower users to make informed choices about their digital consumption.
  •   Academia should conduct research on the environmental impact of ICT and Internet technologies and develop solutions that minimize their carbon footprint. They should also collaborate with other stakeholder groups to share knowledge and best practices.
  •  Media should play a critical role in digital sustainability reportage with evidence-backed sources and objective climate agendas. They should also counter misinformation and disinformation spread by digital platforms that contribute to their carbon emissions.
  •  Users should adopt responsible digital behavior, such as reducing unnecessary online activities, choosing energy-efficient devices and applications, and disposing of e-waste properly. They should also demand more transparency and accountability from service providers and policymakers.



 

Discussion

Our session demonstrated the interest and engagement of the IGF community in the topic of digital sustainability. The session also showed the diversity and complexity of the issue, as well as the need for collaboration and coordination among different stakeholder groups. 

The session provided a valuable opportunity for our organization and youth to showcase its work and vision, and to connect with other actors and initiatives in the field.

 

Conclusion

The Internet is a double-edged sword: it can cut emissions and shape a new, modern, sustainable industry, but it also has a significant carbon footprint that needs to be reduced. The Emerging Youth Initiative is committed to sensitizing and mobilizing youth on the topics of environmentalism, youth, and climate technology. Our network invites other actors and initiatives to join its efforts and collaborate on breaking the Internet’s carbon curse.

IGF 2023 Launch / Award Event #30 Promoting Human Rights through an International Data Agency

Updated:
Human Rights & Freedoms
Session Report

2 Key Takeaways

  1. Digital transformation and so-called “artificial intelligence (AI)” – which can more adequately be called “data-based systems (DS)” – comprise ethical opportunities and ethical risks. Therefore, it is necessary to identify ethical opportunities and ethical risks at an early stage in order to be able to benefit sustainably from the opportunities and to master or avoid the risks.
     
  2. In the avoidance and mastering of risks, technology-based innovation can in turn play an essential role.

2 call-to-action points

In order to allow humans and the planet to flourish sustainably, the following 2 concrete measures are proposed:

  1. Human rights-based data-based systems (HRBDS): Human rights-based data-based systems (HRBDS) means that human rights serve as the basis of digital transformation and DS;
     
  2. An International Data-Based Systems Agency (IDA) should be established at the UN as a platform for technical cooperation in the field of digital transformation and DS fostering human rights, safety, security, and peaceful uses of DS as well as a global supervisory and monitoring institution and regulatory authority in the area of digital transformation and DS – following the model of the International Atomic Energy Agency (IAEA) at the UN.
IGF 2023 Open Forum #161 Exploring Emerging PE³Ts for Data Governance with Trust

Updated:
Data Governance & Trust
Session Report

Introduction

As digital technologies increasingly intersect with privacy concerns, the OECD's insights on Privacy-Enhancing Technologies (PETs) demonstrate that these technologies can enhance privacy and data protection and foster trust. This workshop built on such foundational work, aiming to expand our understanding and application of PETs to explore the multifaceted role of privacy enhancing, empowering, and enforcing technologies (PE³Ts) in fostering privacy and data protection, while enabling the trustworthy use of personal data for growth and well-being.

Distinguished panelists discussed not only how the combination of PETs such as synthetic data, homomorphic encryption and federated learning can enable the trustworthy collection, processing, analysis, and sharing of data, but also explored how digital technologies can be leveraged to enforce privacy laws, enhance transparency, improve accountability, and empower individuals to take more control over their own data. In so doing the session provided a platform for multistakeholder dialogue, aiming to generate insights into the opportunities and challenges of PE³Ts, exploring how these technologies can foster greater trust in the digital landscape, critically contributing to a safer and more inclusive Internet for all.

Privacy-Enhancing Technologies in Action

Panelists detailed how PETs such as homomorphic encryption and differential privacy can be instrumental in data protection. This included the ICO's approach to PETs as tools for secure data sharing, emphasizing alignment with legal compliance and their role in facilitating safer data practices.
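To illustrate one of the PETs mentioned here, the following minimal sketch (in Python, with an invented dataset, query, and epsilon value) shows the basic idea behind differential privacy: calibrated random noise is added to an aggregate query so that no individual record can be inferred, while the overall statistic remains useful:

    import random

    # Hypothetical dataset: 1 if a user has a given attribute, 0 otherwise
    records = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

    def dp_count(data, epsilon=0.5):
        """Differentially private count via the Laplace mechanism.

        A counting query has sensitivity 1 (adding or removing one person
        changes the result by at most 1), so noise is drawn from a Laplace
        distribution with scale 1/epsilon, generated here as the difference
        of two exponential random variables.
        """
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return sum(data) + noise

    print("True count:   ", sum(records))
    print("Private count:", round(dp_count(records, epsilon=0.5), 2))

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy; homomorphic encryption and federated learning address complementary parts of the same problem.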

Panelists also shared examples, such as Mozilla's deployment of PETs and their integration into products like the Firefox browser. In this context, panelists discussed the broader implications of PETs in enhancing user privacy without compromising functionality, particularly in the context of advertising.

Digital Technologies and Privacy Enforcement

Panelists also illustrated how digital technologies for automation can streamline privacy enforcement. Insights from NOYB’s approach were shared where digital technologies are used for monitoring and addressing privacy violations effectively, highlighting the potential for scaling enforcement activities using digital tools. In this context the need for ongoing education and adoption of PETs within EU institutions and across its member states was also highlighted.

Proactive Privacy Management and Integration

Panelists later discussed concepts for proactive privacy management through software analysis, proposing methods to assess and ensure privacy from the development phase of software products. Such an approach suggests a shift towards embedding privacy considerations early in the technology design process. Panelists also stressed the importance of integrating PETs with traditional privacy management practices. In this context, panelists discussed the challenges and opportunities of adopting PETs in various organizational contexts, emphasizing the need for strategic privacy risk management.

Conclusion and Recommendations:

The workshop underscored the multifaceted role of PETs in enhancing and enforcing privacy within digital landscapes. The collaborative discussions highlighted the importance of integrating technological solutions with regulatory and organizational frameworks to achieve robust privacy protections. It led to the following recommendations:

  • Enhanced Collaboration: Encourage multi-stakeholder dialogues to further develop and refine PETs.
  • Increased Awareness and Training: Promote broader understanding and skill development in PETs across all sectors.
  • Guidance and Best Practices: Develop comprehensive guidelines that help organizations implement PETs effectively.
IGF 2023 WS #421 Quantum-IoT-Infrastructure: Security for Cyberspace

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

Strengthening IoT Security by Design: The Internet of Things must incorporate security by design to counteract inherent vulnerabilities. This approach should include adopting internationally recognized standards and best practices to ensure robust security across all IoT deployments.

,

Adapting to Emerging Quantum Technologies: As quantum computing advances, it presents both potential threats and solutions for cybersecurity. National and global strategies should evolve to include quantum-resistant cryptographic methods to safeguard future digital communications and data integrity.

Calls to Action

For Governments and Large Industries: Initiate and enforce policies requiring that security by design is a fundamental criterion in the procurement of ICT services and products. This shift will drive broader adoption of secure practices throughout the technology ecosystem, contributing to a safer internet environment.

,

For the Technical and Academic Community: Collaborate on research and development of quantum-resistant cryptographic techniques. This collective effort is crucial to prepare our digital infrastructure for the arrival of scalable quantum computing technologies.

Session Report

Session Content
Introduction and Opening Remarks:
Carina Birarda opened the session by highlighting the escalation of cybersecurity incidents globally and the importance of international standards in countering these threats.

Presentations:

Wout de Natris focused on IoT security from a consumer and procurement perspective, emphasizing the necessity of security by design in IoT devices and the role of governments and industries in demanding secure ICT services.
Carlos Martinez Cagnazzo discussed the security of critical internet infrastructure, detailing the deployment of DNSSEC and RPKI and the broader implications for routing and domain name resolution security.
Maria Luque explored the implications of quantum technologies on cybersecurity, addressing the vulnerabilities of current cryptographic systems and the potential of quantum computing to disrupt security protocols.
Olga Cavalli shared insights from the perspective of governmental cybersecurity strategies, focusing on capacity building and public policy in Argentina, and highlighted the unique challenges faced by developing countries.
Discussions and Q&A:
The discussion session facilitated by Carina Birarda allowed panelists to delve deeper into strategies for advancing cybersecurity across different sectors and the need for a unified approach to tackle emerging challenges. Topics of particular interest included the adoption of security by design, the implementation of international standards, and the potential impact of quantum technologies on global security.

Key Outcomes:
Consensus on Security by Design: There was unanimous agreement on the necessity of integrating security features at the design phase of IoT and other critical technologies.
Call for Collaborative Action: The need for collaborative efforts among governments, industries, and the technical community was emphasized to enhance the adoption of best practices and standards.
Focus on Quantum Preparedness: The discussions underscored the urgency of preparing for quantum technological advancements by developing quantum-resistant cryptographic methods.
Conclusions:
The session concluded with a strong call to action for all stakeholders to enhance cybersecurity measures, adopt robust security protocols, and prepare for the challenges posed by quantum computing. The insights shared will contribute to the formulation of comprehensive guidelines and best practices for securing cyberspace.

IGF 2023 Networking Session #64 Worldwide Web of Youth: Cooperation for Enlightenment

Updated:
Digital Divides & Inclusion
Key Takeaways:

1. Youth engagement is one of the key drivers of Internet Governance: young people are an open-minded and enthusiastic generation, able to listen to opinions from all over the world and take real action.

2. Enlightenment projects in the IT field are needed in the Global North no less than in the Global South, because they help share approaches from all over the world.

Calls to Action

1. Involve more tech youth in the IG field via education projects and NRIs.

2. Create a more inclusive space for online participants of IT conferences, so that more people can share their views on IG topics.

Session Report

Worldwide Web of Youth: Cooperation for Enlightenment gave young people the opportunity to present their projects aimed at involving youth in the field of internet governance, developing digital literacy and bringing young people into IT.

Pavel Pozdnyakov presented the Summer School on Internet Governance, the “Digital Reality” discussion club, the CC Youth Council and a special course for young people held on the eve of the Russian Internet Governance Forum. Pavel also added that the Russian IGF traditionally concludes with a youth session, where young people present Russian projects related to internet governance and speakers from other countries share their experience in involving young people in the field of internet governance.

The Center for Global IT Cooperation spoke about its youth projects, in particular about the Youth Internet Governance Forum (Youth RIGF), which has been held every year since 2021, as well as about the initiative born on the sidelines of this Forum - the Institute of Youth Digital Ombudsman.

Marko Paloski and Shradha Pandey explained how to get involved in the ISOC community and what opportunities that opens up. They pointed out that it is a great platform for starting to learn about Internet Governance and joining IG movements, as well as for presenting one’s own projects in digital literacy or related fields, with a supportive youth community always ready to help.

All of the speakers advised starting projects within one’s own country or region. They also encouraged more tech youth to get involved in the IG movement, and called on the IGF itself to make online participation more meaningful for everyone.

IGF 2023 Day 0 Event #79 A Global Compact for Digital Justice: Southern perspectives

Updated:
Global Digital Governance & Cooperation
Calls to Action

We need to move the GDC forward in a manner that grapples honestly and boldly with its implementation challenges – how principles and rules can and must address inequality and injustice in the digital paradigm. Anything less will only embolden the few corporations and countries that desire to keep the status quo. This is untenable and will be unacceptable.

Session Report

The inequality of the digital economy presents an urgent challenge to development and democracy. If Agenda 2030 is to be realized, bold and committed action is needed to a) share the benefits of digitalization with all countries and peoples, b) govern digital resources democratically, and c) make digital policies and laws fit for catalyzing innovation that counts. The ultimate test for a well-guided digital transition is in the public and social value it can create, and the human freedoms it can expand. The political declaration adopted at the High-Level Political Forum on Sustainable Development in September 2023 rightly alludes to the participation of all countries in the digital economy. Its focus on infrastructure, connectivity, and the affirmation of the digital rights of people is noteworthy. The Global Digital Compact (GDC) will need to carry this consensus forward, attending to the particularities of our common digital future.

The 2023 Internet Governance Forum (IGF) pre-event in Kyoto on ‘A Global Compact for Digital Justice: Southern Perspectives’ was proposed by the Global Digital Justice Forum, the Dynamic Coalition on Platform Responsibility, and the Dynamic Coalition on Internet Rights and Principles to explore the central question: how can we build a GDC that furthers digital justice, especially in the majority world?

The event brought together speakers from governments and civil society in a multistakeholder dialogue structured in an innovative ‘BUILD IT, BREAK IT, FIX IT’ format.

The BUILD IT round delved into the promise of the GDC to fix global governance deficits in digital cooperation, as seen from the prism of the intergovernmental organizations in charge of the World Summit on the Information Society (WSIS) action lines, governments, and civil society representatives. The following speakers made inputs during this round.

  • Amandeep Singh Gill, UN Secretary-General's Envoy on Technology

  • Regine Grienberger, Cyber Ambassador, German Federal Foreign Office

  • Shamika N. Sirimanne, Director, Division on Technology and Logistics, United Nations Conference on Trade and Development (UNCTAD)

  • Alison Gillwald, Executive Director, Research ICT Africa

  • Renata Avila, CEO, Open Knowledge Foundation

The session began with inputs from the UN Tech Envoy, Amandeep Singh Gill, who affirmed the idea of building, through the GDC, a shared vision and a global framework for digital governance that is negotiated by governments but open to participation by regional organizations, the private sector, and civil society. He emphasized the need to a) shape a transition away from a solutions orientation towards ecosystems and infrastructures for digital development, and b) go beyond the connectivity paradigm and shift attention towards digital public infrastructure to create inclusive innovation spaces that focus more on capacity.

Regine Grienberger, Cyber Ambassador from the German Federal Foreign Office, began by acknowledging the continued digital gap/divide and its significant impact on the SDG process and suggested that this be an important focus of the GDC. Grienberger also advocated for the consultative process to take a local/national-to-global approach, and emphasized the need to engage in more cross-regional discussions, especially on issues like artificial intelligence (AI). Additionally, she made the critical observation that the GDC process needs to be anchored in the basic tenets enshrined in cornerstone UN documents, such as the Human Rights Charter.

In her input, Shamika Sirimanne from UNCTAD observed how the gains of connectivity have been skewed, with a few transnational corporations and nation-states being able to embrace the digital revolution optimally while others lag behind. Given that the structural inequalities in the digital order compound the effects of other inequalities, we are confronted increasingly by a digital inequality paradox, where, as more people are connected, digital inequality is amplified. In this context, Sirimanne underscored that the GDC process had an imperative to go beyond the connectivity paradigm and bridge the gap between actors who possess the technological and financial resources needed to harness the digital and those who don’t. She outlined the need for quality and affordability of access, skilling opportunities to navigate the digital economy, and equal participation of countries in the global regime to shape the rules of the game so that the opportunities of the digital paradigm could be reaped more equitably.

Meanwhile, Alison Gillwald from Research ICT Africa took as the starting point for her input the most pressing global challenges of our time, including the climate crisis and widening inequality, not least digital inequality. These need to be addressed through a collective and collaborative renewal of the social contract, anchored in human rights and gender equality, in order to rebuild trust and social cohesion and enhance digital inclusion. Like Sirimanne, Gillwald observed that the layering of advanced digital technologies over underlying structural inequalities compounds the effects of digital inequality, especially in regions with glaring infrastructure and capacity deficits like Africa. In this regard, she noted that the GDC process needed to focus on infrastructure and digital public goods.

The concluding input of the round came from Renata Avila from the Open Knowledge Foundation, who argued that, for many countries of the Global South contending with a severe debt crisis and lack of resources, decisive action that could address the geopolitics of global inequality and injustice was the top priority. Avila emphasized an urgent need for financing and international commitments for the development of digital infrastructure, skills, and regulatory capacities for all countries to navigate the terrain, as well as renewed commitments from international financial institutions towards these goals. Additionally, she pointed to the unmet promise of knowledge equality and the trend of knowledge capture of think tanks, academia, and civil society by Big Tech. In this regard, she held up reform of the IP regime as an important agenda item for the GDC to take up.

The BREAK IT round, in turn, critically interrogated the efficacy and effectiveness of the proposals in the GDC across its various dimensions, focusing on information disorder, AI and human rights, reining in Big Tech power, guaranteeing a free and open internet, and IGF reform for effective digital governance mechanisms at the global level. The following speakers made inputs as part of this round.

  • Helani Galpaya, CEO, LIRNE Asia

  • Alexandre Costa Barbosa, Fellow for the Weizenbaum Institute and Homeless Workers Movement - Technology Sector, Brazil

  • Nandini Chami, Deputy Director, IT for Change

  • Megan Kathure, Afronomicslaw

  • Dennis Redeker, University of Bremen and Digital Constitutionalism Network

Helani Galpaya from LIRNE Asia noted in her critique of the GDC process that several developing countries, faced with an immense fiscal squeeze, focused their resources on basic development needs and were unable to spare attention for digital governance issues, which compromised dialogue and involvement in the process overall. Galpaya also highlighted the inability of the GDC to address the disparity of national regulations on critical issues such as taxation, and to grapple with the unacknowledged reality of a highly digitally fragmented landscape, which made consensus building a difficult proposition. Additionally, she pointed out the failure of the multilateral system to hold its own member states accountable for draconian digital laws and policies that are harmful to citizen rights, something that the GDC process had not really taken into account.

In his input, Alexandre Costa Barbosa from the Weizenbaum Institute and the Homeless Workers Movement - Technology Sector, Brazil, focused on the key aspect of sustainable digital public infrastructure (DPI) and the lack of clarity around the concept. In the absence of a multistakeholder dialogue or collective definition, this important aspect of the GDC is in danger of being defined and captured by a Big Tech spin on the discourse, rather than allowing for the possibilities of interoperable, open, and accessible DPIs that are locally responsive. Barbosa additionally pointed to the silence on the critical issue of labor and contended that the GDC process must include more discussion of this topic, in particular its connections to generative AI.

Nandini Chami from IT for Change, in her critique, underscored how the aspirations of the WSIS seem to have been forgotten and waylaid in the GDC processes. She further observed that the reduction of data rights to privacy, as current discourse is prone to do, simply erases data extractivism, which continues to be the fault line of geopolitical and geo-economic power. In this context, the GDC process does not fully recognize that rights in data extend to people’s claims over data resources and their right to collectively determine how value is generated from digital intelligence.

Pointing to the inversion of basic rules of the marketplace in the way Big Tech controls public functions, recasts society and citizens into individual users and consumers, and squeezes labor in transnational AI chains, Chami urged the audience to push back against the silent consensus that Big Tech cannot be regulated. She called for political commitment to begin the change and for member states to measure up in this regard.

Meanwhile, Megan Kathure from Afronomicslaw observed that the historical choices in internet governance that had enabled the rise of Big Tech had also given rise to a narrative of ‘limits of multistakeholderism’ in bringing forth a global digital constitutionalism. She stressed that the fundamental issue with the current GDC process is that it risked entrenching the regulatory dilemma of global governance of the digital and affirming this narrative. In her input, Kathure highlighted two gaps in the current GDC process. The first is that it failed to acknowledge the complementarity of rights with state duties and simply expected states to refrain from certain actions without enshrining correspondent duties. She argued that the GDC must go beyond taking multilateral commitments from states and corporate actors and needed to outline a regime of consequences for inaction, thus dealing head on with the realpolitik of global digital governance. Second, Kathure observed that the GDC process did not conceptualize human rights holistically and discussed the fact that current proposals did not capture the indivisibility of human rights adequately.

In the concluding input for the round, Dennis Redeker from the University of Bremen and the Digital Constitutionalism Network highlighted emerging findings from research on how the general public in various countries viewed the consultative process. Redeker highlighted the discrepancies between the agendas that dominated and those that people held as important and wanted more involvement in, and pointed to a consensus among the general public in favour of reduced private-sector involvement in policy processes.

In the FIX IT round, the session rounded up responses to the issues raised in order to conclude with a forward-looking roadmap on what the GDC needs to foreground to further an inclusive, people-centered, development-oriented digital future. The following speakers made inputs as part of this round.

  • Ana Cristina Ruelas, Senior Program Specialist, United Nations Educational, Scientific and Cultural Organization (UNESCO)

  • Anriette Esterhuysen, Senior Advisor, APC

  • Prapasiri “Nan” Suttisome, Project Officer, Digital Rights, Engage Media

  • Emma Gibson, Global Coordinator, Alliance for Universal Digital Rights for Equality Now

  • Luca Belli, Professor, Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro

Ana Cristina Ruelas from UNESCO highlighted the regulatory efforts undertaken by UNESCO for a new platform society. Ruelas observed that a lot of ground still needs to be covered in the local-to-global regulation of social media platforms and their algorithms. Additionally, she pointed to the fact that no single actor can solve all issues and proposed the idea of a regulatory framework of networks, which would allow stakeholders to take a more interconnected approach to digital governance.

Anriette Esterhuysen from APC urged stakeholders to look at the existing norms and principles in the digital space as a starting point. She also held that the GDC was not being meaningfully informed by the current state of digital inequality and urged for this tokenism to be challenged. What is to be put at the center is not the techno-fascination of the corporate narrative but a people-created and people-controlled narrative. Esterhuysen called for a feminist and radical vision of digital transformation in this regard. She stressed the importance of granular data and public statistics to allow for a clear cognizance of the depth and breadth of economic injustice and the uneven distribution of opportunities associated with the digital.

Prapasiri “Nan” Suttisome from Engage Media, in her input, pointed out how powerful countries use free trade agreements to stifle the digital rights of peoples and countries in the Global South. Trade rules are used to arm-twist governments into hyperliberalizing data flows, take away the local autonomy of public authorities to govern transnational corporations and their algorithms, prevent the scrutiny of source code, and legitimize a permanent dependence of developing countries on the monopoly corporations controlling data and AI power. This kind of infrastructural dependence is tantamount to a neo-colonial order, and Suttisome observed that unless the indecency and impunity of some actors in the digital space is countered, and countered now, any compact is bound to fail.

Meanwhile, Emma Gibson in her input presented the work being undertaken by the Alliance for Universal Digital Rights (AUDRi) for Equality Now and called for the adoption of a universal digital rights framework, rooted in human rights law and underpinned by an intersectional feminist perspective. The GDC needs to be a feminist process to be truly transformative. She presented the nine principles developed by AUDRi, based on equal protection from persecution, discrimination, and abuse; equal access to information, opportunity, and community; and equal respect for privacy, identity, and self-expression.

In the concluding input, Luca Belli from FGV presented three structural challenges that make the GDC process ineffective. Belli pointed to a fragmented landscape, which goes beyond geography and extends to the trend of taking siloed regulatory approaches to digital issues; the presence of outsized political and economic interests that play against policy strategies (for instance, between the private sector and domestic governments); and the fact that, for the private sector, the bottom line of shareholder interest always trumps the public interest, making regulatory compliance a challenge at all times. By way of remedies, Belli suggested moving the GDC forward in a manner that grapples honestly and boldly with its implementation challenges.

 

IGF 2023 DC-DAIG Can (generative) AI be compatible with Data Protection?

Updated:
AI & Emerging Technologies
Key Takeaways:

AI transparency and accountability are key elements of sustainable AI frameworks, but different stakeholders and policy debates define and interpret such concepts in heterogeneous fashion.

Most AI governance discussions focus on, and are led primarily by, developed countries. The Data and AI Governance (DAIG) Coalition has proved to be one of the few venues with a strong focus on AI in the Global South.

Calls to Action

The DAIG Coalition will keep promoting the study of key data and AI governance issues, such as algorithmic explicability and observability, which are critical to achieving sustainable policy frameworks.

The DAIG Coalition will maintain and expand its focus on Global South perspectives, striving to increase participation from African countries.

Session Report

Session report: Can (generative) AI be compatible with Data protection?

IGF 2023, October 10th, 2023, WS 10 - Room I


The session explored the tension between the development and use of AI systems, particularly generative AI systems such as ChatGPT, and data protection frameworks. The DC aims to present a diverse set of views, in the spirit of multistakeholder debate, from various sectors, countries, disciplines, and theoretical backgrounds.

Professor Luca Belli, Director of the Centre for Technology and Society at FGV Law School, opened and moderated the session. He discussed the concept of AI Sovereignty – “the capacity of a given country to understand, muster and develop AI systems, while retaining control, agency and, ultimately, self-determination over such systems”. Regulating generative AI involves a complex web of geopolitical, sociotechnical, and legal considerations, whose core elements compose the AI Sovereignty Stack.

Armando Manzueta, Digital Transformation Director, Ministry of Economy, Planning and Development of the Dominican Republic – gave insights on how governments can use generative AI in their infrastructure and public services. When an AI system complies with data privacy laws and has a transparent decision-making mechanism, it has the power to usher in a new era of public services that can empower citizens and help restore trust in public entities, improving workforce efficiency, reducing operational costs in the public sector, and accelerating digital modernization.

Gbenga Sesan, Executive Director, Paradigm Initiative, Nigeria – emphasized the role of existing data protection laws, but also how this ongoing discussion on generative AI opens an opportunity for the countries that do not yet have a data protection law to start considering introducing one to regulate mass data collection and processing. There is also a need to de-mystify AI and make it more understandable to people. Sesan also pointed out that there is a lack of diversity in the models of generative AI like ChatGPT, as well as a need to establish review policies or mechanisms when they deal with information on people.

Melody Musoni, Policy Officer at the European Centre for Development Policy, South Africa – spoke on how African countries are taking steps to carve out their position as competitors in the development of AI. There is a need for AI to solve problems specific to the African region; the digital transformation strategy, for example, showed the urgency for Africa to start looking into AI and innovation to develop African solutions. The speaker also mentioned setting up data centers through public-private partnerships.

Jonathan Mendoza, Secretary for Data Protection, National Institute of Transparency, Access to Information and Protection of Personal Data (INAI), Mexico - explores current and prospective frameworks, giving a descriptive account of ongoing efforts to promote transparency and accountability. Due to the diverse nature of the population in the Latin American region, generative AI can pose a threat, and therefore a policy for processing personal data must be in place. There is also a need to balance the ethical design of AI models with their implementation, making these models more inclusive and sustainable while reducing potential threats.

Camila Leite, Brazilian Consumers Association (Idec) - explored the general risks of AI for the Brazilian consumer population. Financial and mobility services can benefit immensely from generative AI; however, there have been instances in which the output from generative AI was found to be manipulative or discriminatory, or to violate people’s privacy. It is important to put consumer rights and protection at the heart of policies regulating generative AI.

Wei Wang, University of Hong Kong - elucidates the disparate conceptualizations of AI accountability among various stakeholders in China, thereby facilitating an informed discussion about the ambiguity and implementability of normative frameworks governing AI, specifically generative AI. China takes a sector-specific approach, in contrast to the comprehensive one seen in the EU, UK, etc., and has established measures to comply with sectoral laws and intellectual property laws.

Smriti Parsheera, Researcher, CyberBRICS Project, India - discusses the why and how of transparency obligations, as articulated in the AI governance discussions in India and select international principles. She argues that the need for transparency permeates through the lifecycle of an AI project and identifies the policy layer, the technical layer, and the operational layer as the key sites for fostering transparency in AI projects.

Michael Karanicolas, Executive Director, UCLA Institute for Technology, Law and Policy - argues for the need to develop AI standards beyond the “auspices of a handful of powerful regulatory blocs”, and calls for the inclusion of the Majority World into standard-setting processes in international fora.

Kamesh Shekar, Senior Programme Manager, Privacy & Data Governance Vertical, The Dialogue - argues for a principle-based approach coupled with a detailed classification of AI harms and impacts. He proposes a detailed multistakeholder approach that resonates with the foundational values of responsible AI envisioned by various jurisdictions geared toward ensuring that AI innovations align with societal values and priorities.

Kazim Rizvi, Founding Director, The Dialogue - spoke about domestic coordination of regulation, followed by international coordination. Alternative regulatory approaches can also be explored through public-private partnerships.

Giuseppe Cicu, PhD Student at the University of Turin and corporate lawyer at Galgano Law Firm - spoke about a framework to regulate AI by Corporate Design, fitting business management and AI governance concerns together into a step-by-step implementation process, from strategic planning to optimization. He provided a game plan for responsible AI by bringing transparency and accountability into the organizational structure of the firm and having a human in the loop. The approach is grounded in the global human rights framework and privacy policies. He suggests that corporations introduce an ethical algorithmic legal committee.

Liisa Janssens, LLM MA, scientist in the Military Operations department, Defence, Safety and Security unit, TNO (the Netherlands Organisation for Applied Scientific Research) - provides a focused responsible AI framework for military applications, developed through a scenario-setting methodology for considering the virtues and shortcomings of AI regulation. The disruptive nature of AI is considered in the face of the demands of rule-of-law mechanisms, to trace the requirements that make up the responsible use of AI in the military.

Comments and questions: What are the key privacy principles at a normative level (e.g., transparency, data minimisation, purpose limitation) that should be ensured so that generative AI can comply with them? Will data protection laws expand their scope to include non-personal data, since most of the data used to train generative AI is non-personal?

IGF 2023 DCPR A new generation of platform regulations

Updated:
Key Takeaways:

1. Need for Platform Regulation: Professor Yasmin Curzi and Professor Luca Belli have consistently stressed the urgent need for the regulation of digital platforms. The DCPR, for nearly 10 years, has been a prominent entity in advancing research and championing actionable solutions. Their comprehensive studies highlight the significance of digital platforms on democracy, markets, and human rights.

2. Emphasis on Transnational Dialogues: Professor Belli accentuates that mere regulation isn't sufficient. A deeper understanding of systemic risks requires global conversations that consider the unique aspects of local contexts. The DCPR has concentrated on various legislative frameworks, such as those in Brazil, India and China, as well as EU regulations, to appreciate how these platforms influence and adapt within different environments.

Calls to Action

Importance of an open, accessible internet governed by multiple stakeholders, encompassing gender equality, children's rights, sustainable development, and environmental aspects. All entities, from governments to the private sector, must utilize these principles as benchmarks for their internet governance. Platform governance discourse also needs to delve into the substantive concerns that platforms pose, such as their environmental and labour impacts.

Session Report

The Dynamic Coalition on Platform Responsibility (DCPR) session at the Internet Governance Forum (IGF) 2023 provided an invaluable forum for discussing the multifaceted challenges and opportunities in digital platform governance. This session was marked by insightful dialogues among experts from diverse fields, reflecting the DCPR's commitment to fostering a multi-stakeholder approach in addressing the complexities of platform regulation.

Key Highlights and Discussions:

  1. Professors Luca Belli and Yasmin Curzi, DCPR coordinators, highlighted the decade-long commitment of the DCPR in researching and addressing the challenges posed by digital platforms. They stressed the importance of not only acknowledging the necessity for platform regulation but actively engaging in research and practical solution-seeking.
  2. Professor Belli underscored the need for fostering transnational dialogues to address systemic risks presented by digital platforms. The session delved into legislative frameworks in countries like Brazil, India, China, and the European Union, emphasizing the need for context-sensitive regulation.
  3. Tatevik Grigoryan from UNESCO introduced the concept of Internet Universality, advocating for a global approach to internet governance based on principles of openness, accessibility, multi-stakeholder participation, and addressing cross-cutting issues.
  4. Samara Castro highlighted Brazil's proactive stance in social media regulation and misinformation control, discussing legislative, executive, and judiciary efforts. Brazil's experience serves as an inspiration for other nations in creating a safer, transparent internet.
  5. Anita Gurumuthy and Monika Zalnieriute emphasized the need to go beyond procedural principles and address substantive concerns like platforms' environmental impact and labor effects, calling for a holistic approach in platform governance.
  6. Rolf Weber emphasized the importance of accountability beyond compliance and the necessity of observability in platform governance, suggesting a model where platforms are transparent and answerable in their operations.
  7. Shilpa Jwasant provided an in-depth analysis of the Indian context, focusing on the recent developments in the IT Act. She highlighted how the Act is shaping the digital landscape in India, discussing its impact on user rights, data privacy, and the regulatory challenges faced by digital platforms operating in India. Jwasant’s insights into India’s regulatory approach underscored the balance between harnessing technological advancements and protecting fundamental rights.
  8. Sofia Chang delved into the Chinese scenario, particularly the country’s approach to algorithmic regulation. She elaborated on how China is navigating the complex interplay between technology, state control, and user rights, offering a unique perspective on how algorithmic governance is evolving in a highly digitized society that prioritizes digital sovereignty.
  9. Monika Zalnieriute brought a critical lens to the discussion on informational pluralism on social media platforms. She raised concerns about the private and opaque interests of big tech companies, emphasizing the need for greater transparency and accountability in how these platforms manage and disseminate information. Zalnieriute argued for a more equitable digital ecosystem that respects diversity of thought and counters the monopolistic tendencies of major tech firms.

The session benefitted from active participation from in-person attendees, who provided feedback and posed questions, enriching the discussions. Their contributions highlighted the global interest in developing effective platform governance models and underscored the need for inclusivity in these dialogues.

Conclusion: The DCPR session at IGF 2023 successfully facilitated a comprehensive exploration of digital platform regulation, stressing the importance of a multi-stakeholder, inclusive approach. The discussions and calls to action from this session are expected to guide future strategies and policies in the realm of platform responsibility.

IGF 2023 DC3 Community Networks: Digital Sovereignty and Sustainability

Updated:
Digital Divides & Inclusion
Key Takeaways:

There are different dimensions of sustainability, environmental sustainability being one of them. Community networks provide added value beyond connectivity: local services and content, and the promotion of a circular economy.

Calls to Action

Raise awareness about CNs in urban areas; they are valuable not only for remote rural areas. One option is to build community networks to support existing local services, instead of providing connectivity first and adding services on top.

Session Report

The IGF 2023 session of the Dynamic Coalition on Community Connectivity focused on the digital sovereignty and environmental sustainability aspect of community networks. Session participants provided their perspectives and best practices. Some panelists authored and co-authored the official DC3 outcome, a report titled "Community Networks: Building Digital Sovereignty and Environmental Sustainability".

The session started with the launch of the report, presented by Luca Belli. The report is a compilation of five different papers/chapters.

Atsuko Okuda from ITU Asia Pacific opened the session, presenting some ITU statistics on the state of connectivity in the region and globally.

Raquel Gatto from CGI Brazil spoke about community networks initiatives in Brazil and the newly formed working group within Anatel, the telecom regulator.

Amreesh Phokeer from the Internet Society presented some of ISOC's community network initiatives, and provided insights on their environmental impact.

Pedro Vilchez from guifi.net presented guifi's efforts to incorporate circular economy into their project.

Nils Brock from Rhizomatica / DW Academy spoke about using local materials such as bamboo for building towers, and the positive impact on the environment that comes with the use of local resources.

Carlos Baca from Rhizomatica presented the initiative of National Schools on Community Networks and how capacity building contributes to environmental sustainability.

In his closing remarks, Luca Belli highlighted the link between community networks and digital sovereignty.

 

IGF 2023 Open Forum #57 Procuring modern security standards by governments&industry

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

1. Modern internet standards (such as IPv6, DNSSEC, HTTPS, DMARC, DANE and RPKI) are essential for an open, secure and resilient Internet that serves as a driver of social progress and economic growth. Such standards have been developed, but their use needs to increase significantly to make them fully effective. Procurement policies have proven to be an effective means of ensuring that these standards get traction and are used more widely.

2. Not using modern standards is a risk for the individual internet user. However, users are often not aware of this (because standards are "under the hood"), and there are economic network effects that prevent users from fully benefiting immediately ("first-mover disadvantage"). Research by IS3C has shown that public-private partnerships can play a crucial role in creating the transparency and awareness needed to reach critical mass.

Calls to Action

1. To governments and TLD registry operators: Monitor the usage of modern internet security standards (such as IPv6, DNSSEC and RPKI) in the public sector and in society. For this, they can make use of open source tools such as https://Internet.nl and even extend them (e.g. tests for Universal Acceptance and for accessibility). Such tooling provides transparency, helps end-users articulate their demand, and creates an incentive for vendors to comply.

2. To governments and industries: Publish procurement policies regarding modern internet security standards. These can be reused by others when creating procurement policies. Furthermore, vendors could use these as requirements for their software and systems. The list of the most important internet security standards created by IS3C (https://is3coalition.org/) can be used as a reference (consultation until 5 Nov 2023).

Session Report

Moderator Olaf Kolkman introduced this Open Forum by elaborating on the role of modern security standards in securing the internet. He emphasized that we need to secure the internet for the common good. One of the challenges that comes with securing the internet is the slow adoption of security standards. Therefore, this Open Forum highlights tools that enhance the adoption of modern security standards.

The Role of Open Standards, Particularly in Procurement: Experiences in the Netherlands

Modern internet standards (such as IPv6, DNSSEC, HTTPS, DMARC, DANE and RPKI) are essential for an open, secure and resilient Internet that serves as a driver of social progress and economic growth. Gerben Klein Baltink and Annemieke Toersen explained the role of standards in procurement and their experiences in the Netherlands. The role of open standards in promoting a safer, more secure, and well-connected internet has become increasingly recognized, with initiatives like the internet.nl test tool contributing significantly to this progress. The tool is primarily aimed at organizations, attracting both technical personnel and board members, and allows them to assess whether their mail, website, and local connections comply with established standards.
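
Internet.nl runs a far richer battery of tests than can be reproduced here; purely as a hypothetical sketch of the kind of DNS-based checks involved, the snippet below uses the third-party dnspython library to test whether a domain publishes DNSKEY records (a DNSSEC prerequisite) and a DMARC policy. The example domain is a placeholder, and this is not the internet.nl codebase.

    import dns.exception
    import dns.resolver

    def has_dnssec_keys(domain: str) -> bool:
        # A zone that publishes DNSKEY records has at least set up DNSSEC signing.
        try:
            dns.resolver.resolve(domain, "DNSKEY")
            return True
        except dns.exception.DNSException:
            return False

    def has_dmarc_policy(domain: str) -> bool:
        # DMARC policies are published as a TXT record at _dmarc.<domain>.
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
            return any("v=DMARC1" in rdata.to_text() for rdata in answers)
        except dns.exception.DNSException:
            return False

    if __name__ == "__main__":
        domain = "example.org"  # placeholder domain
        print(f"{domain}: DNSKEY published: {has_dnssec_keys(domain)}")
        print(f"{domain}: DMARC policy published: {has_dmarc_policy(domain)}")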

In the procurement and supply chain management domain, the Forum Standaardisatie think tank has been actively promoting the use of open standards, advocating for more interoperability. With 25 members from government, businesses and science, the forum advises governments on the adoption of open standards, emphasizing their importance in promoting information exchange, ensuring interoperability, security, accessibility and vendor neutrality.

The Dutch government has pursued a three-fold strategy to promote open standards. Firstly, through the implementation of a "comply or explain" list of 40 open standards, carefully researched and developed in consultation with experts. This has led to increased adoption, particularly in areas such as internet and security, document management and administrative processes, like e-invoicing. Government entities are mandated to use these standards, with reporting required if they are not followed.

Secondly, the government has fostered national and international cooperation, facilitating workshops on modern email security standards within the EU, and engaging with prominent vendors and hosting companies such as Cisco, Microsoft, and Google. They have also facilitated the reuse of internet.nl code in various projects, such as aucheck.com and top.nic.br.

Finally, the Dutch government actively monitors the adoption of open standards, evaluating tenders and procurement documents, and ensuring that the standards are included. Reports are submitted to the government, and efforts are made to support and guide vendors who may be lagging behind in the adoption of these standards.

Lessons learned from these efforts emphasize the importance of consistently checking for open standards in procurement processes and providing guidance and support to encourage their usage. The comprehensive approach taken by the Dutch government, along with collaborations with various stakeholders, has contributed significantly to the wider adoption and implementation of open standards, fostering a more secure and interconnected digital environment.

Procurement and Supply Chain Management and the Business Case

Wout de Natris and Mallory Knodel elaborated on the role of the Internet Standards, Security and Safety (IS3C) dynamic coalition in enhancing internet security and safety through various initiatives. The coalition has established three working groups, targeting Security by Design on the Internet of Things; Education and Skills; and Procurement and Supply Chain Management and the Business Case, aiming to contribute to a more secure online environment.

Their ongoing projects involve the deployment of DNSSEC and RPKI, exploring emerging technologies, and addressing data governance and privacy issues. They strive to persuade decision-makers to invest in secure internet standards by developing a persuasive narrative incorporating political, economic, social, and security arguments. The Procurement and Supply Chain Management and the Business Case working group has released a comprehensive report comparing global procurement policies, shedding light on existing practices and advocating for more transparent and secure procurement processes.

The coalition highlights the need for greater recognition and integration of open internet standards into government policies, emphasizing the importance of universal adoption of standards for data protection, network and infrastructure security, website and application security, and communication security. They aim to provide decision-makers and procurement officers with a practical tool that includes a list of urgent internet standards to guide their decision-making and procurement processes.

By focusing on streamlining and expediting the validation process for open internet standards in public procurement, the coalition seeks to enhance procurement policies, resulting in more secure and reliable digital infrastructure. Overall, their collaborative efforts and initiatives aim to create a safer online landscape for individuals, organizations, and governments by promoting the secure design and deployment of internet standards and advocating for the adoption of open internet standards in government policies.

The report from is3coalition.org highlights a concerning trend where governments fail to recognize the critical components that enable the internet to function effectively. This issue has been a recurring question in various research endeavors, prompting the Working Group (WG) to prioritize and compile existing security-related internet standards and best practices in the field of ICT.

Best-practice awards go to: the GDPR in the European Union, which provides common understanding and harmonization with regard to the security of information systems; the Dutch Ministry of the Interior and Kingdom Relations, which makes standards deployment mandatory; the ‘Pas toe of leg uit’-Lijst (comply-or-explain list) of the Dutch Standardisation Forum, a document containing 43 open standards that all governments in the Netherlands have to demand when procuring ICT; and Internet.nl, the tool used to track standards adoption by an organization’s website based on three indicators: website, email and connection. The software has been adopted in Australia, Brazil, Denmark and Singapore.

IS3C provides decision-takers and procurement officers involved in ICT procurement with a list containing the most urgent internet standards and related best practices. This assists them in taking internet security and safety requirements into account and procuring secure-by-design ICT products, services and devices, making their organizations as a whole more secure and safer. By raising awareness and emphasizing the significance of internet security and safety requirements, the report seeks to prompt officials to consider and integrate these crucial standards into their operational frameworks.

To gather insights and perspectives on this critical issue, the coalition is conducting a consultation on the report until November 5th at 10:00 UTC. This consultation aims to engage stakeholders and experts to discuss and address the challenges associated with the recognition and implementation of internet security standards by governments.

Report: https://is3coalition.org/docs/is3c-working-group-5-report-and-list/

Perspectives from India

There are many examples of good efforts and effective tools enhancing internet security. One of these examples comes from India. Mr. Satish Babu highlighted that the Trusted Internet India Initiative was initially established at the India School of Internet Governance (inSIG) in 2016 and has since 2018 been collaborating with the Global Forum for Cyber Expertise.

InSIG organized GFCE’s Internet Infrastructure Initiative (Triple-I) workshop in 2018, 2019, 2022 and 2023 as Day 0 events of inSIG. The Triple-I workshop seeks to “...enhance justified trust in the Internet” by building awareness and capacity on Internet-related international standards, norms and best practices. In its 2023 edition, the Triple-I workshop announced a new initiative that will periodically measure the compliance of Indian websites, DNS and email services with modern security standards (to begin in 2024).

During the T3I workshop, it was emphasized that digital technology plays a crucial role in fostering India’s growth. The digital public infrastructure, which serves over a billion citizens, facilitates applications related to financial health, logistics, and more. However, the workshop shed light on the existing weak levels of compliance within these systems. In response to this observation, volunteers associated with T3I conducted extensive research to identify areas of improvement.

Building on their research findings, the initiative now plans to conduct comprehensive testing and disseminate the results to all stakeholders. The aim of this effort is to enhance compliance levels across Indian digital platforms, ensuring that they meet modern security standards and contribute to a safer and more secure digital environment. 

Perspectives from Brazil

Mr. Flavio Kenji Yanai and Gilberto Zorello shared their experiences from a Brazilian perspective. The Brazilian Network Information Center (NIC.br) is a non-profit civil entity that since 2005 has been assigned the administrative and operational functions related to the .br domain. NIC.br is actively investing in various actions and programs to improve internet services across different sectors. Their initiatives are geared towards disseminating knowledge and best practices, contributing to a safer and more secure internet environment in the country.

A key project they are currently undertaking is the TOP Teste os Padrões (Test the Standards) tool, which was initiated in December 2021 and utilizes Internet.nl provided by the Dutch government. As part of the Safer Internet program, their objectives include providing support to the internet technical community. This involves collaborating with various groups to develop technical teaching materials and promote good practices aimed at raising awareness within the technical community. Their efforts have yielded positive results, as statistics indicate a reduction in misconfigured IP addresses.

Furthermore, they have implemented the Mutually Agreed Norms for Routing Security (MANRS) in Brazil, leading to a notable increase in the number of participants. The statistics reflect continuous improvements in various aspects of internet security within the country. With significant incumbents responsible for approximately 50% of the internet traffic in Brazil, the implementation of version 1.7 of internet.nl, currently in the validation phase, has been instrumental. The tool is being widely disseminated in conjunction with the Program for a Safer Internet, with government entities also starting to use it to test their websites and email services. The TOP tool has proven to be of immense value in fortifying the internet infrastructure in Brazil.

IGF 2023 WS #109 The Internet in 20 Years Time: Avoiding Fragmentation

Updated:
Avoiding Internet Fragmentation
Key Takeaways:

There is a level of Internet fragmentation today, which manifests at the technical, regulatory and political levels. There is a chance, however, to act upon present and future fragmentation and design the future Internet we want. We should consider incentives (where economics has played a central role), think about how to find convergence, design a future Internet for people, and be ready for this debate to be shaped by geopolitics and the climate crisis.

To get to a best-case future scenario, we should take an incremental, interactive approach to devising solutions (including regulation); our actions should have a compass and be principles-based (openness and permissionless innovation emerged as central guiding principles); we should strive for inclusivity in governance and standards, take guidance from human rights frameworks, and engage actively in difficult areas where there is tension or “chaos.”

Session Report

This workshop proposed to discuss Internet fragmentation through a forward-looking exercise. The session opened with the moderator inviting the panel and audience to think of the Internet in 2043, what good would look like, and what it would take to fulfil the hoped-for future we want.

The panellists started off by sharing their thoughts on what it entails to imagine the future, based on past experience.

Olaf Kolkman from the Internet Society highlighted that it is hard to predict the future and which technologies will triumph, exemplifying this with his erroneous prediction that webpages would not go beyond academic libraries. Sheetal Kumar from Global Partners Digital spoke about the ubiquity of smartphones and connectivity as a crucial development and, in looking to the future, encouraged the audience to think about what we want the Internet to feel like; she believes the internet will continue to grow in embeddedness and finds that how the internet evolves will depend on what Internet we choose to create. French Ambassador for Digital Affairs Henri Verdier, who created his first web-based company in the 90s, shared a story about how he erroneously predicted that Wikipedia would fail to take off. Professor Izumi Aizu from Tama University mentioned that we are oftentimes overly optimistic about the future, which in reality may be composed of different shades and colours; the future is bound to surprise us with unpredictable events like Fukushima or the unfolding conflict in Gaza. Lorraine Porciuncula from the Datasphere Initiative spoke of being a digital native and of the optimism felt during the Arab Spring. She recalled the sense of opportunity and “capability” brought by technology. Time showed that there are good and bad aspects to technology, yet she encouraged the audience to reconnect with a sense of optimism.

The moderator introduced the discussion paper submitted as part of the session (https://dnsrf.org/blog/the-internet-in-20-years-time-what-we-should-hav…) which lays out three potential future scenarios:

  • Scenario 1: Continued Status Quo. In the first scenario, we muddle along, continuing the current course of action, and end up with an internet that continues on its present trajectory with some signs of fragmentation;
  • Scenario 2: Fully Fragmented Internet. The second scenario is one of complete fragmentation, divided at the technical, ideological or regulatory layers, or all three;
  • Scenario 3: Strengthened, Non-Fragmented Internet. The third scenario is one of a bright future where we get our act together.

The moderator invited the panel and audience to comment on what they see as the most likely future and why, and at what layer they see the most risk.

Olaf said that, in reading the scenarios, he was struck by how the future is already here. Many of the things described in the scenarios, such as drivers for the fragmentation of the technical layers of the Internet, are already happening, and if they take off, they will splinter the internet. He explained that the value he sees in the Internet lies in its openness, the scientific method of sharing knowledge, and being able to probe, query and scrutinise one another. He commented in particular on scenario 1, where we see a mix of closed networks coexisting with the Internet. This is about being proprietary, about the Internet being closed, about the Internet developing services that people pay for, where people connect to servers to access specific services and the interconnectivity is less important. This is an entirely different notion from the Internet that exists to connect us to the rest of the world, where we get to choose services. To Olaf, openness is the best-case scenario, where the richness of the Internet really lies.

The moderator took a round of early comments from the audience. 

  • Barry Leiba said that what has driven the evolution of the Internet is the innovation in applications and services. He therefore thinks that a great idea for an application (perhaps yet to come) is what will drive the Internet of tomorrow, including another set of standards and technologies. He highlighted the role of standards in shaping the way we will experience technology. 
  • Andrew Campling stated that we are also at an inflection point. Up to now, the Internet was seen as a force for good. He finds we are now at the point where the balance is shifting, with the Internet becoming a source of harm with the rise of disinformation and CSAM. Adding to the point on standards, he urged standards development organisations (SDOs) to become more diverse.
  • Michael Nelson from the Carnegie Endowment for International Peace came in next. He taught a class about internet future(s), where he highlighted to his students that the best way to understand what is coming in terms of technology is not to understand what the technology can do, or what governments want it not to do, but rather to look at what the users want. So we should ask ourselves, what will drive companies and governments to do better? He concluded by saying “I am a technology positivist but political negativist.”

The moderator returned to the panellists. Izumi described the first scenario, of mixed networks co-existing with the Internet, as a scenario of chaos. He consulted a number of AI tools on the subject of the panel and shared the findings with the audience. ChatGPT said that, while there is fragmentation due to economic and political reasons, the ethos of the Internet as a tool for global communication will likely persist. Bard was even more optimistic and said the Internet might become even more unified. He challenged the audience to think of a better internet not for the sake of the Internet itself, but for the sake of a better society, which is a different perspective on how to understand the Internet.

Lorraine, on the other hand, said that in her view we will not have an issue of fragmentation around the Internet’s technical layers, but we will have a very concrete challenge on the regulatory side. This issue is reflective not only of the fragmentation of the Internet, but of the fragmentation of society. She urged the audience to consider “how are we (as societies) going to get along? What are the incentives?” Regulators will regulate what they are scared of: they want to control national security, democratic processes, content, and so on. So when talking of regulatory-driven fragmentation, the question becomes “How will we work to find convergence?”

Ambassador Verdier said that he is uncertain what scenario will materialise, but that he knows what we should fight for. We know what the Internet brought us in terms of possibilities. Now there is great centralisation, if you look for example at submarine cables. He finds that big tech does not care for a decentralised internet, and that “we need to fight for that interconnected, free, decentralised internet.” He also reflected on John Perry Barlow’s notion of Cyberspace (https://www.eff.org/cyberspace-independence), where the Internet felt like it was somewhere far off in “cyberspace”. Now the digital is embedded in all aspects of life: education, health, and even war and peace. He finds that fragmentation of the technical layer would be an extremely bad scenario, as now interdependence holds it all together. If the Internet were to fully fragment, the temptation to disconnect each other’s internet would be very high, and war would be waged on infrastructure itself. So far we have cyberwarfare, but no attempts to disconnect internets. Beyond the technical layer, there is a political and legal layer. From a legal point of view, he believes it would be better to have regulatory convergence, but if you believe in democracy, you need to respect regulatory proposals that are reflective of local prerogatives, as is the case in France.

Sheetal came in next and said she finds that we have the capacity to build and design our own future, even though there are power asymmetries to be aware of. She picked up on the notion of how the Internet of the future should feel: it should feel liberating, especially to those who do not occupy those positions of power. She hopes for a future Internet that does not reflect the inequalities of our society. This will require that those who build the technologies and develop the standards, open up spaces to those communities affected by technology developments. In terms of what we should do, she highlighted “we know exactly what we need to do, we just don’t do it at the moment.” There are many useful tools and guidance on how to build a better, human-rights-respecting Internet. We should utilise and leverage those in shaping the Internet of tomorrow.

The audience came in with a new round of comments:

  • Web 3 and money. Georgia Osborn picked up on money being a huge incentive on the Internet, and currently money being a massive driver for the development of blockchain technologies, Web 3.0, alternative naming systems, and cryptocurrencies. She asked the panel to reflect on whether those forces are bound to further fragment the Internet, or not.
  • Interoperable laws. Steve del Bianco from NetChoice highlighted the impact of fragmentation through regulation, and stated that regulation is the main challenge we will confront, one that is already unfolding. There appear to be no costs or consequences for governments, particularly authoritarian governments that want to control what their citizens see. He highlighted how IGF 2023 was largely about AI, but not about collaboration. “We have been hearing competing views about how it should be regulated and where it needs to go. That is not going to work transnationally.” He encouraged the audience to think of ways of documenting the cost of fragmentation, and raising the “pain level” for bad regulatory proposals.
  • Bertrand de La Chapelle from the Internet and Jurisdiction Network also spoke about legal interoperability. He said that fragmentation is not driven by technical objectives but by politics. The legal fragmentation is a reflection of the international political system, which today is heavily influenced by notions of national sovereignty. Legal fragmentation is what prevents us from dealing with online abuse in many cases; the framework for accessing electronic evidence is non-existent or insufficient. He agreed with Ambassador Verdier that countries have a “democratic freedom/capacity” to do what they deem right for their citizens, but if we want to preserve interoperability we need to reduce the friction at the legal level. He also thinks we need heterogeneous governance frameworks that allow the coexistence of government regulation, companies’ self-regulation, and other frameworks that operate independently yet are able to speak to and with one another.
  • Involvement of the global south and regions with ideological disagreement. Nikki Colosso from Roblox came in next. She pointed out how a lot of the conversation at IGF 2022 dealt with incorporating the global south and inclusivity. She asked the panel what specific steps companies and civil society can take to involve users from countries that are not represented in these conversations, or from countries where there are differences from a geopolitical perspective.
  • Digital Colonialism. Jerel James picked up on the issue of profit as an incentive. Money is how power gets flexed on certain communities. He asked about digital colonialism and how it might be sanctioned. Noting that antitrust regulation for monopolies exists in the traditional finance system, he asked whether there are possibilities to sanction resource extraction by big tech as a means to stop digital colonialism.
  • Bad behaviour in the online realm. Jennifer Bramlet from the UN Security Council spoke next. She focuses on how bad actors exploit ICTs for terrorism, including their use to recruit and radicalise individuals. Her team looks, from a regulatory perspective, at what is considered unlawful and harmful language across jurisdictions. Looking to the future, they are concerned about crime and terrorist activity in the metaverse, and how it may be tackled going forward when regulation has not yet caught up with the online criminal challenges we see today. Her question to the panel was how to deal with bad behaviour in the online realm.
  • Call not to lose sight of the social value of the Internet. Vittorio Bertola came next. He believes Europe is producing regulation precisely to preserve the global nature of the Internet, not to break it. Also, if the future of the Internet is decided by what people want from it, people want entertainment and social media attention. If we focus only on that, we lose sight of the social purpose of the technology. Doing things just because we can, or because of money, is not enough.

Ambassador Verdier responded first by saying he shares Bertrand’s aspiration of interoperable legislation. But while we can work on making progress in that direction, we are not there yet. France is fighting for regulation of big tech, which it sees as a private space built on the Internet. In his view, “you can do that and still protect the global internet.”

Sheetal elaborated on what we can do. On legal fragmentation, she expressed the need for harmonisation. She finds we have human rights standards to guide us, as well as the rule of law and our institutions, and we can use those standards and guidance for shaping the online space. She seconded the need to protect the openness of the Internet and the ability to build your own apps and technology, and to protect the critical properties of the Internet, which goes hand in hand with the need to make standards bodies more inclusive. She encouraged all participants to take the conversation home, to ensure that we are vocalising the values we want reflected in the Internet of tomorrow, and ensuring that those get executed. She concluded with an invitation: “Let's not be nostalgic, let’s look forward.” That requires giving users control, and not letting governments or companies determine what the future is about.

Izumi reacted to Vittorio and Bertrand. He agreed that the future of the Internet depends on the will of people and users, and that it also depends on legal frameworks. He wanted to add two further factors that are unknown: climate and politics. We may get together, hosted by the UN, in 20 years’ time, independently of how politics plays out or who wins what war. Climate change, however, is an existential threat; we may think it is a factor external to the Internet, but it may well shape the future of the Internet, and it may even lead to war. In the 1940s, we killed each other a lot. We then had the Cold War, and then came the Internet. Perhaps the timing was right, as the East and West were open to coming closer together. That political will is what allowed the Internet to get picked up. China wanted technology and science; that is why China accepted the Internet, to have growth, innovation and technology. Now China and India have reached the point where they do not need the West anymore. He concluded by inviting us to think not of the Internet of the future: the question has to be how the present and the future will offer something better for society.

Lorraine picked up on notions of what the Internet can do for people. She highlighted that narratives matter, so it is not about the Internet, but about the digital society. Now, when we reflect on “what is our vision for the Internet? what do we want the Internet to feel like?”, she finds that we do not have a clear, shared vision. If the issue were walled gardens, we could use antitrust and competition tools to allow users to move to other platforms. But the truth is that with the Internet, one government cannot fix it all, so it is all about governance. We need to focus on asking ourselves “how do we cooperate? how do we govern? What are our economic and social objectives?”

Olaf concluded by explaining that not having infrastructure at all is the ultimate fragmentation. Empowered communities are the way forward, such as IXPs and community networks; that is truly bottom-up. He also added thoughts on standardisation. When you talk about economics and standardisation, standardisation is to a large extent industry driven and industry politics; we need to put that on the table and understand it. With economics, consolidation happens: even if you have open technologies, companies will try to extract money from using those open technologies. And you will have an accumulation of power to the point where governments might say this is too much, and want to regulate it. But we need to remember that you do not need standards for every innovation. The creator of blockchain did permissionless, open innovation (he did not innovate via standards-making bodies). Innovation happens today, not just in standards organisations. If you ask me, from a technical perspective, where to go in the future, I say: open architecture, so that people build on the work of others; open code, so that it can be reused; and open standards.

There was a last round of comments from the audience:

  • Yug Desai, ISOC Youth Ambassador, thinks that 20 years from now we will have fragmentation, not by design, but by default, due to capacity gaps. He finds that standards are unable to keep up with the pace of innovation, and are not sufficiently inclusive of users.
  • Mark Dattysgeld highlighted the importance of open source and the role of research in driving AI. He said we should ask ourselves whether that is the new paradigm that takes things forward. This point was reinforced by Lucien Taylor with the example of TCP/IP.

The session wrapped with final recommendations from the panel about what to do next:

Raul Echeberria from ALAI finds we already have a level of internet fragmentation, and we need to live with that. The incentives of policy makers are diverse, and not always driven by the search for the best outcomes for all. Our mission has to be protecting the Internet. In terms of what to do, his proposal is to go for “gradual objectives and commitments, instead of going for the whole package.” In sum, he suggests an incremental approach. He also said that, in speaking to policy-makers, we need to make our messages sharper and clearer, and better outline what governments should not do. Lastly, he shared that he had recently participated in a discussion with parliamentarians, all of whom were over 50 years old. They spoke about fears, but he stressed that it is important not to develop policies based on fear, and not to let fear stop evolution.

Lorraine reiterated the points heard so far, being clear on what the objectives are and being incremental, and added being iterative. There is no ultimate regulation that will get it right, so we need to test and iterate. The system is hard to predict and it moves fast; we need processes and institutions that are more agile. As in software development, we need to identify the bugs and have multi-stakeholder conversations to address them. True multi-stakeholderism works when it seeks to be inclusive in an intentional way, particularly of communities that are underrepresented.

Ambassador Verdier added he thinks we can agree on a compass. In his view, we should stand for 3 aspects of the Internet’s golden age: unprecedented openness and access to information, which to date has not been fully accomplished as we still have a digital divide; unprecedented empowerment of communities and people; and permissionless innovation. He reiterated that fragmentation can come from the private sector, not just rogue states.

Olaf emphasised the point of the compass, saying our work needs to be principles-based. We need to differentiate between evolution OF the Internet and evolution ON the Internet. We can get to those shared principles if we talk of the evolution OF the Internet. When we talk about empowerment, individualism and autonomy ON the Internet, it gets more complicated to arrive at shared principles.

Sheetal added that we need to assess how governments do regulation, and how companies operate from a human rights perspective. Are they human rights respecting? Is there accountability and transparency? Are our governance and standards bodies inclusive? She summarised her points as: protecting critical properties as they evolve, adopting a principles-based approach, building on the human rights framework, and creating more inclusive spaces.

Lastly, Izumi highlighted that there were no Chinese or Indian representatives in the high-level session on AI, which to him is telling of the level of fragmentation that already exists. It was not like that 18 years ago; now we have fears. He encouraged the audience to go out into the world of chaos, to engage where there is tension, and to think outside the box.

IGF 2023 DC-IoT Progressing Global Good Practice for the Internet of Things

Updated:
AI & Emerging Technologies
Key Takeaways:
When using IoT devices and services, strong identification becomes key to protect them from tampering. This identification may be between devices, for instance those that together provide a service or form a so-called “cyber-physical system” such as a car, a house or an airplane. When the identification is between people and devices, there need to be sufficient measures in place to ensure privacy by default.

With the ongoing growth of IoT deployment throughout our world, scaling issues are important to consider. Going forward, two design imperatives need to be taken on board: (1) security by design - every device needs to be protectable (and updatable when needed); and (2) every device needs to be as carbon neutral as possible (as there will be many of them, including those that depend on power).

Calls to Action

Require appropriate security measures for IoT devices that can be handled by those that use them, and ensure appropriate labelling (dynamic for those devices that are software-updatable) to make it possible for users to assess the risks and take the necessary measures.


Set global standards for this, as it concerns devices that are developed all over the world, and are deployed all over the world. National/regional initiatives will need to take global good practice into account.

Session Report

IGF 2023 DC-IoT Progressing Global Good Practice for the Internet of Things

The session considered IoT governance from various perspectives. To understand baseline IoT evolution, associated challenges, opportunities and responses, the IoT could best be understood as an internet of data, devices, systems or functions. For simplicity, we can call these “Internets of X” (IoX). Each perspective brings its understanding of what is possible, desirable or undesirable and tools and processes needed for governance.

Each approach must be considered in its own terms, but they start from a common base of experience and must ultimately come together to provide good governance. This leads to the need for an ecosystem comprising stakeholders such as technical experts, governments, service providers, manufacturers, users, standards bodies, and military and civilian organisations, varying in global and regional perspectives.

One immediate consequence is that IoT governance must respect a range of perspectives. Our fundamental principles are unlikely to be universal, especially when applied to specific IoT contexts. By analogy with the sensors and actuators of the IoT itself, governance needs to ‘sense’ the interests and perspectives of all significantly affected parties and somehow balance them to inform decisions at various levels. In other words, it requires multistakeholderism. It is not that specific expert groups (e.g., engineers) are insensitive to the needs of others (e.g., end users) but that they may misunderstand their interests, capabilities and behaviour.

The session began with a consideration of simple and recognisable use cases in which major challenges can already be seen (though they will become more complex). IoX components and their complex or hybrid assemblages will and should interact with others, so they must be identified uniquely and discovered with appropriate levels of precision, reliability, and permanence and be capable of enrolment in or separation from IoX systems. The concept of ‘identity’ has some subtlety. For instance, a smart home must be able to recognise and be recognised by new IoT components added to the system on a permanent or temporary basis, accorded the right kinds of access and privileges and tracked or remembered appropriately. These identities enable necessary functions, including the granting of trust. But they need not be unique, durable or universal. Indeed, categorical or shared identities (e.g., type certification) may be more practicable, scalable, flexible, future-proof, secure and robust to, e.g., (hardware, software or data) updates and interconnection or federation to create identifiable hybrid systems. Three subtleties linked to identity that came up in the discussion were security (including but not limited to cybersecurity), privacy (including but not limited to data privacy) and ownership (including protections against identity theft or misuse and, conversely, the use of identity to carry liability or responsibility).
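
Purely as an illustration of the categorical-identity idea discussed above (all class, field and policy names below are invented for this sketch and do not come from the session), a smart-home registry might grant privileges on the basis of a shared type certificate rather than a globally unique per-device identity, while still allowing enrolment and separation of individual devices:

```python
from dataclasses import dataclass, field

@dataclass
class TypeCertificate:
    """Categorical identity shared by all devices of the same certified type."""
    vendor: str
    model: str
    security_profile: str  # e.g. "updatable" or "sealed" (illustrative labels)

@dataclass
class Enrolment:
    device_id: str                 # locally scoped; need not be globally unique
    cert: TypeCertificate
    privileges: set = field(default_factory=set)
    temporary: bool = False

class SmartHomeRegistry:
    """Grants access based on a device's type certificate, not its individual identity."""

    # privileges attached to security profiles rather than to individual devices
    POLICY = {"updatable": {"telemetry", "actuate"}, "sealed": {"telemetry"}}

    def __init__(self):
        self._enrolled = {}

    def enrol(self, device_id: str, cert: TypeCertificate, temporary: bool = False):
        privileges = self.POLICY.get(cert.security_profile, set())
        self._enrolled[device_id] = Enrolment(device_id, cert, privileges, temporary)

    def separate(self, device_id: str):
        # removal must remain possible for functional, security or privacy reasons
        self._enrolled.pop(device_id, None)
```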

Various identity schemes were discussed, ranging from central registries of semi-permanent discrete identities (along the lines of the DNS model) to purely transactional or temporary mutual authentication and identification schemes. These have advantages and drawbacks ranging from theoretical to practical, including technical, legal, commercial, security and other considerations. No single approach seemed to fit all foreseeable circumstances. In placing these in context, the panel recognised that the same concepts applied to the human beings (and organisations) that create, operate and use the IoX. For example, a person is more important than devices or data attributed to him/her, and human rights and responsibilities (e.g., of association and expression) cannot safely be extended to, say, their smart digital assistants. This cuts two ways; it may not be useful to hold a human being accountable for what their devices do in response to interactions with other systems, which the ‘user’ may not even perceive, let alone understand or control. Conversely, the automation of routine functions may result in their receiving less considered and responsible human attention, with unintended, undesirable and possibly irreversible results.

The discussion also considered desirable properties that might provide an ethical framework for IoT governance. Many are familiar, e.g., interoperability, transparency and accountability, robustness, resilience, trustworthiness, user empowerment, privacy and security. They are not IoT-specific but may need to be reinterpreted in that context. For example, IoT devices can harvest a wide range of data almost invisibly, which creates general privacy and security risks and affects global development, e.g., via ‘data colonialism’ whereby devices originating in and provisioned by the global north can be used to capture data from users in the global south to produce innovations for the benefit of the north and to lock in users in the south in ways that inhibit their techno-societal development.

One desideratum, scalability, came up in relation to technologies, service provision, use cases, data issues, labelling and certification schemes, and legal frameworks. This is a generic issue, but the panel highlighted aspects that stand out clearly in the IoT context. One is complexity: as systems scale quantitatively, their qualitative properties may change and, with them, the appropriate kind of governance. Rules may need to be more general, neutral, principles- or function-based. Alternatively, governance may need to move between the data, device, software, etc., planes as systems interconnect in larger and more diverse ways. Another is practicability: effective governance may require limits on scale or interoperability. A further aspect is Quality of Service (QoS). The IoT-specific emphasis on low latency can constrain system scale, security or flexibility. Beyond this, QoS considerations may lead to multi-tier systems, which may reduce economic welfare, hinder interoperability or distort innovation.

Large-scale systems may also be more susceptible to intentional or accidental compromise; effective access control in large environments may lead to inappropriate inclusions or exclusions. Under laissez-faire evolution, IoT systems may reach stable sizes and configurations, but these may not be optimal. Finally, very large systems may be difficult to govern with national or self-regulatory arrangements. For example, identification and certification schemes that identify individual devices or types scale with their number, but cannot identify even pairwise interactions (which scale as the square of the number of interacting entities). As scale increases, management overloads, costs increase, and utility and use eventually decline. This, however, depends on the governance architecture: a centralised system (analogous to the cloud) offers economies (or diseconomies) of scale and a natural platform for observing systemic behaviour and emergent threats (if not weak signals). However, it creates additional power asymmetries and vulnerabilities; no single governance architecture will likely fit all cases. The group also mentioned other aspects of scale, such as environmental impact.
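
To make the quadratic-scaling point concrete, the following minimal Python sketch (illustrative only, not drawn from the session) compares the number of identities, which grows linearly with the number of devices, with the number of possible pairwise interactions, which grows as n(n-1)/2:

```python
from math import comb

# Identities scale linearly with the number of devices; possible pairwise
# interactions scale as n * (n - 1) / 2, i.e. roughly with the square of n.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} devices -> {n:>6} identities, {comb(n, 2):>12,} possible pairs")
```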

Another aspect that ran through the various phases of the discussion was trust and trustworthiness; beyond the customary discussion of e-trust, the panel contrasted high-trust and Zero-trust approaches to the problems of identification and interoperability.

The issue of AI in the IoT came up often, but not in depth. The panel recognised that it complicates the IoT, especially when considering smart devices and the emergent intelligence of connected systems. Foreseeability and explicability were discussed, as was the possibility that data-driven systems might be particularly vulnerable to noisy or biased data.

The panel considered various legal approaches and the ‘regulatory game’ being played out among countries, industries and civil society groups. Governance competition could spur the development of innovative and effective standards if different approaches can be compared and a suitable global standard emerges through a kind of ‘Brussels Effect’. This seems more promising than a too-rapid imposition of global standards and regulations whose implications cannot be foreseen. However, this result is not guaranteed; we could see damaging fragmentation or a rich diversity of approaches matching different contexts. Research on policy initiatives in 40 countries around the world shows that governments often do not regard modern global open source standards and global good practices with security at the core as “important”. It was suggested that governments could lead the way by actively taking such standards on board in their procurement activities. Keeping the discussion going and actively engaging with other DCs will help secure a positive outcome and an increased understanding of good global practices in IoT governance. Three important takeaways:


  • IoT data, especially AI-enhanced, should be understandable, accessible, interoperable, reusable, up-to-date and clear regarding provenance, quality and potential bias.
  • At the level of devices, there need to be robust mechanisms for finding, labelling, authenticating and trusting devices (and classes of devices). These should survive retraining, replacement or updating but be removable when necessary for functional, security or privacy reasons. To ensure IoT functionality, trustworthiness and resilience, market information and incentives should be aligned. Labels provide a powerful tool; many countries have developed and adopted IoT trust marks, and the time has come to start working towards their international harmonisation.
  • Functions are not all confined to single devices, designed in or provided by system integrators; they can also be discovered by end-users or emerge from complex system interactions in cyber-physical systems (CPS) and IoT-enabled services. Governance requires methods for recognising, protecting and controlling these functions and their impacts.


IGF 2023 DCNN (Un)Fair Share and Zero Rating: Who Pays for the Internet?

Updated:
Avoiding Internet Fragmentation
Key Takeaways:

Large platforms generate enormous amounts of traffic but, at the same time, they contribute to network infrastructure costs, e.g. by building undersea cables or content delivery networks. The most traffic-intensive platforms have been zero-rated for almost a decade by most operators of the world, including those currently proposing “fair share” contributions, and in most Global South countries zero-rating models are still very common.

Calls to Action

A more comprehensive analysis of the interconnection market is needed, including an assessment of the role of content delivery networks. Increased multistakeholder dialogue is needed to foster a better understanding of the issues at stake and of whether proposed solutions such as fair share are de facto needed or not.

Session Report

 

The purpose of this session was to explore the so-called “fair share” debate, which is rising in popularity, especially in the European Union and South Korea, and moving rapidly to Latin America. The session also discussed the connection between fair share and zero-rating schemes, which are especially popular in the countries of the Global South.

 

The session adopted an evidence-based approach, featuring multiple stakeholder perspectives and discussing to what extent fair share and zero rating can be beneficial for the internet economy and whether they contribute positively or negatively to the sustainability of the internet ecosystem.

 

Furthermore, panelists explored how these two core issues connect with the broader debate on Internet openness vs Internet fragmentation. The session was structured according to the following agenda:

 

Brief intro by Luca Belli, Professor and Coordinator CTS-FGV (5 min)

 

First slot of presentations (6 or 7 minutes each)  

  • Artur Coimbra, Member of the Board of ANATEL, Brazil
  • Camila Leite, Brazilian Consumers Association (IDEC)
  • Jean-Jacques Sahel, Asia-Pacific Information Policy Lead and Global Telecom Policy Lead, Google
  • KS Park, Professor, Korea University
     

Q&A break (10 to 12 minutes)

 

Second slot of presentations (6 or 7 minutes each)

  • Maarit Palovirta, Senior Director of Regulatory Affairs, ETNO
  • Thomas Lohninger, Executive Director, Epicenter.works
  • Konstantinos Komaitis, non-resident fellow, the Atlantic Council  

 

 

Participants stressed that, over the past decade, we have witnessed increasing concentration in a few internet platforms, with regard to social media or cloud computing, and such players generate a very relevant percentage of internet traffic. There is a wide range of ongoing regulatory initiatives aimed at framing large platforms, but over the past two years an additional type of regulatory proposal has been surfacing: imposing network fees on large platforms so that they pay their “fair share” of network-related costs.

 

In countries such as Brazil, 95% of users utilize internet access primarily for instant messaging and social media (e.g. WhatsApp, Facebook and Instagram are installed on 95%, 80% and 70% of Brazilian smartphones respectively), and virtually all video recordings shared online in Brazil are hosted on YouTube.

 

Large platforms generate enormous amounts of traffic but, at the same time, they contribute to network infrastructure costs, e.g. by building undersea cables or content delivery networks.

 

The most traffic-intensive platforms have been zero-rated for almost a decade by most operators of the world, including those currently proposing “fair share” contributions, and in most Global South countries zero-rating models are still very common.

 

Some relevant points that need debate and clarification:

 

1) Large platforms generate a lot of traffic because they have a lot of customers, not because they engage in any illegal or inappropriate practice. It is true that in most countries they have extremely low levels of taxation compared with their profits, but to cope with this distortion it would be much wiser to review their taxation regime rather than simply shift part of their revenues to internet access providers.

 

2) Some regulators or operators have portrayed large platforms as free riders on internet infrastructure. This is not correct, as platforms also invest enormously in infrastructure, e.g. by building submarine cables and large content delivery networks that are essential to maintaining good quality of service and a good user experience.

 

3) Participants stressed that the topics of fair share and zero-rating are connected, as large platforms have not become responsible for such enormous amounts of traffic by chance: the most traffic-intensive apps have been zero-rated for almost a decade by most operators of the world, as demonstrated by an empirical analysis that was the annual output of this coalition already in 2018.

 

Actions suggested:

 

A more comprehensive analysis of the interconnection market is needed, including an assessment of the role of content delivery networks.

 

Increased multistakeholder dialogue is needed to foster a better understanding of the issues at stake and of whether proposed solutions such as fair share are de facto needed or not.

IGF 2023 DC-PAL Public access evolutions – lessons from the last 20 years

Updated:
Digital Divides & Inclusion
Key Takeaways:

There is an increasing disconnect between trends in connectivity and real-world outcomes, even on the basis of the limited data that we have. There is a strong need to invest in stronger data collection as a basis for meaningful internet governance decision-making


Public access, as a multipurpose means of helping people make the most of the internet, has proven itself as an adaptable and effective means of achieving people-centred internet development. It has proved its worth when faced with shocks, in allowing engagement with new technologies, and as a means of localising digital inclusion policies

Calls to Action
The full potential of public access, as a way to address the decoupling of progress in extending connectivity from broader social progress, needs to be part of internet strategies going forwards
Session Report

Evolutions in Public Access

 

It has been 20 years since the WSIS Action Lines were defined, setting out the importance of connecting libraries and providing multifunctional public access centres. This fitted into a broader strategy focused on finding rapid and effective ways of bringing the potential benefits of the internet to more people, while acknowledging the importance of a focus on people in order to turn this potential into reality.

 

The introduction to this session therefore set out the question of how public access as a concept has evolved over the past 20 years, as a basis for assessing its continued relevance and understanding how its place in the wider internet infrastructure has changed. It drew on written contributions shared by UNESCO and the Internet Society in particular, which noted that public access had been proven not to compete with other forms of access, that libraries had proven to be adaptable and responsive, that public access had been a basis for service innovation and partnership, and that the fact of offering other services made libraries particularly valuable as public access venues.

 

Maria Garrido and Matias Centeno (University of Washington) set out the challenge faced, based on data collected as part of the Development and Access to Information report. Crucially, this underlined that good progress in general in bringing people online was not being reflected in other areas seen as vital for making access to information meaningful, in particular around equality and fundamental rights online. This illustrated the potential weaknesses of a tech-only approach.

 

Ugne Lipekaite (EIFL) offered a rich set of evidenced examples of how public access had proven its ability to help solve wider policy challenges, as well as its ongoing essential role in working towards universal connectivity. It had, indeed, been a driver of entrepreneurship and growth.  Crucially, many of the same trends could be observed in very different parts of the world, opening up possibilities for mutual learning in terms of how to develop public access most effectively.

 

Woro Titi Haryanti (National Library of Indonesia) described how public access was at the heart of a national strategy to develop library services as a means of improving lives. Centrally, the emphasis was on ensuring connectivity, providing adaptable content and building staff skills in order to develop programming that could combine public access with other support (including via partners). Thanks to this work, the library was increasingly seen as a partner for wider social development programming.

 

Don Means (Gigabit Libraries Network) underlined that libraries were often early adopters of new technology, providing a means for people not just to get to know the internet, but also new ways of working with it. They had also proven their role in connecting online services with users, for example to ensure that those needing to use eGov services were able to do so. They also offered a crucial backstop of parallel access technology, which boosted resilience.

 

The audience was then asked to share views via Mentimeter. They underlined their agreement with the idea that public access had a key role in the connectivity infrastructure and in future strategies, as well as broadly believing that public access complements other forms of connectivity.

 

 

Key themes that emerged in the discussion included:

  • Public access had proved a structure for delivering on the promise of the localisation of the internet and digital inclusion efforts in particular. Rather than a purely tech-led, supply-side approach, public access centres allowed supply and demand to meet effectively and inclusively.
  • The definition of meaningful access in general needed to include access to meaningful support services for those who needed them in order to make the most of the internet.
  • It was important to develop wider internet resilience strategies, in order to keep things going in times of disaster. Public access was a key part of this.
  • We needed to change the narrative about libraries in particular, and recognise (inside the library sector and outside) their role as agents for digital inclusion.
IGF 2023 Town Hall #134 The Digital Knowledge Commons: a Global Public Good?

Updated:
Data Governance & Trust
Key Takeaways:

The digital knowledge commons make a key contribution to what the internet is, with strong potential for growth, through AI, opening collections, and more inclusive practices

Calls to Action

We need to stop regulating the Internet as if it was only made up of major platforms – this risks harming public interest infrastructures

Session Report

Safeguarding the Knowledge Commons

 

As an introduction to the session, the moderator underlined that while shared knowledge resources had initially been included in definitions provided of digital public goods, they were not such a strong focus of subsequent initiatives. In parallel, UNESCO’s Futures of Education report had placed the concept of a Knowledge Commons at the centre of its vision, seen as a body of knowledge which is not only accessible to all, but to which everyone can make contributions.

 

Finally, organisations working around knowledge had long promoted the importance of realising the potential of the internet to enable global access to knowledge, and address barriers created in particular by intellectual property laws.  

 

Tomoaki Watanabe (Creative Commons Japan) underlined the particular questions raised by new technologies, and in particular AI, thanks to the generation of new content that could potentially be free of copyright (3D data, scans, AI-generated content). This had the potential to create dramatic new possibilities that could advance innovation, creativity and beyond.

 

While there clearly were questions to be raised around information governance and AI (not least to highlight AI-generated content), copyright appeared to be a highly inadequate tool for doing this.

 

Amalia Toledo (Wikimedia Foundation) cited the connection between the concept of the knowledge commons and the need for digital public infrastructures that favoured its protection and spread – something that was ever more important. Wikimedia represented just such an infrastructure, but remained the only such site among the most used on the internet, with a constant risk of underfunding.

 

Moreover, laws were increasingly made with a focus on commercial platforms, but which caused collateral damage for non-commercial ones such as Wikipedia. Efforts to expand intellectual property laws brought particular risks when they failed to take account of the positives of a true knowledge commons.

 

Subsequent discussion highlighted the following issues:

  • The knowledge commons as a concept raised interesting questions about governance, and in particular how to ensure that it was inclusive and meaningful for everyone. There was a need for actors applying rules, such as Wikipedia and libraries, in order to make it functional and sustainable.
  • The need to look beyond copyright as a tool for regulating information flows, given how blunt a tool it is, and, particularly in the context of AI, to take care in making decisions. Too often, generative AI was mistaken for all AI, and policy choices risked imposing major costs even on research and education uses.
  • The value of a more holistic approach to upholding the knowledge commons in general, and the public domain in particular, in order to safeguard them and realise their potential to support wider efforts to ensure that the internet is a driver of progress and inclusion.
IGF 2023 Day 0 Event #161 Towards a vision of the internet for an informed society

Updated:
Digital Divides & Inclusion
Key Takeaways:

Importance of localization - if we want to promote an inclusive internet, we need to localize our approaches


Libraries are natural partners for any actor in the Internet inclusion space

Calls to Action

People should reassess their mindset about libraries and see them as tech test beds, key sources of content and community infrastructures

Session Report

As awareness grows of the limitations of a purely technological definition of connectivity, as well as of the complex economic, social and cultural implications of the increasing ubiquity of the internet, the need to find a way to realise the goal of a human-centred internet grows. This session drew on the experience of libraries around the world as institutions (staffed by a profession) focused on the practicalities of how to put people in touch with information, and to help them use it to improve their lives. 

Winston Roberts (National Library of New Zealand (retd)) set the scene, highlighting the place of libraries in the original WSIS Agenda, which of course included strong reference to connecting libraries and the value of multi-purpose public access centres. He highlighted that while 20 years had passed, the evolution of the internet had only underlined the importance of having institutions like libraries in order to support universal and meaningful use, as part of a broader approach to internet governance. Thanks to this, it was not only possible to deal with the worst excesses, but also to unlock some of the potential that the internet creates in order to achieve goals around education, social cohesion and beyond. 

Nina Nakaora (International School of Fiji) highlighted the work that libraries had done in particular during the pandemic in order to provide access to learning materials. Again, this illustrated the value of having actors in the wider internet system focused on ensuring that public interest goals were achieved, especially where the market was unlikely to create solutions. She highlighted that, at the same time, to play this role there was a need for libraries to benefit from investment in hardware, connectivity and skills to deliver this.

Rei Iwaski (Notre Dame University, Kyoto) reflected on the Japanese experience of providing information services through libraries. She echoed the point made by Nina Nakaora that this is a potential that can only be realised when libraries are integrated into wider planning. Their cross-cutting missions meant that they often did not fit easily into any one policy box, and also needed to build their own sense of agency as actors in internet governance.

Misako Nomura (Assistive Technology Development Organisation) highlighted the particular situation of users with disabilities. Once again, this illustrated the need to move beyond a laissez-faire approach, and to look at how to connect people with opportunities. Her work included both developing materials for persons with disabilities and ensuring access to technology and wider support. With an ageing population, finding ways to bridge accessibility gaps would be an increasingly important part of wider digital inclusion efforts, and so a strong and properly resourced set of institutions to do this would be essential. 

Woro Titi Salikin (National Library of Indonesia) again brought practical examples of the power of facilitating institutions such as libraries in helping people to make the most of internet connectivity in order to deliver real-world change, in particular focused on gender inclusion and supporting entrepreneurship. The Indonesian experience demonstrated that it was possible to make change happen at scale through the right balance of centralised support and local flexibility to adapt services to circumstances.

The subsequent discussion highlighted the following key points:

- the need to integrate libraries into wider strategies in order to realise their potential. Indonesia offered a strong example, with the close connection between the national library as coordinator of a wider network and central government. Elsewhere, this wasn't the case, and opportunities were being missed

- the fact that librarians too often lacked the sense of agency and skills necessary to fulfil their potential as facilitators of digital inclusion. The sector was at risk of remaining in traditional roles, especially when partnerships with other actors could not be formed. There was a need to build awareness of the responsibility that libraries have in the digital world

- the fact, nonetheless, that libraries do have a unique and flexible role in society which could be mobilised to support a wide range of different agendas

Collectively, the conclusions pointed to the need to reaffirm the role of libraries, both as a means of activating libraries and librarians themselves, and as a way to state the case for libraries as actors in internet governance processes and as partners for delivery. This is at the heart of IFLA's Internet Manifesto Revision, currently underway, to which all participants were invited to contribute.

 

IGF 2023 DC-CIV Evolving Regulation and its impact on Core Internet Values

Updated:
Avoiding Internet Fragmentation
Key Takeaways:

1. The Internet has been self-organising with as little regulation as possible for it to work, and if strong regulation is introduced it will hinder its technical functioning. Too much regulation will damage interoperation. As Internet networks evolve into space with no borders, there are question marks as to how its Core Values will be sustained.


2. One of the major policy tensions in digital life pits anonymity against accountability. Anonymity has been a key aspect of internet activity, but we have painfully learned that full anonymity can be exploited in ways that allow bad actors to escape being held accountable for the harms they cause. Systems must be developed to bring accountability without compromising essential anonymity - and layering identity levels is one way to do it.

Calls to Action

- The Internet community including the private sector, civil society, technical community should actively engage with governments to make them understand why a multistakeholder IGF is important.


- Use of encryption needs to continue - as without encryption many of the functions of the Internet's safety will be negatively impacted.

Session Report

 

DC-CIV Evolving Regulation and its impact on Core Internet Values

Report on the Internet Governance Forum (IGF) Session.

Main Report

The Core Internet Values, which comprise the technical architectural values by which the Internet is built and evolves, also comprise ‘social’ or, in other words, ‘universal’ values that emerge (or derive) from the way the Internet works.

The Internet is a global medium open to all, regardless of geography or nationality. It is interoperable because it is a network of networks. It does not rely on a single application; it relies on open protocols such as TCP/IP and BGP. It is free of any centralized control, except for the needed coordination of unique identifiers. It is end-to-end, so traffic flows from one end of the network to the other. It is user-centric, with users having control over what they send and receive, and it is robust and reliable.

The Dynamic Coalition on Core Internet Values has held sessions at every previous IGF. During the 2023 IGF in Kyoto, the Coalition discussed the topic of "Avoiding Internet Fragmentation" with "International Legal Perspectives" as the sub-theme, all part of this year’s “Internet We Want”.

The following questions were examined during the session:

  • In a changing world and a changing Internet, should the Internet stick to its Core Values?
  • Should more legislation be needed? If yes, then how should it be drafted?
  • What are the risks of "changing" the Core Internet Values for the future of the Internet?
  • Could we end up with fragmentation? With the end of the Internet as we know it?
  • Could we end up with a better, safer, cleaner post-Internet network of networks? Is this achievable or is this a pipe dream? Does this have an impact on democracy across the world?

Panelists included  Lee Rainie, Jane Coffin, Nii Quaynor, Iria Puyosa, Vint Cerf with interventions from the floor moderated by Sébastien Bachollet as Co-Chair at Kyoto together with Olivier Crépin-Leblond.

Deliberations

The deliberations during this meeting, comprising panelists' presentations, participant interventions and Q&A, are reported here without attribution to specific panelists or participants.

Broadly and roughly, there have been four notable 'phases' that could be seen as 'revolutions' in Internet evolution:

  • Home broadband. It sharply increased the "velocity of information" into people’s lives, bringing support for the way it democratized creativity, story-telling and community building. But it also spawned concern about misinformation, for example, in the medical community – and concern about the type of content to which children might be exposed. 
  • Mobile connectivity. Mobile phones became ubiquitous and became all-purpose “extra body parts and brain lobes” that allowed people to reach out and be contacted at any time, anywhere, without the need for knowledge on how to operate a computer. But a backlash grew about the ways in which phones disrupted people’s time use and attention allocation.
  • Social media.  Exposed users to new information and allowed them new ways to share their lives and create. The backlash has focused on the impact of social media on people’s emotional and mental health (especially for younger women and girls), the way social media can be used for information war purposes, enabled political polarization and tribalism, and menacing behavior like bullying and physical threats.
  • Artificial intelligence. Often functioning unnoticed and uncommented upon, AI allowed people to live their lives more conveniently, efficiently and safely. It promised productivity increases. But the backlash starts with people’s inherent wariness of anything that might challenge their rights, their autonomy and their agency. There are widespread concerns about job loss, bias and discrimination, and whether AI can be used ethically. 

It is worth noting that these and other concerns have mostly arisen at the level of applications, rather than the essential architecture of the Internet. Unfortunately, the concerns at the cultural, legal and social level usually drive policy deliberations that could limit the way the Internet functions.

Users almost unanimously support the Core Values of the Internet: open, free, secure, interoperable, end-to-end, permissionless innovation.

Beyond those general concerns about digital functions, there is evidence that different people have different experiences of those revolutions. Those group differences drive concerns and calls for further regulations. At the group level, it is clear that divisions by gender, age, race/ethnicity, class, nationality, religious affiliations, affect people’s online experiences. There are also divisions along the lines of people’s level of awareness and their knowledge about technology, and their traits cause them to experience and react to technology differently. 

To further complicate the picture, it is clear that individual technology users act in different ways under different circumstances. They are not necessarily predictable and their actions are often contingent, transactional, and context specific. This makes it very hard for those designing policies to take into account the variety of ways people will use technology or have concerns about its impact on them.

In global surveys and other research, there is a division that pits individuals against society. Individual actors are often confident that they can navigate the problems of information and communication ecosystems, but believe that others are incapable of doing so. That results in an almost universal sense that “I’m OK, but the rest of the world is not”.

How should policy makers understand that and take account of such an array of social, cultural, and legal variance as they try to think about regulations for the Internet? It is a chaotic picture that suggests that policy proposals affecting the basic functioning of the Internet should be undertaken with great caution and much humility.

The Internet has been self-organizing its network of networks with as little regulation as possible for them to work. There is a lot of support for this self-organization at the network level, even though in some cases the shared objective of developing networks for people who do not yet have access appears to have been lost.

Regulate

Caution is advised when facing pressure to “regulate fast... because some serious harm is upon us". Quick and ill-designed regulations may undermine online freedoms or lead to Internet fragmentation.

Before regulating, it is necessary to assess the tradeoffs of different policies as well as the suitable technical implementations of those policies.

Unfortunately, pressure to legislate is driven by public opinion on harms - often emphasized by governments to impose legislation. Law enforcement requests for access to private communications, national security, and cyber-sovereignty agendas dominate public debate in most countries.

The Internet will not be the same if it is run in a non open way - and we can see that with countries where there is a zeal to pass laws to "protect the interests of the regimes".

The intent behind such laws may originally have been laudable, but they may also have side effects.

For instance, we observe this problem in legislation threatening end-to-end encryption in the name of providing more safety for children online, legislation establishing widespread Internet surveillance under the pretext of rising concerns related to violent extremism, cyber-sovereignty agendas undermining net neutrality, and cybersecurity policies that pose a risk to interoperability.

Technical solutions to online harm must ensure respect for human rights and the rule of law in line with the principles of necessity and proportionality. Any restriction of access to the Internet must be lawful, legitimate, necessary, proportional, and non-discriminatory.

Civil society and the Internet technical community must continue collaborating in facing overregulation trends threatening Internet Core Values.

Some participants in the meeting pointed to further study of countries like Finland and Estonia, which have advanced in terms of e-government. It was also mentioned that the borderless nature of the Internet would expand with more widespread use of “satellite Internet” and Internet Exchange Points in space, thus bringing a new perspective on cross-border issues.

Key Takeaways 

  1. The Internet has been self-organizing with as little regulation as possible for it to work, and if strong regulation is introduced it will hinder its technical functioning. Too much regulation will damage interoperation. As Internet networks evolve into space with no borders, there are question marks as to how its Core Values will be sustained.
  2. One of the major policy tensions in digital life pits anonymity against accountability. Anonymity has been a key aspect of Internet activity, but we have painfully learned that full anonymity can be exploited in ways that allow bad actors to escape being held accountable for the harms they cause. Systems must be developed to bring accountability without compromising essential anonymity - and layering identity levels is one way to do it.
    Such systems must be designed with clear and minimal implications for deep architectural changes. A layered approach (possibly in the application layer) may be desirable. 

Call to Action

  1. All stakeholders should actively engage in understanding, appreciating, and expanding knowledge of the Internet’s Core Values and the damages that may arise from actions that, deliberately or as unintended consequences, impinge negatively on them. The list is not long, and it starts with layered architecture, packet switching, “best effort” (i.e. design for resilience against failure), interoperability, openness, robustness (Postel), end-to-end (meaning that most functions that are not packet transmission are a responsibility of the “edge”, implying network neutrality), decentralization, scalability, and, as a consequence, universal reach and “permissionless innovation”.
  2. Laws, norms, and treaties must all be commensurate with these values and only impinge on any of them after a deep analysis by all stakeholders, and with safety valves to avoid irreversible unexpected consequences down the road. 
  3. The Internet community, including the private sector, civil society, and the technical community, should actively engage with governments to make them understand why a multistakeholder IGF is important.
  4. Use of encryption needs to continue, as without encryption many of the functions that keep the Internet safe will be negatively impacted (see the sketch below).
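
As a small illustration of what is at stake, the following sketch uses the PyNaCl library (an assumed tool, not one named in the session) to show end-to-end style encryption: only the holder of the recipient's private key can read the message, and any intermediary carrying the ciphertext learns nothing about its content.

    # Minimal sketch of end-to-end style encryption using PyNaCl (pip install pynacl).
    # Only the holder of the recipient's private key can decrypt the message.
    from nacl.public import PrivateKey, SealedBox

    recipient_key = PrivateKey.generate()  # stays on the recipient's device
    ciphertext = SealedBox(recipient_key.public_key).encrypt(b"meet at the IGF at 10:00")

    plaintext = SealedBox(recipient_key).decrypt(ciphertext)
    print(plaintext.decode())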

 

IGF 2023 WS #209 Viewing Disinformation from a Global Governance Perspective

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

1. A more nuanced approach to disinformation is called for, which should not only focus on social networks or digital platforms but also consider the wider media landscape. Furthermore, more empirical research is needed to realistically assess the dangerousness of disinformation. We should not simply take for granted the effect of disinformation on people's thinking and (voting) behaviour.


2. There is not one global solution against disinformation that works in every instance or context, and it is unlikely that governments will agree on how to address it. What is needed, however, is a common set of principles that guides how we think of and act upon disinformation. Human rights and access to information must be front and center of such principles.

Calls to Action

1. Regional human rights courts need to be resourced in a way that they can function as mechanisms in the regulation of disinformation.


2. High quality journalism is an effective means against the impact of disinformation but faces an uncertain future. More work needs to be done to strengthen independent journalism particularly in countries with a high incidence of disinformation.

Session Report

 

Workshop Report  - IGF 2023 WS #209: Viewing Disinformation from a Global Governance Perspective

 

Workshop process

Part 1:

The workshop opened with the moderator asking participants to stand and gather along an imagined line on the floor in the room based on the extent to which they agreed or disagreed with the following statement: "Disinformation is undermining democratic political participation". The moderator then walked around the room and asked people to share their views and why they agreed/disagreed or stood somewhere in the middle. They were encouraged to shift their position if the discussion led to them rethinking their initial opinion.

Views in the room were diverse.  Almost all participants stood in the area signifying agreement with the statement.  Several offered examples from their countries and larger experiences that they believed demonstrated a strong causal link between disinformation and democratic erosion.  Two people, including one of the speakers, stood in an intermediate position and argued that a nuanced and contextualized approach is needed in examining cases so a binary choice between “is/not causing” was not adequate.  One person stood in the area signifying no impact of disinformation.

The moderator also asked the panelists to share their perspectives, and, in doing so, to respond to the question: “What is disinformation, is it a serious problem, and if so, why (or why not, if you believe it is not a serious problem)?”

Interactive discussion on this question between participants and the panelists continued for about 25 minutes. One of the panelists responded by asking what impact of disinformation we care about. He also suggested that disinformation is an umbrella term that is too broad to serve as a basis for regulation. A person from the audience added that disinformation is not new and that every medium has been abused for purposes of propaganda. One panelist pointed out that there is a lack of empirical evidence about the impact of disinformation: most of what we know concerns the production and dissemination of disinformation, while its effect on people's worldviews and voting behaviour is mostly taken for granted. Recent research suggests that disinformation amplifies extremist beliefs rather than instigating them. As a closing question, the moderator asked participants if any of them lived in contexts where disinformation does not have a major impact. Two people responded to say that in their countries disinformation does not appear to be causing much harm, due to the presence of a serious and legitimized mass media and other factors. A panelist concluded that high quality journalism is the best way to combat disinformation.

Part 2

The second question put to the panel and the participants was: "Can disinformation be regulated internationally? How strong and clear a baseline do existing international instruments provide for the governance of disinformation? What are the implications for rights to access information and freedom of expression?"

There was no common view on whether disinformation can be regulated internationally. Panelists doubted whether there can be one solution for all the different forms of disinformation. There was agreement on the need for a common set of principles to guide how we think of and act upon disinformation. Human rights, particularly Article 19, which protects freedom of expression and information, must be front and center of such principles.

One speaker briefly flagged three examples of efforts to devise international Internet governance responses to disinformation. These included some problematic proposals for binding treaty commitments among governments that have been floated in the UN cybersecurity and cybercrime discussions; the European Union's Code of Practice on Disinformation; and the UN Secretary General's proposed Code of Conduct for Information Integrity on Digital Platforms. It was pointed out that while the first example involved efforts to devise constraints on state behavior that would never be agreed in geopolitically divided UN negotiations, the latter two involve codes of practice pertaining mostly to the providers and users of digital platforms. It was noted that while platforms certainly have responsibilities, focusing largely on them rather than on the governments that produce or support the production of a lot of disinformation is quite a limitation. There are also open questions around the reliance on codes and guidelines variously interpreted and implemented at the national level.

The next question was: "Concerning new governance initiatives, what sort of consultation and decision-making process is best suited to the governance of disinformation, and can the IGF assume a role in the process?"

This provoked a very interesting discussion. Participants involved in the Christchurch Call shared how they put multistakeholder consultation at the centre of their efforts to combat online extremism. The key lessons they shared that are relevant to responding to disinformation were: (1) the multistakeholder approach has been critical to creating trust among the actors involved; (2) the need to take time and form partnerships with the diverse actors involved; (3) keeping the scope and focus really tight; and (4) not rushing into regulatory intervention.

Part 4 - Closing

In closing, panelists offered their main take-aways, including things they did and did not want to see.  There were calls for better empirical research and evidence about the effects of disinformation; for more nuanced policy responses, including avoidance of governments using “moral panics” on disinformation to justify restrictions of human rights; for multistakeholder participation in crafting governance responses; and for hefty fines on Elon Musk’s X for violations of the EU’s rules.

IGF 2023 WS #500 Connecting open code with policymakers to development

Updated:
Data Governance & Trust
Key Takeaways:

Interest in data sets such as GitHub's Innovation Graph (https://innovationgraph.github.com/) as a way to approach private sector data for public sector research.


Discussion of the challenges of finding skilled technical staff within government to implement open source tools, and of how to tackle the myths some may have about open source.

Calls to Action

Some topics were too broad and could be narrowed down for more in-depth discussion.


There was interest in simplifying the process of using private sector data for policymaking.

Session Report

Connecting open code with policymakers to development

 

This session built on the work of numerous different agencies, with speakers from the Government of France's Digital Affairs office, GitHub Inc., and LIRNEasia. The session focused on the theme of 'Data Governance & Trust' and on how private sector data in general, and technology platform metrics in particular, can inform research and policy on technology maturity, innovation ecosystems, digital literacy, and the monitoring of progress towards the SDGs at the country level. GitHub is the world's largest platform for collaborative software development, with over 100 million users. GitHub is also used extensively for open data collaboration, hosting more than 800 million open data files totaling 142 terabytes of data. Its work highlights the potential of open data on GitHub and demonstrates how it can accelerate AI research. GitHub has analyzed the existing landscape of open data on the platform and the patterns of how users share datasets. GitHub is one of the largest hosts of open data in the world and has experienced accelerated growth of open data assets over the past four years, ultimately contributing to the ongoing AI revolution and helping to address complex societal issues. LIRNEasia is a pro-poor, pro-market think tank. Its mission is to catalyze policy change and solutions through research to improve the lives of people in Asia and the Pacific, using knowledge, information and technology. Joining the panel was also Henri Verdier, the French Ambassador for Digital Affairs within the French Ministry for Europe and Foreign Affairs. Since 2018, he has led and coordinated French digital diplomacy. He was previously the inter-ministerial director for digital information and communications systems (DG DINUM) of France, and he was the director of Etalab, the French agency for public open data.

The session opened with an overview of what connecting open code with policymakers means and of previous efforts on this topic. Research has been done in this area, and the panel highlighted GitHub's work on partnering with EU policymakers to ensure the Cyber Resilience Act works for developers. In France, there have been policies such as the "open source software expertise center" set up within Etalab, which is part of the interministerial digital department DINUM; this is part of a wider effort to set up open source offices in governments that can be observed throughout public administrations in Europe. The expertise center will be supported by other government initiatives, such as projects within the TECH.GOUV programme aimed at accelerating the digital transformation of the public service. Other efforts, such as the French government's roadmap for developing open source to make it a vector of digital sovereignty and a guarantee of "democratic confidence", are also part of the conversation. This led to the topic of the challenges posed by unmet data needs that private sector data can help address for development purposes, for which GitHub announced the Innovation Graph. The GitHub Innovation Graph dataset contains data on public activity on GitHub, aggregated by economy, on a quarterly basis; a minimal example of working with such data is sketched below.
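
The following is a minimal sketch of how a researcher might work with such data using pandas, assuming a local CSV export with columns named economy, year, quarter, and repositories; the column names and file name are illustrative assumptions, so the actual dataset published at https://innovationgraph.github.com/ should be consulted for its real schema.

    # Minimal sketch: aggregating a (hypothetical) Innovation Graph CSV export.
    # Column names (economy, year, quarter, repositories) are illustrative
    # assumptions; check the actual dataset for its real schema.
    import pandas as pd

    df = pd.read_csv("innovation_graph_repositories.csv")  # hypothetical local export

    # Keep the most recent year and rank economies by public repository activity.
    latest_year = df["year"].max()
    recent = df[df["year"] == latest_year]
    ranking = (
        recent.groupby("economy")["repositories"]
        .sum()
        .sort_values(ascending=False)
    )
    print(ranking.head(10))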

 

Finally, the panel session concluded with a discussion on data privacy and consent, as well as efforts to promote and support open code initiatives globally. There was extensive interest from attendees in how to encourage participation and capacity building locally, and how to encourage more open source development within governments.

 

IGF 2023 DC-Gender Disability, Gender, and Digital Self-Determination

Updated:
Digital Divides & Inclusion
Key Takeaways:

Accessible design: not an afterthought, mobile phone-friendly, with easy interfaces. A multistakeholder approach to digital accessibility where the onus is not just on people with disabilities to fix the accessibility problems. Involving persons with disabilities in technology design and development processes - learning from experiences across genders, sexualities, class, caste locations. Integrating digital accessibility in formal education.


Thinking about how accessible and affordable technology is for people with disabilities across caste and class locations. Accessibility barriers are also defined by who builds tech and who it is built for. What an inclusive policy framework can look like: ideas of inclusiveness that aren’t homogenised but are representative of a spectrum of disabled experiences.

Calls to Action

A paradigmatic shift in how technologies are designed and developed: instead of developing them at scale, accounting for nuanced and individual use experiences and creating customised tech centred around layered and individualised experiences, rather than a one-size-fits-all approach.


Involving persons with disabilities in developing technologies as well as policies - recognising people with diverse disabilities as part of the digital ecosystem and digital spaces. Developing technologies and policies taking into account the diverse experiences of persons with physical and psychosocial disabilities and different layers of accessibility barriers when it comes to inhabiting and occupying digital spaces.

Session Report

 

Lived experiences

Vidhya Y:

  • Digital space is huge - when we say tech, that’s the only way as a blind person I can communicate with the world. It opens up opportunities. Growing up in a village, I didn’t have access to tech and missed out on a lot. But when I got on to online platforms, there was so much I could do. I could access the news, know what time it is, communicate via emails. Most people don’t understand braille. 
  • Taking help from someone to type messages would mean I don’t have privacy over messages I want to say. Digital platforms have enabled many disabled people to have privacy and more autonomy over their choices.
  • Websites aren’t designed in a way all can access. There are a lot of images that aren’t labeled. 
  • For women with disabilities, the barriers are too many! It’s an irony. Digital platforms have given a lot of privacy but at the same time, you have to be so careful. When Covid happened and people were trying to get on online platforms, video calls were a must. I’d adjust my screen to point a bit downwards so people are not able to see much of me. But my sister observed and told me that the camera is actually at the top of the monitor and if you put it down, people can see you more clearly. 
  • I feel I have to take second opinion about a lot of things in the digital space. New things are coming up all the time.
  • When you’re using a screen reader, if you’re in a crowded place, you tend to misread content. Voice messages also have privacy issues: eg. in conferences I’m unable to use voice message.
  • Typing may be easier if you have some other disability, but it's a huge issue for visually impaired people. 

Gunela Astbrink:

  • A young woman in Africa, a wheelchair user, has speech impairments, limited use of one hand. She was determined to study IT and went to school, vocational college, and now she sometimes tutors other students. The way she uses smartphone/laptop is with her knuckles. That’s how she communicates with her digital tools.
  • When a person with a disability is online, there’s often a sense that we are all digital beings, and there’s an assumption that we’re all on the same level and will be able to use all tools. However, this isn’t the case. Tools, websites, platforms need to be made accessible. Important for tools and learning platforms etc. to be developed along with PwDs. 
  • Nothing about us without us - so that PwDs are able to be part of development and part of the digital community.

Privacy and security concerns

Vidhya Y:

  • Digital tools enable you to do a lot of things yourself, which wasn't possible earlier. There are color recognisers, apps to tell you which currency you're using, and apps where sighted people sign up as volunteers to solve captchas. Captchas are designed so that machines cannot solve them and privacy isn't compromised, but they are a barrier for many persons with visual impairments if audio captchas are not enabled, even if you can use a computer. If I want to get help in Kannada, my local language, I won't get help at night, but if you need help in English, there will be someone to assist you.
  • I conducted digital literacy trainings with school teachers and guided them through installing these tools. We found really good uses: you can call a volunteer, and the person who picks up the phone will tell you to point your camera at the captcha on the computer and guide you accordingly. People have used these technologies even to get support in matching their sarees with their bangles.
  • But you’re forced to depend on others at certain times. You’re also wary about where you’re pointing camera - what the other person can see - what data is being collected. At the end of banking transactions, if you have to enter captcha, you have to enter all other details beforehand, which means the person supporting you can see what all you have typed. It’s a huge privacy compromise.
  • Privacy concerns around how much of you should be visible to the other person: apart from your voice you aren’t sure what else is visible. A concern for women with disabilities.
  • For FB, IG etc.: If I were to upload photos I’ve taken during this conference to FB, my cousin will give me the photos with captions. But I don’t know if I’m missing anything in the photos - as I’m relying on the captions. Sometimes people have told me, only half your face is visible, or this photo shouldn’t have been taken.

 

Padmini Ray Murray:

  • Every device we use is compromised by some form of surveillance, and it’s very difficult for non-disabled people to wrap their heads around being online, use these devices and think about how to maintain their privacy.
  • Most devices or apps - even if they’re made for disabled users, might not be taking these considerations into account - while they’re being designed.
  • While there are accessibility guidelines, those are often just the baseline, and there are much more nuanced requirements of disabled users that need to be taken into account.

 

Imagining inclusive tech

Manique Gunaratne:

  • Through assistive devices and tech, we’re able to work in an equally capable manner with non-disabled people.
  • The problem is often the cost factor in accessing technologies. E.g., hearing-impaired persons cannot hear if someone rings the doorbell, but they can see through a smartphone a picture showing that the doorbell is ringing.
  • For visually impaired people, smart glasses can identify what’s around us and provide a description of the surroundings.
  • For people with mobility difficulty, apps and technologies can help them find spaces they can access - restaurants, movie theater etc. Through hand gestures or facial expression if they can operate computers, they can also be employed and economically active.
  • Tech operating through brain functions.
  • Entertainment is not only for people without disabilities. Games, etc. need to be accessible. 
  • Technologies to give emotional recognition, especially for autistic people or those with intellectual disability.
  • Smart homes: PwDs can cook food of their choice, make domestic choices etc.

Judy Okite

  • For a long time, we’ve been advocating for physical accessibility at the IGF - hope it’s better this year. 
  • One of the things we did with KICTANet this year: we evaluated 46 government websites, just to see how accessible information is for PwDs. Unfortunately, the highest score any of them got was 80%. The feedback from the government was interesting: people felt that if you're at 80% you're in a good place, but it actually means 20% of your content is not accessible to PwDs (a minimal automated check of one accessibility criterion is sketched after this list).
  • From research we did: more emphasis is placed on persons who are blind when it comes to digital content. But persons with cognitive disability are more disadvantaged. If the content is not understandable/perceivable, then you’ve lost this person - they will not be able to interact with your content.
  • In Kenya, only about 2 years ago, cognitive disability was recognised as a disability. So we can see how far we are on inclusion. 
  • How do we ensure that PwDs are part of our change - not just because they want to, but because they have to be a part of the process.
  • Forum for Freedom in Jerusalem - in Tanzania - they know my needs on physical platforms - worked with them before. There was a ramp, but I still needed to be lifted up to reach the ramp. They had an accessible room but very small cubicles for washrooms - so I called the guy from the reception who came with a wheelchair and I requested him to push it into the washroom. He asked how can I do that? I asked him back, how do you expect me to get in the washroom then?
  • If they had included a PwD to be a part of this process, the ramp or the washroom wouldn’t have been this bad. Being deliberate in having PwDs as part of the process, the change.
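
As a hypothetical illustration of the kind of automated check such an evaluation can include, the sketch below flags images without alternative text on a single page using requests and BeautifulSoup; the URL is a placeholder, and a real evaluation covers many more WCAG criteria, including the cognitive-accessibility issues raised above.

    # Minimal sketch of one automated accessibility check: flag <img> elements
    # without alt text on a page. The URL is a placeholder; a real evaluation
    # covers many more WCAG criteria than this single check.
    import requests
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    PAGE_URL = "https://example.go.ke/"  # hypothetical government site

    html = requests.get(PAGE_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    images = soup.find_all("img")
    unlabeled = [img for img in images if not img.get("alt", "").strip()]

    print(f"{len(unlabeled)} of {len(images)} images have no alt text")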

Nirmita Narasimhan

On policy and regulatory processes

  • Important to have policies - ensures that people are aware there’s a need. Mandated. Recognised by law. The fact that there’s a legal and social requirement and responsibility to comply with standards is important in ensuring that accessibility is there. Countries that have policies are better placed in terms of how accessibility is implemented.
  • A lot of countries have implemented the CRPD (Convention on the Rights of Persons with Disabilities); domain-specific policies need to come as well, depending on different strategies and situations.
  • Eg. In India when we had to lobby for the copyright law, we had to do a lot of research on what are the legal models available everywhere. We ran campaigns, meetings, signature campaigns etc. On the other hand, when we look at electronic accessibility, we had meetings with electronics and IT departments, and that’s how we worked with them to develop a policy. While developing the procurement standard in India, we worked with agencies, industries, academic groups etc. on what the standards should be and how they will be implemented. The idea is to get different stakeholders involved and be responsible for this.

Concluding thoughts

Padmini Ray Murray

  • The biggest challenge we struggle with is when we design/develop technologies, we try to do it at scale, which means more nuanced and individual use experiences become harder to provide. This requires a paradigmatic shift in how tech is built - creating customised products. More layered and nuanced. More individualised and personalised experiences rather than one-size-fits-all.
IGF 2023 Open Forum #59 Whose Internet? Towards a Feminist Digital Future for Africa

Updated:
Data Governance & Trust
Key Takeaways:

It might have become progressively easier for women to participate meaningfully in policymaking related to digitisation (including Internet governance) over the past twenty years, but there are still barriers to overcome and to address in order to make women’s voices heard and needs met in a comprehensive and not tokenistic manner.


There is a need for diversifying and deepening conversations, perspectives, terminology, and research about feminist priorities in the Internet space in order to move beyond a common focus on challenges pertaining to online gender-based violence and related issues, to broader dimensions that shape socio-digital inequalities that continue to impact women’s experiences in Africa.

Calls to Action

Invest in developing more meaningful and diverse research and advocacy agendas pertaining to women and feminist priorities that extend beyond online gender-based violence.


Stakeholders are encouraged to continue investing in capacity-building for African women. Women who are currently actively engaging in digital policymaking and Internet governance platforms should continue to actively open up spaces for new and young women leaders who can actively participate in these conversations and discussions in the future.

Session Report

Session Summary Report

 

As part of the 2023 UN Internet Governance Forum, held in Kyoto, Japan from October 9th to October 12th, the African Union Development Agency (AUDA-NEPAD) organized an open forum on Whose Internet? Towards a Feminist Digital Future for Africa, on October 12. The session invited experts from the digital and policy sectors to a panel discussion on the opportunities and challenges faced by women working in Africa's digital economy and their role in shaping Africa's digital transformation.

 

The session was hosted and moderated by Dr Towela Nyirenda-Jere of AUDA-NEPAD’s Economic Integration Division, supported by Alice Munyua, the Senior Director for Africa Mradi at Mozilla Corporation on-site.

 

Alice Munyua from Mozilla Corporation and Liz Orembo from Research ICT Africa (RIA) opened the discussion by sharing powerful personal testimonies illustrating their experiences as women and female leaders in Africa's digital sphere. Their accounts highlighted the (mis)perception of female expertise and the importance of female role models in digital spaces. Building on their accounts, Bonnita Nyamwire from Pollicy and Dr. Nnenna Ifeanyi-Ajufo, Professor of Technology Law, shared and discussed research findings on threats of online gender-based violence, barriers faced by women in Africa's digital economy, and learnings on good practices and policy implications for ensuring safe digital spaces and socio-digital equality for women on the continent. Dr. Tobias Thiel from GIZ concluded the discussion by emphasizing Germany's commitment to feminist development policies and its continuous efforts to eliminate discriminatory structures for women, girls, and marginalized groups within the African digitalization and data sphere. All panelists highlighted the barriers women continue to face when working in digital sectors and emphasized the need to leverage women's opportunities and participation to ensure an inclusive African digital transformation.

 

Participants off- and online actively engaged in the discussion and emphasized panelists’ statements by sharing their own experiences as leading female experts in the field. The interactive discussion underlined the importance of creating safe spaces and called for policymakers to ensure the inclusion of female voices in shaping policies that ensure a fair and just digital transformation in Africa. 

 

Panelists and the audience called for investment in developing more meaningful and diverse research and advocacy agendas pertaining to women and feminist priorities that extend beyond online gender-based violence. Panelists and audience also encouraged stakeholders to continue investing in capacity-building for African women. Women who are currently actively engaging in digital policymaking and Internet governance platforms should continue to actively open up spaces for new and young women leaders who can participate in these conversations and discussions in the future. Finally, the panel discussion called on every person to consider their own unique commitment towards advocating for socio-digital equality for women on the continent and beyond, and to take tangible steps towards realizing these goals.

 

In conclusion, the session identified several key takeaways from the panel discussion and subsequent round of contributions from the audience: While it might have become progressively easier for women to participate meaningfully in policymaking related to digitalization (including Internet governance) over the past twenty years, there are still many barriers to overcome and to address in order to make women’s voices heard and needs met in a comprehensive and not tokenistic manner. In addition, the discussion identified a need for diversifying and deepening conversations, perspectives, terminology, and research about feminist priorities in the Internet space in order to move beyond a common focus on challenges pertaining to online gender-based violence and related issues, to broader dimensions that shape socio-digital inequalities that continue to impact women’s experiences in Africa.

 

 

 

IGF 2023 Lightning Talk #97 Combating information pollution with digital public goods

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

Highlighted lesser-known tools for addressing misinformation and disinformation.


There was interest in what digital public goods were and how they could be implemented

Calls to Action

Provide more hands-on opportunities to interact with the tools; a demo could be effective.


A broader understanding of what digital public goods are is needed to ensure we can support the prevention of disinformation and misinformation.

Session Report

Combating information pollution with digital public goods report

This lightning talk opened with an overview of the Digital Public Goods Alliance (DPGA), a multi-stakeholder initiative to accelerate attainment of the Sustainable Development Goals by facilitating the discovery, development, use of, and investment in digital public goods. The DPGA "defines digital public goods as open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals (SDGs)." An example of a DPG is District Health Information System 2 (DHIS2), the world's largest health management information system platform. This was followed by an overview of GitHub. GitHub is a complete software developer platform to build, scale, and deliver secure software, with more than 100 million software developers, and is used by more than 4 million organizations, from governments to international development organizations. Open source software like digital public goods is built on GitHub.

 

This session focused on how, while digital technologies are essential parts of our lives and provide solutions to some of the world's greatest challenges, we must urgently recognize and help solve their downsides. This is particularly true regarding online information pollution, which has grown to be a cause of distrust and obfuscation. During this session the speakers provided an overview of how policies are needed to combat deep fakes, analyze online news media, verify crowdsourced data, monitor technology companies' legal terms, improve access to government policies and, lastly, gain insights into the influence of digital technologies on societal conflict.

Mis- and disinformation are typically addressed through reactive measures against specific attacks or proactive prevention efforts. While these approaches are necessary and valuable, they are inherently endless and fail to address the root of the problem. Exploiting vulnerabilities for political gains will always attract malign actors, outnumbering those interested in prevention.

The issue of disinformation arises from vulnerabilities in the tools that mediate the information environment. These vulnerabilities persist because fixing them conflicts with the economic incentives of large platforms. Therefore, it is crucial to increase the costs associated with leaving these vulnerabilities open and provide incentives for their resolution. Alternatively, obligations should be imposed on actors to compel them to address these vulnerabilities.

The session provided two examples. The first, Open Terms Archive, publicly records every version of the terms of digital services to enable democratic oversight. It addresses a critical gap in the ability of activists, journalists, researchers, lawmakers and regulators to analyse and influence the rules of online services. Open Terms Archive enables safety by equipping actors who are already engaged in addressing these vulnerabilities; it amplifies their capabilities and facilitates connections for mutual reinforcement, ultimately enabling more effective action.
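
To illustrate the core idea of tracking changes to terms of service, the sketch below diffs two locally saved snapshots of a service's terms using Python's difflib; the file names are hypothetical, and this is an illustration of the concept rather than the project's actual tooling.

    # Minimal sketch: diff two locally saved snapshots of a service's terms so
    # reviewers can see exactly which clauses changed. File names are hypothetical.
    import difflib
    from pathlib import Path

    old_terms = Path("terms_2023-01-01.txt").read_text(encoding="utf-8").splitlines()
    new_terms = Path("terms_2023-10-01.txt").read_text(encoding="utf-8").splitlines()

    for line in difflib.unified_diff(old_terms, new_terms,
                                     fromfile="2023-01-01", tofile="2023-10-01",
                                     lineterm=""):
        print(line)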

The second example is Querido Diario, developed by Open Knowledge Brazil, which addresses the challenge of accessing and analysing official decision-making acts throughout Brazil's cities. With no centralised platform available, the only reliable source of information is the closed and unstructured PDF files of the official gazettes where they are published. To tackle this information gap, Querido Diario's robots collect, process, and openly share these acts. Launched over a year ago, it has grown into a comprehensive repository with more than 180,000 files, continuously updated with daily collections. Querido Diario helps combat information pollution by providing a transparent and reliable source of data that can be used to fact-check and counter false narratives, enabling informed analysis and promoting accountability. The primary users are researchers, journalists, scientists, and public policymakers, and it benefits various sectors including environmental researchers and journalists, education NGOs, and scientists working with public data. Today, Querido Diario's coverage reaches 67 cities, where 47 million people live. The next steps involve scaling up to include all 26 Brazilian states and at least 250 cities. The project aspires to incorporate Natural Language Processing models and integrate its data with other public datasets, helping users contextualise information even more.
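
As an illustration of the collect-and-extract step such a pipeline performs, the sketch below downloads a single gazette PDF and pulls out its raw text with requests and pdfminer.six; the URL is a placeholder and the code is a simplified illustration of the idea, not Querido Diario's actual implementation.

    # Minimal sketch of a gazette collect-and-extract step: download one official
    # gazette PDF and extract its raw text. The URL is a hypothetical placeholder.
    import requests
    from pdfminer.high_level import extract_text  # pip install pdfminer.six

    GAZETTE_URL = "https://example.gov.br/gazettes/2023-10-09.pdf"  # hypothetical

    response = requests.get(GAZETTE_URL, timeout=30)
    response.raise_for_status()

    with open("gazette.pdf", "wb") as f:
        f.write(response.content)

    text = extract_text("gazette.pdf")

    # A downstream step could index this text for full-text search or fact-checking.
    print(text[:500])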

Finally, we closed with a discussion on a gradient approach to AI openness. The DPGA has developed an exploratory framework to assess use cases of AI where full openness is not possible or not desirable. The audience was interested in the use of AI in preventing misinformation and disinformation, which we aim to explore in future sessions.
 

IGF 2023 Day 0 Event #182 Digital Public Goods and the Challenges with Discoverability

Updated:
Digital Divides & Inclusion
Key Takeaways:

Takeaway 1: Attendees asked thoughtful questions on how to ensure digital public goods will not be misused by bad actors. This challenge would make a great next session, exploring ways to encourage proper use of open source tools.


Takeaway 2: There was extensive conversation on capacity building, covering not just hard technical skills but also the soft policies that impact the implementation of digital public goods within a region.

Calls to Action

There is extensive interest in exploring how digital public goods are used and how to prevent actors from using the tools to create harm.


Explore ways to simplify the implementation process and to enable software developers to contribute.

Session Report

Digital Public Goods and the Challenges with Discoverability report

Summary of session

This session focused on the challenges of discoverability of digital public goods (DPGs) for governments and civil society to understand and implement. The talk opened with an overview of the Digital Public Goods Alliance (DPGA), a multi-stakeholder initiative to accelerate attainment of the Sustainable Development Goals by facilitating the discovery, development, use of, and investment in digital public goods. The DPGA "defines digital public goods as open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals (SDGs)." An example of a DPG is District Health Information System 2 (DHIS2), the world's largest health management information system platform. This was followed by an overview of GitHub. GitHub is a complete software developer platform to build, scale, and deliver secure software, with more than 100 million software developers, and is used by more than 4 million organizations, from governments to international development organizations. Open source software like digital public goods is built on GitHub.

One key element of this session was to provide more background on what open source in the social sector means. Open source refers to software whose source code is freely available to the public, allowing anyone to view, use, modify, and distribute it. This means that the software can be improved and customized by anyone who has the necessary skills, and that it can be used for a variety of purposes without restriction. Open source software is often developed collaboratively by a community and is typically distributed under a license that ensures it remains open and free to use. Open source in the social sector is defined as software built with relevance to the Sustainable Development Goals that does no harm by design and is driven by a desire to increase transparency, accountability, and participation, and to empower individuals and organizations to work together to address social and environmental challenges.

This led to a discussion of policies and tools that can help improve discoverability: public and private sector partnerships; collaborative platforms; metadata standards; long-term sustainability plans; feedback and improvement loops; and interoperability standards.

Finally, the session concluded with five simple rules for improving discoverability:

  • Rule 1: Decide what level of access you can provide for partners
  • Rule 2: Deposit your DPGs in multiple trusted repositories for access, preservation, and reuse. 
  • Rule 3: Create thoughtful and rich metadata - consider the FAIR Data Principles (see the sketch after this list)
  • Rule 4: Localize the tools for cross-domain integration 
  • Rule 5: Ensure accessibility and inclusion for ease of access
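
As a hypothetical illustration of Rule 3, the sketch below writes a small machine-readable metadata record for a digital public good, loosely inspired by the FAIR principles (findable, accessible, interoperable, reusable); every field shown is an illustrative assumption rather than a formal DPG schema.

    # Minimal sketch of "thoughtful and rich metadata" for a digital public good.
    # Field names are illustrative assumptions, not a formal standard.
    import json

    dpg_metadata = {
        "name": "example-health-data-toolkit",           # hypothetical project name
        "description": "Open source toolkit for aggregating district health data.",
        "license": "MIT",                                 # reusable: explicit licence
        "repository": "https://github.com/example/example-health-data-toolkit",
        "keywords": ["health", "open-data", "SDG-3"],     # findable: searchable terms
        "sdgs": ["SDG 3"],                                # relevance to the SDGs
        "data_formats": ["CSV", "JSON"],                  # interoperable: open formats
        "maintainer_contact": "maintainers@example.org",  # accessible: contact point
    }

    with open("dpg_metadata.json", "w", encoding="utf-8") as f:
        json.dump(dpg_metadata, f, indent=2)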

In conclusion, this was a great session that encouraged roundtable discussions; attendees raised questions on ensuring the security of open source, on preventing bad actors from misusing open source digital public good tools, and on the challenges of capacity building. As a result of this session, GitHub has launched a microsite to encourage software developers to contribute to DPGs here: https://forgoodfirstissue.dev/.

 

IGF 2023 WS #494 Strengthening Worker Autonomy in the Modern Workplace

Updated:
Global Digital Governance & Cooperation
Key Takeaways:
  • Exploitation and Inequality: Emerging technologies like AI intensify labor exploitation and escalate global inequality. The business models of companies using these tools can compromise social protection rights, as they often fail to offer decent working conditions. Vulnerable groups, including refugees, are increasingly exploited to refine AI datasets.
  • Policy and Regulation Concerns: Urgent policy reform is needed to ensure adequate transparency between employers and workers regarding technology use in workplaces. Strong workplace privacy regulations are essential to prevent unwarranted data collection, protect personal information, and to guard against the deployment of unsound analytical tools.
Calls to Action
  • Establish and Enforce Robust Regulatory Frameworks for Worker Protection and Privacy: Develop and enforce detailed, internationally-harmonized workplace data protection and privacy regulation to protect workers, including low-paid workers, vulnerable workers, and hidden labor in the gig economy.
  • Foster Industry Accountability Initiatives: Establish frameworks and bodies that scrutinize and shine a light on corporate actions, ensuring that employers across all sectors adhere to high ethical, socio-economic, and environmental standards.
Session Report

The speakers presented insights into the gig economy, the future of work, the impact of Artificial Intelligence on labor rights, and corporate accountability in the context of achieving Sustainable Development Goal 8 (Decent Work and Economic Growth).

Gig Economy:

  • Globally, platform-mediated gig workers face challenges including: low pay, long hours, lack of job security, and the absence of social protections. Case studies were presented from India and Paraguay. 
  • Gig workers face exacerbated problems due to the lack of data protection laws and regulations which apply in the workplace, and a lack of meaningful anti-discrimination regulations safeguarding independent contractors and freelance workers.

Labor Rights and Corporate Accountability:

  • While there are supportive measures for labor rights in some jurisdictions, implementation issues and challenges persist. The Covid-19 pandemic revealed the inadequacy of support for gig workers, highlighting the need for a better safety net.
  • Data protection laws and regulations are crucial to preventing the potential misuse of data collected in the workplace. At the same time, there is a need for worker autonomy in the digital age, especially in surveillance-heavy environments.
  • The concentration of power in the data brokerage industry, market dynamics, and acquisitions raise concerns about transparency, competition, and data privacy.
  • There were calls for greater accountability in venture capital and early-stage interventions in private markets. There is a need for more transparency in companies' developmental stages and more consultation with impacted workers.

Venture Capital and Economic Growth:

  • The venture capital ecosystem remains insular, favoring established networks. Only 7% of female founders globally receive backing from VC firms, pointing to a significant gender disparity in entrepreneurial support, and many problematic workplace surveillance technologies are being developed by men.
  • Platform cooperativism is a potential solution. Governments should promote the creation of fairer work platforms by the workers themselves.

Global Initiatives:

  • UN instruments like the Global Digital Compact, and the WSIS+20 Review, are positioned as tools that could aid in achieving the objectives of SDG 8.
IGF 2023 DC-SIG Involving Schools of Internet Governance in achieving SDGs

Updated:
Key Takeaways:

- Issues involving the SDGs are considered in many schools. This meeting heard reports on SDG 5 on gender, SDG 7 on access to energy, and SDG 16 on peace and justice. In follow-up discussions, target 8.6 (economic aspects) and target 9.5 (access) were also discussed.


- SIGs are becoming reference resources on IG in many countries on topics such as cybersecurity and regulatory frameworks. They can serve to bring clarity to the understanding of IG in a country among citizens and government officials.

Calls to Action

- While SIGs discuss topics concerning the SDGs, they do not always do so explicitly. While each school decides on its own curricula and modalities, addressing the SDGs explicitly could be considered in future courses.


- While SIGs can have well-established curricula, they can also adapt the content to special target groups to produce flexible and adaptable material. SIGs can share their resources on the DC-SIG wiki and website provided by the Dynamic Coalition to help others and to promote their own efforts and achievements.

Session Report

 

Session Presentation

Schools on Internet Governance (SIGs) are an important initiative that help with creating and strengthening capacity in Internet Governance. Regional SIGs have been operating in all the regions of the world, while national SIGs exist in many, but not all, countries. The DC-SIG provides a common platform where SIGs can discuss matters of their interest, share information, share innovations and discuss adaptive mechanisms as they evolve. While the global pandemic did adversely impact many SIGs, most are now back in a fully functional manner.

This session took stock of the current status of SIGs, supported community members who want to establish SIGs in countries that do not have them, and examined how SIGs can improve themselves by adopting new programmes and courses.

As part of each yearly meeting, the DC-SIG takes on a topic of specific interest for discussion and further development of plans. This year, the focus was how the DC-SIG can contribute to developing curricula in support of the SDGs.

1- A slideshow of existing SIGs was shown, along with a presentation of the recently formed Japan SIG. New schools were given a chance to describe themselves.

2- Schools on Internet Governance (SIGs) and their impact on achieving the Sustainable Development Goals (SDGs 5, 7, and 16)

SDG 5 on gender equality. 

  • Ms Sandra Hoferichter (EuroSSIG)

Schools on Internet Governance (SIGs) contribute to this SDG because they are inclusive and cover a variety of themes. SIGs are a good effort to close the gender gap in education and to help promote women into leadership positions. For many years, EuroSSIG's application numbers have shown that more women are interested in these topics.

  • Anriette Esterhuysen: AfriSIG addresses SDG 5 through developing women as leaders in IG and by including gender specific topics in the programme. Examples would be sessions on online gender-based violence and on the gender digital divide and how to respond.
     
  • Ashrafur Rahman Piaus (bdSIG)
    The Bangladesh SIG works with rural people on SDGs 5 and 9 by including women in its school and helping them achieve their goals, also reaching transgender people and many other marginalized communities.

SDG 7 on access to energy 

  • Ms Olga Cavalli (South SIG and Argentina SIG)

Access to energy is closely linked to climate change, so this SIG has held panels discussing the impact of energy consumption. On the other aspect of energy, it is important to note that there is a gap between areas that have access to energy and those that do not. The SIG discussed this issue with different experts and panelists.

Other SDGs

  • Mr Alexander Isavnin (Russia SIG) spoke on the SDG about peace and justice. SIGs can help build new standards, help strengthen the multistakeholder process (as in ICANN), and also promote inclusion and effectiveness.
  • On SDG 8.6, the Pakistan SIG conducts a session on digital entrepreneurship, inspiring youth to capitalize on the economic opportunities of the Internet. For SDG 9.5 (c), access to the Internet, it organizes sessions on access and inclusion where government and the private sector brief the audience about their plans for the expansion of ICT services and the state of infrastructure in the city/area where the school is being held (pkSIG is held in a different city every year).
  • Some SIGs discuss topics related to the SDGs, but not consistently, so it is worth exploring after this session how SIGs are promoted and present in Japan, for example.
  • Abdeldjalil Bachar Bong (Chad SIG) made the point that every SIG, in its own specific way, already contributes to the SDG topics.

Roundtable  Discussion on the evolution of SIGs

  • SIGs are becoming references on IG in many countries on different topics, such as cybersecurity and regulation, and can help bring clarity to the understanding of IG.
  • The SIGs can have a root in a solid curriculum and then adapt the content to a special target group to produce flexible and adaptable content. 
  • The SIGs  can share their resources on the SIGs wiki and website to help others and promote their own achievements. This may align with the concept of open education. 
  • There are different types of SIGs who cater for different groups of people.

 

IGF 2023 Lightning Talk #116 Canada’s Approach to Regulating Online Safety

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

In Canada, there is significant interest in regulating serious harms that result from online interaction, with many recognizing the need for a systems approach, as opposed to one focused only on individual-level content.


The directions of legislative design seen across governments are, in many cases, more reflective of the legislative context (existing legislation, constitutional provisions) that creates legislative constraints than of differences in fundamental opinions.

Calls to Action

For regulators creating regulations on online harm to be clear about the legislative intent, and to focus on fulfilling that specific intent as opposed to other, potentially unachievable, goals.


For conversations to be clearly centered on harms as experienced by people living within that jurisdiction.

Session Report

In the session on online harms in Canada, we started by discussing the Canadian definitions surrounding online harm, reminding participants that the talk was centered on Canadian usage of terms, which may differ from how the same terms are used in other jurisdictions, and inviting participants to stop the presenter and ask questions if any points were unclear. We then defined online harms to mean financial, physical, psychological, and emotional harm that results from interactions that take place through the Internet, whether or not they respect local, regional, or national borders. We then listed a number of examples of online harm, making clear that some instances (such as child sexual exploitation material) are illegal under the existing legal framework, while others (such as misinformation) are harmful but legal.

We then moved to a discussion of the results of a survey of Canadians' experience of online harm, demonstrating that a significant number of Canadians are frequently exposed to harmful content. In particular, we noted that while many Canadians saw individuals as being largely responsible for generating harmful content, they did not see individuals as being primarily responsible for reducing the amount of harmful content online, instead seeing a larger role for online platforms and the government. This particular finding was discussed in detail, in particular as it informs the public policy conversation on the topic.

We then moved to a discussion of the legislative process currently under way in Canada to tackle online harms, situating the potential legislation within a slew of legislative activity concerning Internet governance and the digital economy that has occurred in the past three years, and stressing that efforts to tackle online harms in Canada cannot be understood in isolation. From that point, a deeper exploration of the regulatory tensions surrounding online harms legislation followed, focusing in particular on how it interacts with public sentiment in Canada, and on how the law's potential impacts on the preferred economic system, as well as other existing legislation (including constitutional law, in Canada in the form of the Charter of Rights and Freedoms), direct the potential shape the legislation might take. The formal presentation finished by situating the Canadian conversation in a global context, stressing that while there is no unified approach to tackling online harm, many of the deviations seen globally may not reflect irreconcilable fundamental differences in definitions of online harm, but are much more likely to reflect the legislative constraints different countries face and the regulatory actions (from both a legal and a political perspective) they can take.

After the talk, a number of questions were asked by the participants. One concerned how legislative action can incorporate the idea of "benign exposure" to less harmful content as training to inoculate a user against more harmful content. The presenter discussed at length current thinking on that topic in policy approaches to tackling mis- and disinformation, including approaches to increase digital media literacy amongst different groups.

IGF 2023 Open Forum #52 RITEC: Prioritizing Child Well-Being in Digital Design

Updated:
Key Takeaways:
In addition to the clear and urgent need to identify and address online risks and harms for children associated with the digital environment, sustained multisectoral efforts that prioritize child participation, including research, are required to adequately understand and leverage the positive value that digital experiences can deliver for children’s well-being in a digital age.
Calls to Action
1. To designers of digital play: consider the Responsible Innovation in Technology for Children (RITEC) project outputs, in particular the children’s well-being framework, in your decision-making processes. 2. To governments: consider how to create an enabling environment for businesses to prioritize children’s well-being in digital design.
Session Report

RITEC: Prioritizing Child Well-Being in Digital Design

Open Forum #52 - Session Summary

Speakers

  • Adam Ingle, The LEGO Group
  • Aditi Singh, Young Advocate, Dream Esports India and Esports Monk
  • Professor Amanda Third, Western Sydney University
  • Sabrina Vorbau, EUN
  • Shuli Gilutz, PhD, UNICEF 

Purpose: The session introduced the concept of well-being for children in the digital age before going on to examine its importance when we consider the centrality of digital technologies in children’s lives and the rapidly growing concerns around online harms. 

Part 1: Setting the scene on child safety and well-being in a digital age

This part commenced with Aditi Singh, Young Advocate, describing her own experiences with online gaming and how, from a young age, games pushed her critical thinking and collaboration skills and enabled her to grow intellectually and socially. However, Aditi also described the harms, particularly those related to being a young woman online, associated with gaming. This includes how she, and other children, often don’t understand the risks of sharing personal information and prevalence of gender-based harassment.

Aditi then discussed how forums like the UNICEF Game Changers Coalition have helped her and others reimagine the role of women in online gaming and drive the design of games to make them more age-appropriate spaces. Aditi called for governments and other bodies to incentivize private firms to build experiences with children at their core, and noted that platforms themselves need to realize that their choices can unlock the benefits of games while minimizing the risks.

Sabrina Vorbau from European Schoolnet followed Aditi, discussing the EU's revised Better Internet for Kids (BIK) strategy and how the revision process ensured the new BIK onboarded diverse views, including those of children, which were instrumental in shaping the strategy. Ultimately this ensured the strategy adopted a more modern approach to promoting the protection, empowerment and participation of children online. Sabrina highlighted how young voices also helped inform the Safer Internet Forum conference, informing important matters like topics, speakers and themes. Sabrina reinforced the need to educate with young people, not simply to them or for them.

Shuli Gilutz then discussed how design philosophies within industry are critical to embedding digital well-being into online play. Shuli unpacked the concept of 'well-being', noting that it is about the subjective experiences of children and includes not just safety but also outcomes like empowerment and creativity. Shuli described how RITEC is working with designers to develop a guide for business, giving them the tools to create positive digital experiences that are safe and private but also advance well-being.

Part 2: the RITEC project

Adam Ingle provided an industry perspective of why designing for children’s experiences is critical, discussing how the LEGO Group is embedding the concept in its own online play products. Adam highlighted that the RITEC project is about developing an empirical basis for understanding what digital well-being looks like while also creating the tools to proliferate responsible design throughout industry. Adam discussed the LEGO Group’s internal processes that helped the company implement best practice, this includes incorporating the views of child rights experts in product development processes, adopting clear digital design principles built around well-being as well as ensuring business metrics and KPIs also measure success against well-being. Adam concluded by noting that it’s not just about equipping businesses with design tools, but that cultural change is also needed to lift industry standards.

Amanda Third introduced the RITEC project itself, based on engagement with almost 400 children (predominantly from the global south) and driven by their own views on digital play. Crucially, the project revealed that digital play brings joy and satisfaction and that children experience many benefits, particularly through fostering social connection and promoting creativity. They are, however, conscious of the dangers and expect governments and firms to protect them.

Amanda noted how the perspectives of children informed the design of a well-being framework with eight components (competence, emotional regulation, empowerment, social connection, creativity, safety and security, diversity, equity and inclusion, and self-actualization). The project has also developed metrics to determine whether digital play experiences are meeting the above eight components of well-being, so it is a practical, measurable framework and not just an abstract one. Amanda concluded by reinforcing the benefits of online play for children, as well as the criticality of involving children in research.

Shuli noted the next steps for the RITEC project, which includes the guide for business that summarizes the research and makes the findings actionable. Project managers are building the guidance with feedback from designers to ensure the tools speak design language and can be adopted with relative ease.

Panelists were asked to each note a critical action for embedding responsible digital design. Sabrina highlighted the importance of youth participation and including young voices in policy design. Adam emphasized the need for policymakers to adopt a holistic approach to online regulation that balances both harms and benefits and incentivizes firms to design for well-being. Shuli stated that industry needs to pivot towards more holistic design philosophies, including empowerment rather than just engagement. Amanda cautioned that we should also recognize the limits of design and how it is one part of a wider solution that includes cultural change and education.

QUESTIONS AND DISCUSSION:

How do we reach a truly representational group of young people? Amanda noted that it is important to reach out to partner organizations who have expertise in engaging vulnerable and diverse perspectives, but also that there is no perfect research method for participation, and we all need to move forward consciously.

How do we design for the evolving capacities of children? It was noted that regulatory frameworks require firms to consider the different capacities of children, and Adam discussed how clever technical design can ensure that, for example, social settings are more limited for younger users but expand for older ones, who can engage with strangers in a more mature way (and with less risk).
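
As a rough illustration of this kind of age-tiered design, the sketch below returns different default social settings by age band. The age thresholds, setting names and defaults are hypothetical assumptions, not any platform's actual policy.

# Minimal sketch of age-tiered default settings; all values are assumptions.
def default_social_settings(age: int) -> dict:
    if age < 13:
        return {"chat_with_strangers": False, "friend_requests": "off",
                "profile_visibility": "friends_only"}
    if age < 16:
        return {"chat_with_strangers": False, "friend_requests": "friends_of_friends",
                "profile_visibility": "friends_only"}
    # Older teens get broader defaults but can still tighten them.
    return {"chat_with_strangers": True, "friend_requests": "anyone",
            "profile_visibility": "public"}

print(default_social_settings(11))  # most restrictive defaults
print(default_social_settings(17))  # broader defaults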

What is the role of parents and educators and how does the framework include them? Shuli noted that the main recommendations for parents are (1) play with your kids: once you play with your kids you understand the benefits and risks, and that helps the discussion happen; and (2) talk to children about what you, as a parent, are worried about. Sabrina noted that conversations between parents and children about online safety are critical.

 

IGF 2023 Open Forum #163 Technology and Human Rights Due Diligence at the UN

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

- Access to effective remedy is crucial, noting the impact of technologies on marginalized and vulnerable populations. There is a need to build in elements of independent assessment for oversight and accountability reasons. Transparency on the process and practice and continued engagement with civil society are key. Effective enforcement is also a key element to the success of this guidance.

Calls to Action

- The HRDD Policy Working Group should take seriously the questions raised in the discussion on transparency, independent assessments, and enforcement as it implements the next stages of the policy guidance.

Session Report

The UN is developing a guidance note on human rights due diligence for its use of digital technology. This process has included consultations with internal and external partners, helping mainstream human rights due diligence and align approaches across the UN system. The guidance, which has gone through multiple drafts, aims to be inclusive and address differential impacts, especially on gender and intersectionality. It will be considered for implementation across the UN system following feedback and endorsement.

UNHCR is actively applying human rights due diligence in its digital technology use, focusing on complex settings. They have a range of policies and are working on a formal framework to align with international human rights and ethical standards. They have been involved in developing the guidance through case studies and strategic partnerships, and the guidance has evolved to become more implementable. UNHCR plans to incorporate the guidance into their digital strategies.

The World Bank commends the principles-based approach but emphasizes the need to consider different levels of development and maturity among member states, stressing the importance of adapting the guidance to each country's specific context while maintaining universal principles.

Access Now highlights that access to effective remedy is crucial, noting the impact of technologies on marginalized and vulnerable populations. There is a need to build in elements of independent assessment for oversight and accountability reasons. Transparency on the process and practice and continued engagement with civil society are key. Effective enforcement is also a key element to the success of this guidance, as well as transparency in private-public partnerships.

The session concluded with OHCHR emphasizing the need to take seriously the questions raised in the discussion on transparency, independent assessments, and enforcement, for the HRDD Policy Working Group to take on board as it implements the next stages of the policy guidance.

IGF 2023 Networking Session #78 Governing Tech for Peace: a Multistakeholder Approach

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

While some perceive technology as a threat to peace (cyber vulnerabilities, privacy and discrimination issues, disinformation and polarisation on digital platforms, trust in information and data undermined by AI), digital technology should also be seen as a peace-enhancing factor, if properly governed by avoiding "tech-solutionism" and adopting an inclusive, multistakeholder approach to implementing PeaceTech initiatives.

,

We need to move from "coercive peace" (tech for security and stability) to "persuasive peace" (tech and data to promote social cohesion). We need human rights due diligence for the procurement process of tech solutions: tech that violates human rights, dignity and freedom should not be called PeaceTech. To enhance social trust, we should regulate processes rather than content, so that the Internet can become truly transparent and accountable.

Calls to Action

To bring together different stakeholders (governments, tech-companies, NGOs, academia) to discuss the potentials and challenges of PeaceTech, define key areas of intervention, and implement collaborative projects to enhance peace and social cohesion via the safe and responsible use of frontier technologies.

Session Report

The Networking Session started with a round of introductions; the participants came from different sectors, but their common thread was using technology for peace and sustainable development. At the beginning of the discussion, the participants tackled the definition of peace, as an important first step in determining the role of technology in its enhancement. Human rights were mentioned as a necessary, but not sufficient, condition for peace, along with other criteria such as the positive definition of peace, according to which peace implies attitudes, institutions and structures that create and sustain peaceful societies, rather than the mere absence of violence. When it comes to the relationship between technology and peace, the participants identified both positive and negative impacts of tech on peace. As PeaceTech advocates using technology as a tool to achieve peace, PeaceTech should not be associated with any technology that violates human rights and dignity or endangers people’s freedom. In line with that, the participants commented on the need to move from coercive peace, which entails using tech centrally to obtain security and stability, to persuasive peace, in which technology and the collected data can be used to advance peace and social cohesion. Building trust and creating a safer space without compromising on freedom of expression was identified as another crucial mission. Bearing in mind people’s tendency to behave responsibly when they are held accountable for their words and actions, the participants mentioned the need for raising transparency and accountability in the digital environment. An example that came up was the social scoring system in China, relevant both for the trust-building issue and for defining the areas that PeaceTech includes. The participants agreed on the importance of bringing together stakeholders from various fields, such as governments, tech companies, NGOs and academia, as well as from different parts of the world and with different perspectives. Through this multistakeholder approach, the actors would discuss the potentials and challenges of PeaceTech and areas of possible intervention, and implement collaborative projects that would contribute to using technology safely and responsibly to improve peace and social cohesion.

IGF 2023 Day 0 Event #189 Women IGF Summit

Updated:
AI & Emerging Technologies
Calls to Action

Women IGF should study the costs of excluding women from digital leadership and digital spaces, and the cost of women’s lack of Internet access.

,

Women IGF should be recognized as an NRI, and be inclusive of and representative of global issues.

Session Report

The first call to action is to promote Women IGF globally, to identify and work with ambassadors or champions of Internet governance to push for the national actions required to empower women, to give women the opportunity to participate as leaders in Internet governance and policy formulation, and for Women IGF to be recognized as an NRI at the global IGF level. The second is to support the inclusion of Feminist Principles in the Global Digital Compact.

IGF 2023 Open Forum #98 CGI.br’s Collection on Internet Governance: 5 years later

Updated:
Global Digital Governance & Cooperation
Key Takeaways:

Libraries play an important role in providing access to knowledge. CGI.br has been working on implementing a library and many outreach initiatives that can inspire other organizations to make information on Internet governance more accessible.

,

Controlled vocabularies are essential resources for organizing and retrieving information and data on Internet Governance. In this regard, artificial intelligence and machine learning tools can be used to automate the building of taxonomies.

Calls to Action

The IGF should provide a space for experts and stakeholders to share insights, best practices, and challenges related to building and maintaining collections on Internet governance.

,

Stakeholders need to cooperate more on building collections on Internet Governance. One essential area of collaboration is the development of taxonomies and vocabularies specific to Internet Governance.

Session Report

The Open Forum "CGI.br’s Collection on Internet Governance: 5 years later" was presented at IGF-2023 in order to continue the discussion that began in 2017 with the Open Forum titled "Memory and documentation in Internet Governance: The challenge of building collections". It had an audience of 12 people and saw five interactions with the audience.

The moderator Vinicius W.O. Santos provided context by explaining that the earlier open forum was co-organized with the Internet Corporation for Assigned Names and Numbers (ICANN) and focused on documentation and preserving institutional information. Additionally, the Brazilian Internet Steering Committee (CGI.br) team shared its initial efforts to create a specialized library in Internet governance.

The Speaker Jean Carlos Ferreira reported on the main activities and progress made since the last Open Forum about the CGI.br collection. He highlighted actions taken within the Brazilian Internet Steering Committee (CGI.br) and The Brazilian Network Information Center (NIC.br) related to producing and sharing information on Internet governance in Brazil.

The presentation mentioned the wide range of materials produced by CGI.br and NIC.br, including books, guides, reports, CGI.br meeting minutes, resolutions, technical notes, and other promotional materials. 

Ferreira described the main pillars of CGI.br's collection:  1) Documentation of CGI.br activities; 2) Publications; and 3) Specialized Physical Library. The project also includes the development of a digital repository that will include all materials from the Brazilian IGF.

Regarding the initiative's challenges, the presentation raised the need to build a multilingual Internet Governance vocabulary for standardized document indexing. Another highlighted challenge referred to implementing and maintaining robust, though complex, open-source tools that facilitate integration with other collections and collaboration with other organizations.

The moderator emphasized the importance of the session, as information organization and dissemination in the Internet Governance area are seldom discussed but vital.

Comments from the audience pointed out that CGI.br's collections play a fundamental role in strengthening the community and knowledge development on Internet Governance in Brazil. One participant drew attention to the use of artificial intelligence and machine learning in document indexing and in designing taxonomies. Another participant also mentioned the possibility of using language models for term extraction to build a taxonomy. A third participant inquired about lessons learned during the project and tips for institutions interested in implementing similar initiatives.

The speaker and the audience discussed the need to build an Internet Governance taxonomy for better information organization. Developing this taxonomy is a challenge faced by the Internet Governance community due to the diversity of topics and specializations within this field. Therefore, it is essential to bring together the librarian community, the Internet technical community, and other stakeholders to discuss and create an adequate vocabulary and taxonomy for the Internet Governance area.
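
As a rough illustration of the term-extraction idea raised by the audience, the sketch below counts candidate terms (single words and two-word phrases) across a tiny invented corpus of document titles; a language model or a richer statistical method could replace this step. The corpus and the stopword list are assumptions for illustration only, and any extracted candidates would still need review by librarians and the technical community before entering a controlled vocabulary.

# Minimal sketch of candidate-term extraction to seed a controlled vocabulary.
# Uses simple n-gram frequency rather than a language model; corpus and
# stopwords are invented for illustration.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "on", "in", "for", "a", "to"}

def candidate_terms(documents: list, max_ngram: int = 2) -> Counter:
    counts = Counter()
    for doc in documents:
        tokens = [t for t in re.findall(r"[a-z]+", doc.lower()) if t not in STOPWORDS]
        for n in range(1, max_ngram + 1):
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i:i + n])] += 1
    return counts

corpus = [
    "Internet governance and digital inclusion in Brazil",
    "Data protection and Internet governance frameworks",
    "Digital inclusion policies for Internet access",
]
# The most frequent candidates would then be reviewed by domain experts
# before being added to the taxonomy.
print(candidate_terms(corpus).most_common(5))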

The session featured comments from Mr. Winston Roberts, representing the International Federation of Library Associations (IFLA), who mentioned that IFLA is involved in the Internet Governance process, participating as one of the multistakeholder communities. He pointed out the critical role that Internet Governance plays in delivering library services and disseminating information. He emphasized the importance of collaboration and cooperation between libraries and the Internet technical community. He discussed the update of IFLA's Internet Manifesto, encouraging participants to reach out to IFLA and its regional representations in Latin America and the Caribbean for more information.

In conclusion, the open forum fostered an important discussion on the need for collaboration and dialogue within the Internet Governance community to create a taxonomy that addresses Internet Governance topics. It underscored the importance of CGI.br's collections in strengthening knowledge development within the Internet Governance community.

IGF 2023 Town Hall #162 How prevent external interferences to EU Election 2024 - v.2

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

An efficient fight against disinformation at election time requires a framework for broad cooperation between the different stakeholders, continuous monitoring of the phenomenon, and rules for transparency in the different processes. A “big stick” against those who do not want to play by the rules is also very useful. In case of non-compliance, the EU Commission can issue responses ranging from warning letters to fines of up to 6% of global turnover.

,

Concerning the coming European elections, EDMO set up a specific task force which has three areas of activity: - the past, i.e. reviewing old electoral campaigns to identify the different strategies - the present, i.e. an evaluation of the main risks, country by country - the future, i.e. how to better prepare the network for the coming campaign.

Calls to Action

Under the guidance of the Commission, EDMO has created a task-force covering all EU countries and all EU languages with the involvement of a broad set of stakeholders to carry out a risk assessment, monitor and report on mis/disinformation trends, and increase cooperation between the stakeholders.

,

One of the new challenges is generative artificial intelligence, which can amplify intentional disinformation campaigns: a human-centric approach needs to clearly separate human from artificial output. Therefore, AI output will not have copyright or free speech rights, and will need to be clearly identified (e.g., through watermarking).

Session Report

IGF 2023 Town Hall #162 How prevent external interferences to EU Election 2024

Esteve Sanz in Kyoto and Albin Birger from Brussels, the representatives of the European Commission, stressed that disinformation is false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm. It is not the Commission’s aim to create a ministry of Truth, but to make the online environment more transparent and its actors accountable, to empower citizens, and to foster open democratic debate. One of the new challenges is generative Artificial Intelligence, which can amplify disinformation campaigns: a human-centric approach needs to clearly separate human from artificial output. Therefore, AI output will not have copyright or free speech rights, and should be clearly identifiable (identifying the effective way to do so, for instance through watermarking, remains a challenge).
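
As one very simple illustration of making machine output "clearly identifiable", the sketch below attaches a signed provenance tag to generated text and verifies it later. This is an assumption-laden toy, not the Commission's mechanism or a real watermarking scheme: the tag is trivially stripped once the text is copied, which is precisely why effective identification remains an open challenge.

# Minimal sketch of metadata-based provenance labelling for AI-generated text.
# Illustrative only; far weaker than true watermarking of generative output.
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # assumption: a key held by the generating service

def tag_ai_output(text: str) -> dict:
    signature = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return {"content": text, "generator": "ai", "signature": signature}

def verify_tag(record: dict) -> bool:
    expected = hmac.new(SECRET_KEY, record["content"].encode(),
                        hashlib.sha256).hexdigest()
    return record.get("generator") == "ai" and hmac.compare_digest(
        expected, record.get("signature", ""))

record = tag_ai_output("Example of machine-generated text.")
print(verify_tag(record))  # True only while the tag travels with the content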

 

They also presented the articulation of the different EU’s initiatives (regulatory and other) and institutional set up to fight against disinformation:

  • under DG CNECT, regulations have been developed and are now in place at EU level (Digital Services Act and Digital Markets Act); the Code of Practice on Disinformation was strengthened in 2022, empowering industry to adhere to self-regulatory standards to combat disinformation. The Code of Practice aims to be transformed into a Code of Conduct under the DSA (to constitute a risk mitigation tool for Very Large Online Platforms, while remaining voluntary); and the European Digital Media Observatory (EDMO) has been set up to support the creation of a cross-border and multidisciplinary community of independent fact-checkers and academic researchers.
  • under the European External Action Service, different strands of work aim to foster international cooperation, to increase situational awareness and to coordinate the response to Foreign Information Manipulation & Interference (FIMI), including with partner countries, e.g. a Rapid Alert System between EU Member States’ administrations, the creation of the EUvsDisinfo database, and a FIMI “toolbox”.
  • DG COMM provides for internal Commission coordination and factual communication on EU policies, through monitoring and analysis of related areas, with an accent on debunking false narratives (e.g. climate change disinformation), and through the promotion of media literacy initiatives.

 

Specific situations also call for targeted and coordinated actions, e.g. the imposition of EU sanctions on state owned outlets suspending RT and Sputnik’s broadcasting in the EU.

In view of the coming 2024 elections, specific initiatives have been put in place to further cooperation between the different actors:

- within the framework of the Code of Practice there is a Working Group on Elections, with a focus on the activities of the signatories and the facilitation of exchange of information between them

- under the guidance of the Commission, EDMO also has created a task-force covering all EU countries and all EU languages with the involvement of a broad set of stakeholders to carry out a risk assessment, monitor and report on mis/disinformation trends, and increase cooperation between the stakeholders.

Stanislav Matejka, representative of ERGA, explained that the European Regulators Group for Audiovisual Media Services functions as an expert body, which is also tasked with providing the Commission with essential evaluation of the local implementation of the Code of Conduct and of local compliance with transparency obligations. It coordinates the work of the local authorities to monitor the effective implementation of European policies in these matters (e.g. access to data), and handles the repository of political adverts.

Paula Gori, Secretary General of EDMO, stressed the necessity of a multidisciplinary approach to the phenomenon of disinformation, which requires expertise in numerous fields, from emotion analysis to computing. In that sense, EDMO should be considered a platform offering tools to experts from the different fields, from fact-checkers to academic researchers, without forgetting the fundamental promotion of media literacy.

Giovanni Zagni, representative of a member of the network of fact-checkers and chair of the EDMO task force on elections, explained how their work has evolved from the sole analysis of content (which nevertheless remains an important part). For example, they set up a specific task force on Ukraine which led to 10 recommendations to policy makers; they produce a monthly brief on the tactics of disinformation.

Concerning the coming European elections, EDMO set up a specific task force which has three areas of activity:

- the past, i.e. reviewing old electoral campaigns to identify the different strategies

- the present, i.e. an evaluation of the main risks, country by country

- the future, i.e. how to better prepare the network for the coming campaign.

Caroline Greer, representative for TikTok, expressed the support of the company for fact-checking.

Concerning the coming elections, TikTok has a global election integrity program, with a template that is applied to local circumstances. This includes:

- specific election policies

- community guidelines

- a full prohibition of political advertising (at all times)

- a restriction of certain political activities such as funding campaigns

- local “election hubs” that inform citizens about, for example, where to vote, etc.

Eril Lambert, from Eurovisioni in Rome, expressed appreciation for the role attributed by the European Union to civil society in the mechanisms to fight disinformation and raised several questions to the representatives of the EU and of the platforms. In response to different questions online and in the room, it was clarified that the voluntary Code of Conduct is only one tool to demonstrate compliance with European rules. The objective is to bring disinformation to light through transparency: the Commission often launches investigations, and the DSA has now added an auditing layer to the instruments at its disposal. Takedowns by platforms, together with their motivation and any eventual appeal, have to be sent to a Commission database.

In case of non-compliance with the rules, the Commission has several means available, such as warning letters and the imposition of (large) fines of up to 6% of global turnover.

It was also indicated that it is important to improve collaboration between platforms, authorities, and institutions such as EDMO, for example to facilitate researchers’ access to platform data.

Transparency of recommender systems is also an issue. TikTok, for example, allows users to reset their recommendations to avoid remaining locked in a filter bubble, or to refuse a personalized feed.

The conclusion was that an efficient fight against disinformation requires a framework for broad cooperation between the different stakeholders, continuous monitoring of the phenomenon, and rules for transparency in the different processes.

A “big stick” against those who do not want to play by the rules is also very useful.

IGF 2023 Networking Session #172 Networking for Information Integrity in Asia and Globally

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

The process of negotiating internet governance issues is opaque and confusing to ordinary people, particularly in less developed, global majority contexts. There needs to be a multistakeholder approach (public sector, private sector, media, academia, civil society, tech companies) to address internet governance specifically focusing on information integrity issues.

,

Civil society engagement with the private sector has become more difficult as tech companies disinvest in trust and safety teams. Certain platforms, such as TikTok, have become more responsive, for example to physical threats of violence or violent images, while others, such as X, have been challenging to engage.

Calls to Action

All stakeholders should work with and pressure private sector technology companies to have clear and robust escalation paths that are not based on personal relationships or single employees committing to action.

,

Civil society should form regional networks so that similar closing contexts can share resources and strategies. Through networks, CSOs should look to share information to get a more holistic view of current data sets, engagement experiences, and historical data around closing societies and other contexts.

Session Report

Major themes:

This session brought together stakeholders from civil society across Asia and Globally to discuss the challenges facing CSOs when trying to build a resilient information space, especially in closed or closing societies. NDI discussed its Info/tegrity network and other means of connecting with groups across civil society to develop capacity to address information integrity issues and contribute to internet governance discussions. Experts from Pakistan and Taiwan shared the challenges associated with engaging social media platforms to gather data for critical research, support an open, democratic and free information environment during elections, and escalate cases of online harassment and abuse. The session then split into four break-out groups to share both existing challenges and potential solutions across the major themes on this issue.

Group 1: Challenges of working online in closed societies

  • This group discussed the feasibility of creating a global network of CSOs for groups or individuals working in closed societies. They agreed that while a network of support is an important component of successfully navigating a closed space as a CSO, regional-level networks make more sense than global networks. Closed societies face unique challenges within their larger classification, and convening at the regional level would allow groups to take a narrower, deeper approach to networking than a broad, shallow global network would achieve. They cited current work in Asia around protecting journalists in closed societies as an existing model of their proposal.

Group 2: Social media data access for research

  • This group discussed current methods of monitoring social media platform information and what resources would make their work easier. They focused on ways CSOs can support each other’s work in addition to talking about recent API changes that have made research more difficult. 
  • They highlighted that to continue the important work of researching the information landscape using social media data, they recommend that CSOs build regional networks to share their experiences across similar contexts and share their current data sets and historical data sets to bolster the total amount of data and enrich everyone’s data sources. 

Group 3: Coordination with technology platforms around trust and safety concerns

  • This group discussed the varying roles specific social media platforms play across Asia and the world. They also emphasized that platforms’ gutting of trust and safety teams across the board has resulted in a delay or lack of response when online harm is reported, and an uptick in attacks on activists and human rights defenders.
  • Their main point was that while programs like Meta’s Trusted Partner Program are effective in providing an escalation path, access is not equitable and relies on personal relationships or on individual tech platform employees prioritizing trust and safety. A systemic fix is needed, especially with the 2024 elections around the corner. The recommendation from this group is that all stakeholders should work with and pressure private sector technology companies to have clear and robust escalation paths that are not based on personal relationships or single employees committing to action.

Group 4: Internet governance for information integrity

  • This group recommended several strategies to improve coordination at the global level around local, national, and/or regional Internet governance and policy best practices. These include adopting a multistakeholder (public sector, private sector, media, academia, civil society, tech companies) approach to Internet governance to make the process more accessible, prioritizing tools that enable access for people with disabilities and other marginalized groups, and developing regional and local strategies for Internet governance as well as a global perspective.
  • They also suggested that a human rights approach can be incorporated into technology platform policy by applying the multistakeholder framework to implement better interaction, information sharing and policies with the private sector. This would have impacts such as more robust privacy and data protection procedures, simplifying the language that platforms use to communicate their policies (including expanding available languages), and creating quantifiable measures for tracking online harms.
IGF 2023 Lightning Talk #37 Open Data Evaluation Model in Brazilian Governmental Portals

Updated:
Data Governance & Trust
Key Takeaways:

Takeaway 2: Brazil has begun implementing such a tool

,

Takeaway 1: Tools for automated evaluation of open data portals and open data best practices can help to improve open data quality

Calls to Action

Call to action 2: Civil society actors involved with open data should become aware of the existence and workings of such evaluation tools

,

Call to action 1: Governments around the world should follow Brazil's example and implement evaluation models.

Session Report

Report on Lightning Talk #37: "Open Data Evaluation Model in Brazilian Governmental Portals" 

Introduction

The lightning talk "Open Data Evaluation Model in Brazilian Governmental Portals" was presented at the Internet Governance Forum, shedding light on the critical issue of data standardization and the efforts made by the Brazilian Network Information Center (NIC.br) to address this challenge. The talk emphasized the importance of open data quality, presented an automated evaluation model under development for the Brazilian Open Data Governmental portals, and issued two key takeaways and call-to-action messages.

Key Takeaway Messages

The presentation by the speaker highlighted two primary takeaway messages:

1. Tools for Automated Evaluation of Open Data Portals Enhance Data Quality

The first crucial takeaway from the talk was the significance of automated evaluation tools in enhancing the quality of open data. Open data portals often lack standardized information structures, which hampers efficient data access and utilization. The speaker stressed the need for standardized principles and best practices for publishing open data. Tools designed to evaluate open data portals and ensure adherence to these principles can play a vital role in improving the overall quality of open data.
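
As an illustration of what such automated checks might look like, the sketch below scores a dataset's metadata against a few common publication best practices. The field names, the chosen checks and the example record are assumptions and do not reflect the actual evaluation model under development by NIC.br.

# Minimal sketch of automated checks against open data publication best
# practices; metadata fields and checks are illustrative assumptions.
MACHINE_READABLE = {"csv", "json", "xml", "geojson", "parquet"}

def evaluate_dataset(meta: dict) -> dict:
    return {
        "has_open_license": bool(meta.get("license")),
        "machine_readable_format": meta.get("format", "").lower() in MACHINE_READABLE,
        "has_description": len(meta.get("description", "")) > 0,
        "has_update_date": bool(meta.get("last_updated")),
    }

def score(meta: dict) -> float:
    checks = evaluate_dataset(meta)
    return sum(checks.values()) / len(checks)

example = {"title": "Municipal budget", "format": "CSV",
           "license": "CC-BY-4.0", "description": "", "last_updated": "2023-09-01"}
print(evaluate_dataset(example))  # flags the missing description
print(score(example))             # 0.75 for this invented record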

2. Brazil's Implementation of Evaluation Tools

The second takeaway message revealed that Brazil has initiated the implementation of such tools for evaluating and improving open data quality. The Brazilian government has recognized the importance of standardization and best practices in data publication and is taking proactive steps to address these issues.

Call-to-Action Messages

The talk concluded with two call-to-action messages aimed at governments and civil society:

1. Governments Worldwide Should Emulate Brazil's Example

The first call to action implores governments across the globe to follow Brazil's lead and implement open data evaluation models. Given the benefits of standardization and best practices in data publication, the speaker urges governments to prioritize developing and deploying tools for automated evaluation in their own open data initiatives. This step would improve data governance and lead to more efficient data sharing and utilization.

2. Raise Awareness among Civil Society

The second call to action aims at civil society organizations and advocates involved in open data. It encourages these stakeholders to become aware of the existence and workings of open data evaluation tools. By increasing awareness and understanding of these tools, civil society can actively participate in the process, supporting the implementation of standardized data practices and advocating for open data quality in their respective regions.

Conclusion

The lightning talk on "Open Data Evaluation Model in Brazilian Governmental Portals" at the Internet Governance Forum highlighted the critical need for standardized data publication practices and the role of automated evaluation tools in achieving this goal. The Brazilian Network Information Center's proactive efforts in implementing such tools serve as an inspiring example for other nations. The call-to-action messages emphasize the importance of global adoption and civil society involvement in furthering the cause of open data quality and standardization.

In an age where data drives innovation and policy decisions, standardization and evaluation tools ensure that open data fulfills its potential as a valuable resource for governments, organizations, and individuals worldwide. The lessons from this talk must be acknowledged and acted upon, setting a higher standard for open data globally.

IGF 2023 Open Forum #58 Child online safety: Industry engagement and regulation

Updated:
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

Online child sexual exploitation is a grave violation of human and child rights. Threats are continuously escalating and changing.

,

Self-regulatory measures are broadly perceived as inadequate. Significant regulatory and cultural changes are on the horizon, demanding greater responsibility and action from businesses.

Calls to Action

Governments and companies must remain vigilant and responsive to the ever-evolving threat landscape. Continued exchange of learning and experience in collaborative and co-regulatory models across different jurisdictions is necessary.

,

Companies should embed online child sexual abuse and exploitation into broader human rights due diligence, including impact assessments.

Session Report

IGF 2023 Open Forum #58: Child online safety – Industry engagement and regulation


Key Takeaways

1. Online child sexual exploitation is a grave violation of human and child rights. Threats are continuously escalating and changing.

2. Self-regulatory measures are broadly perceived as inadequate. Significant regulatory and cultural changes are on the horizon, demanding greater responsibility and action from businesses.

Call to Action

1. Governments and companies must remain vigilant and responsive to the ever-evolving threat landscape. Continued exchange of learning and experience in collaborative and co-regulatory models across different jurisdictions is necessary.

2. Companies should embed online child sexual abuse and exploitation into broader human rights due diligence, including impact assessments.

Context

This hybrid session, facilitated in person by Ms Afrooz Kaviani Johnson and online by Ms Josianne Galea Baron, explored different models of industry engagement and regulation to address online child sexual abuse and exploitation (CSEA).

Panel discussion

Ms Julie Inman Grant, eSafety Commissioner, Australia, discussed the suite of regulatory tools her office uses to combat online CSEA. Key among Australia’s tools is its complaints schemes, which facilitate the removal of harmful content to prevent re-traumatization and allow trend analysis to influence systemic change. Additionally, the Basic Online Safety Expectations, which detail the steps that social media and other online service providers must take to keep Australians safe, enable the Commissioner to demand transparency, complete with penalties. Australia’s tools also include mandatory codes for various sections of the online industry in relation to illegal and restricted content, including CSAM. The Commissioner emphasized that even the largest companies are not doing enough and stressed the need for global pressure on companies to enhance safety measures. ‘Safety by Design’ was highlighted as a fundamental systemic initiative to support industry to better protect and safeguard citizens online.

Mr Tatsuya Suzuki, Director, Child Safety Division of the Children and Families Agency, Japan, presented how the newly formed Children and Families Agency is working with the private sector to combat online CSEA. The national framework acknowledges the essential role of private sector voluntary actions to ensure children’s safety online. It respects the balance between eradicating harmful content and ensuring freedom of expression. The Agency’s strategies, detailed in the 2022 National Plan for the Prevention of Sex Crimes against Children, involve public-private collaborations. The Plan for Measures Concerning Child Sexual Exploitation 2022 outlines these government-led actions. In July 2023, a prevention package was presented to the Cabinet Office, emphasizing joint efforts with relevant ministries to address child exploitation. 

Mr Toshiaki Tateishi, Japan Internet Provider Association/ Internet Contents Safety Association, discussed Japan’s private sector initiatives against online CSEA. The Internet Content Safety Association (ICSA) compiles a list of websites known for child abuse material based on data from the National Police Agency and the Internet Hotline Centre. An independent committee reviews this data, and upon confirmation, the ICSA distributes a blocking list to ISPs and mobile network operators, preventing access to these sites. The Safer Internet Association (SIA) contributes by operating a hotline for reporting illegal content, conducting research, advising on policy, and leading educational initiatives. These associations coordinate with providers, both domestic and international, to reduce and remove illegal and harmful content.

Dr Albert Antwi-Boasiako, Director-General, Cyber Security Authority Republic of Ghana, emphasized Ghana’s approach to championing industry responsibility and innovation. Recognizing that self-regulation is insufficient, Ghana advocates for ‘collaborative regulation’ rather than traditional top-down mandates. This strategy acknowledges that companies often overlook the risks children face online. Ghana’s Cybersecurity Act mandates industry action to protect children, encompassing content blocking, removal, and filtering. This law requires further specification through a legislative instrument, which is currently being crafted in consultation with the private sector and civil society. The Act includes administrative and criminal penalties, crucial for enforcement in developing nations, and allows for fines to fund the regulatory institutions. Dr Antwi-Boasiako noted that success hinges on widespread awareness and understanding of the issues at stake.  

Mr Dunstan Allison-Hope, Vice President, Human Rights, BSR (Business for Social Responsibility) highlighted the critical role of human rights due diligence (HRDD), including impact assessments, in combating online CSEA. HRDD based on the UN Guiding Principles on Business and Human Rights (UNGPs) can form a key part of a company’s obligations to address online CSEA. The benefits of this approach include a comprehensive review of human rights impacts, special attention to vulnerable groups like children, and a structured framework for action, tailored to each company’s position in the technology stack. With regulations now echoing the UNGPs, voluntary measures are shifting to mandatory. He urged companies to embed children’s rights into their broader HRDD processes. While this significant regulatory change is especially prominent in Europe, he encouraged companies to take a global approach to achieve the desired child rights outcomes.

Interactive discussion

The discussion started on balancing children’s right to protection with their right to access information, especially age-appropriate and accurate sexual and reproductive health information. The conversation took cues from the UN Committee on the Rights of the Child, General comment No. 25 (2021). Although the internet was not built for children, they are significant users, leading to a call both to minimize harm and to amplify benefits. Australia’s consultations on approaches to age assurance spotlighted this need, pushing companies to look beyond age-gating. A human rights-based approach was emphasized to navigate tensions between human rights. Strategies like DNS blocking alone were deemed inadequate; holistic approaches, like Australia’s ‘3Ps’ model of Prevention, Protection, and Proactive systemic change, are crucial. One significant challenge lies in raising awareness and promoting help-seeking behaviours among children and young people.

Conclusion

Both regulators and companies, along with civil society, are currently navigating extremely challenging dilemmas. Whether through regulation, self-regulation, or ‘collaborative regulation’, there is a significant shift happening in the regulatory landscape. This shift presents an opportunity to firmly integrate the issue of online CSEA into these evolving processes.

Further resources

United Nations Children’s Fund (2022) ‘Legislating for the digital age: Global guide on improving legislative frameworks to protect children from online sexual exploitation and abuse’ UNICEF, New York.

 

IGF 2023 YCIG Advancing Youth Participation in IG: results from case study

Updated:
Key Takeaways:

Value of Inclusivity: The discussion also emphasized the importance of not just engaging youth who are already part of the community, but also newcomers and the benefits of involving a wider and more diverse youth population in shaping these sessions and discussions.

,

Collaborative Efforts: Collaboration and partnerships seem to be key themes. The discussion highlights collaborative efforts across various groups, such as the Internet Society Youth Standing Group and the Youth Coalition on Internet Governance.

Calls to Action

The Role of Youth: While youth are at the decision table, there is a need to move beyond this and consider them as co-collaborators and co-creators in Internet governance discussions.

,

Growing Youth Engagement: The conversation underscored a growing trend where young people are becoming increasingly involved in these discussions. African governments, in particular, are beginning to engage more with the youth, but there is a call for deeper involvement beyond just Day 0 events.

Session Report

The session captured a discussion related to Internet Governance Forums (IGFs) and youth participation, specifically in different regions like Africa and Latin America. Following are some insights and takeaways:

1. Diverse Regional Perspectives: The session presented various regional perspectives, from Latin America to Africa, on the state of youth engagement in Internet Governance.

2. Growing Youth Engagement: The conversation underscored a growing trend where young people are becoming increasingly involved in these discussions. African governments, in particular, are beginning to engage more with the youth, but there is a call for deeper involvement beyond just Day 0 events.

3. Collaborative Efforts: Collaboration and partnerships seem to be key themes. The discussion highlights collaborative efforts across various groups, such as the Internet Society Youth Standing Group and the Youth Coalition on Internet Governance.

4. Case Studies: Various case studies from different regions, such as Latin America and Africa, were discussed to illustrate the state of youth engagement in these areas. For example, how Youth IGF operates differently across various regions due to cultural, logistical, and governmental factors.

5. Challenges and Solutions: Challenges such as the need for a common reporting tool and the disparity between youth discussions and main session topics were brought up. Solutions like creating a common platform for reporting were suggested.

6. Youth-Led Initiatives: There are emerging youth-led IGF initiatives, such as the Youth IGF in Ethiopia. These initiatives highlight the growing momentum and importance of youth voices in Internet Governance discussions.

7. The Role of Youth: While youth are at the decision table, there is a need to move beyond this and consider them as co-collaborators and co-creators in Internet governance discussions.

8. Value of Inclusivity: The discussion also emphasized the importance of not just engaging youth who are already part of the community, but also newcomers and the benefits of involving a wider and more diverse youth population in shaping these sessions and discussions.

In summary, the session provided a glimpse into the dynamic and evolving role of youth in Internet Governance across different regions. There's a clear call for deeper youth involvement, collaborative efforts, and the creation of systems that ensure their voices are effectively incorporated into broader discussions and decisions.

IGF 2023 DC-BAS A Maturity Model to Support Trust in Blockchain Solutions

Updated:
AI & Emerging Technologies
Key Takeaways:
Benefits of a maturity model: it provides a common framework for assessing blockchains, supports trust in blockchain solutions, helps organizations identify areas for improvement, and facilitates communication and collaboration between stakeholders. Use cases include digital identity, banking and finance, digital assets, voting/elections, legal, supply chain, and others (e.g., healthcare, education). There is interest from parliamentarians, non-governmental organizations, and academia. Reported benefits from those who have conducted assessments of blockchain solutions based on a set of shared criteria: ensuring that blockchain solutions meet the needs of all stakeholders, reducing the risk of selecting inappropriate or inadequate blockchain solutions for specific use cases, and promoting the adoption of best practices in blockchain design and implementation. The session also set out the rationale for using a maturity model.
Calls to Action

Provide more details about opportunities for training and awareness on the Blockchain Maturity Model and the corresponding assessment methodology; share lessons learned and best practices; and involve key stakeholders and interested new parties in the periodic meetings of the IGF-DC-BAS, the collection of input/requirements/suggestions from representatives of multi-stakeholder groups, the development and validation of sector-specific supplements, and simulated assessments of blockchains.

Session Report

Dynamic Coalition on Blockchain Assurance and Standardization

 

Sessional Report: The IGF-Blockchain Assurance & Standardization, Panel Discussion on “A Maturity Model to Support Trust in Blockchain Solutions”.

Date of Session: 18 October 2023

Kyoto Conference Center, Room: WS 10 – Room I

Online Link: https://intgovforum.zoom.us/meeting/register/tJEucuihrT4pE9VXFZ6GWP2gQNOjl19VqgLQ

 

Introduction

The Dynamic Coalition on Blockchain Assurance and Standardization (IGF-DC-BAS) was established to connect, communicate, and collaborate with government leaders and stakeholders to use blockchain technology to improve public services.

More specifically, with the support of the Government Blockchain Association (GBA), the IGF-DC-BAS established a working group for International Organizations & Standards, supporting the UN-SG Global Digital Compact goals, including:

  • Ensure that everyone has access to the digital world.
  • Promote the use of digital technologies to achieve the Sustainable Development Goals.
  • Protect human rights and fundamental freedoms in the digital age.
  • Build trust in the digital world.

Outcome of the Session

Takeaways:

  • Benefits of a maturity model:
    • Provides a common framework for assessing blockchains.
    • Support trust in blockchain solutions.
    • Helps organizations identify areas for improvement.
    • Facilitates communication and collaboration between stakeholders.

 

  • Use cases:
    • Digital identity
    • Banking & Finance
    • Digital Assets
    • Voting/elections
    • Legal
    • Supply chain
    • Other (e.g., healthcare, education)

 

  • Interest from parliamentarians, non-governmental organizations, and academia:
    • Demonstrate the growing awareness on the importance of blockchain assessments.
    • Create opportunities for collaboration and knowledge sharing.

 

  • Reported benefits from those who have conducted assessments of blockchain solutions based on a set of shared criteria:
    • Ensure that blockchain solutions meet the needs of all stakeholders.
    • Reduce the risk of selecting inappropriate or inadequate blockchain solutions for their specific use cases.
    • Promote the adoption of best practices in blockchain design and implementation.

 

  • Rationale for using a maturity model:
    • A maturity model provides a structured, objective, repeatable, and technologically agnostic approach to assess blockchain solutions.
    • It helps organizations identify their current state of maturity and track their progress over time.
    • It can be used to benchmark blockchain solutions.
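
As a rough illustration of how such a structured, repeatable assessment could be expressed, the sketch below records maturity levels per assessment area and flags the weakest areas for improvement. The area names, the 1-5 scale and the "weakest area caps overall maturity" rule are assumptions for illustration and do not reproduce the Blockchain Maturity Model's actual criteria.

# Minimal sketch of a maturity-model style assessment; areas, scale and
# aggregation rule are illustrative assumptions only.
ASSESSMENT_AREAS = ["governance", "security", "privacy", "scalability",
                    "interoperability", "sustainability"]

def assess(levels: dict) -> dict:
    """levels: area -> maturity level from 1 (initial) to 5 (optimizing)."""
    missing = [a for a in ASSESSMENT_AREAS if a not in levels]
    if missing:
        raise ValueError(f"missing areas: {missing}")
    # Assumed rule: the weakest area caps the overall maturity level.
    overall = min(levels[a] for a in ASSESSMENT_AREAS)
    return {"overall_level": overall,
            "improvement_areas": [a for a in ASSESSMENT_AREAS if levels[a] == overall]}

# Benchmarking a hypothetical solution with invented scores:
solution_a = {"governance": 4, "security": 3, "privacy": 3, "scalability": 4,
              "interoperability": 2, "sustainability": 3}
print(assess(solution_a))  # overall_level 2, improvement area: interoperability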

 

Plan of action:

  • Provide more details about opportunities for training and awareness on the Blockchain Maturity Model and the corresponding assessment methodology.
  • Share lessons learned and best practices.
  • Involve key stakeholders and interested new parties in the:
    • Periodic meetings of the IGF-BAS
    • Collection of input/requirements/suggestions from representatives of multi-stakeholder groups.
    • Develop and validate sector-specific supplements.
    • Simulate the assessments of blockchains.

 Additional Activities of the IGF-DC-BAS

In addition to the DC Session, representatives of the IGF-DC-BAS participated in the “Free and Fair Voting Panel”, “Blockchain Assurance Panel”, “Internet for All Panel”, and “Blockchain in Healthcare Panel”.

 

During the 4 days of the conference, the IGF-DC-BAS Team held 24 individual meetings with government officials (parliamentarians from Uganda, Kenya, and Ghana), and representatives from media (Bloomberg), law firms, the private sector, and educational institutions.

 

The topics discussed included newly available functionalities in scalability of networks, secure identification, CBDC, voting, software supply chain security and general governance using zero knowledge, AI and blockchain technology.

 

 

IGF 2023 DC-Blockchain Implementation of the DAO Model Law:Challenges & Way Forward

Updated:
Key Takeaways:

DAOs are a global technology that knows no borders, and there should not be a rush to regulate this technology and stifle its growth. We are also starting to see the emergence of case law in relation to key issues such as liability, tort, fiduciary duties, etc. What would be needed is the use of sandboxing to allow DAOs to grow and deliver on their promise.

,

When looking at the development of legal frameworks in relation to DAOs, some of the procedural requirements already fit under existing legislation. For certain issues there may not be a need to develop de novo frameworks, but rather to address key issues such as the impact of joint and several liability on DAOs, which can stifle their development.

Calls to Action

There needs to be greater sensitization and discourse between DAO practitioners and governmental policy and law makers in order to remove misapprehensions about DAOs and to clarify how the technology works and its benefits. In addition, DAOs could and should be used as an effective tool in promoting global participatory democracy by institutions such as the IGF, and this should definitely be explored further.

Session Report

 

Session Report IGF 2023 DC-Blockchain Implementation of the DAO Model Law: Challenges & Way Forward


  1. Session Details

The DAO Model Law is a multistakeholder effort led by COALA (Coalition of Legal Automated Applications) to provide legal certainty for so-called ‘unregistered’ DAOs (i.e., DAOs that are not wrapped in a legal entity form) and their participants, and unlike other regulatory frameworks, accommodate flexibility for their unique features and further innovation. Since its development the Model Law has served as a precedential source in the development of legislation such as the DAO Acts in both Utah and New Hampshire, parliamentary discussions in Australia, and has also been referenced in the recent call for evidence by the UK Law Commission. The session seeks to take the discussion further from the session hosted at IGF 2022, to analyse how different legislators and policy makers are approaching the development of legal frameworks to govern DAOs and also outline lessons learnt as well as recommendations for the way forward as more jurisdictions express interest in regulating unregistered DAOs. The session will have great benefit for policy makers, governmental representatives, law makers, practitioners as well as DAOs in navigating the course of granting legal recognition and certainty and will address the critical aspects of inter alia governance, functional and regulatory equivalence, liability attribution and taxation of unregistered DAOs.

It is intended that the workshop will be conducted in hybrid format to accommodate onsite participation at IGF 2023 as well as online attendees within various jurisdictions who wish to contribute to the discussion on the implementation on the DAO Model Law. In this regard it is anticipated that the official IGF Online meeting platform will be utilized, and online participants will be able to post comments and also ask questions in relation to the content of the discussion.

  2. Panel Discussion

The presentation made during the panel discussion, and the ensuing conversation, centred on why there is a necessity to develop a DAO Model Law, the inherent advantages of DAOs, the primary principles of the DAO Model Law (viz. functional and regulatory equivalence), and an outline of the fundamental sections of the DAO Model Law.

The discussion then focussed on the next steps and the progress being made by various jurisdictions towards the implementation of regulatory frameworks. This involved taking a close look at the jurisdictions that have instituted incorporation options, such as Wyoming, Vermont and the Marshall Islands, as well as countries where the Model Law has been considered, reviewed or (partially) transposed, such as Australia (Bragg report, Senate of Australia), the United Kingdom (UK Law Commission DAO consultations), St. Helena, New Hampshire and Utah.

The panel then focussed on some of the challenges faced in garnering adoption by countries, which centred around the key sensitive issues of regulatory equivalence, privacy rights recognised by law (including privacy of remuneration), and taxation.

  3. Next Steps/Way Ahead

It was identified that there is further work that can be undertaken to refine the DAO Model Law, based on developments within the global sphere. As such, new taskforces will be convened to work on the key areas of Identity and Limited Liability, Privacy/Transparency, Taxation, as well as Technical Guarantees for Functional & Regulatory Equivalence and Updates.

  4. Key Session Takeaways

DAOs are a global technology that knows no borders, and there should not be a rush to regulate this technology and stifle its growth. We are also starting to see the emergence of case law in relation to key issues such as liability, tort, fiduciary duties, etc. What would be needed is the use of sandboxing to allow DAOs to grow and deliver on their promise.

When looking at the development of legal frameworks in relation to DAOs, some of the procedural requirements already fit under existing legislation. For certain issues there may not be a need to develop de novo frameworks, but rather to address key issues such as the impact of joint and several liability on DAOs, which can stifle their development.

There needs to be greater sensitization and discourse taking place between DAO practitioners and governmental policy and law makers in order to remove misapprehensions about DAOs and also clarify how the technology works and its benefits. In addition to which, DAOs could and should be used as an effective tool in promoting global participatory democracy by institutions such as the IGF and this should be definitely explored further.

---oOo---

IGF 2023 Lightning Talk #122 AI in the courts an opportunity for economic proceedings?

Updated:
AI & Emerging Technologies
Key Takeaways:
  • The use of AI in alternative dispute resolution will be of great benefit to business. Receiving from AI a predicted outcome and/or an assessment of the strength of a party's arguments and position, and thereby knowing the chances of winning a dispute, will reduce the burden on the courts. We should use AI to issue non-binding resolutions that guide a party on whether to take the case to court or, for example, to settle.
  • The implementation of AI in the judiciary is a universal and global issue. The differences between legal systems remain in the background. We should develop postulates and international legal and ethical standards for the use of AI in the judiciary.
Calls to Action
  • We expect local governance to support the judiciary in closing the technology gap between business needs and justice. We should aspire to cooperation between business and public authorities, but at the same time create clear and transparent rules for such cooperation. We must be aware of the temptation of private entities gaining access to citizens' data and attempting to manipulate court rulings using AI systems.
  • The implementation of AI in the courts should be progressive: in the first step we should start by using AI to perform routine, repetitive and time-consuming activities. As a second step, it would be good to implement solutions based on hybrid intelligence. While implementing AI-driven solutions we have to review carefully every activity processed in the court and analyze what can be replaced first.
Session Report

    The panel discussion titled "AI in the courts an opportunity for economic proceedings?" brought together industry experts who explored the implications, advantages, and challenges of integrating Artificial Intelligence (AI) into the judiciary. The session was moderated by Rafał Wieczerzak.

    Panelists and their Key Points:

    In her remarks, Anna Pietruszka primarily focused on how artificial intelligence can impact the efficiency of court proceedings, especially from a business perspective. She pointed out that introducing AI-based tools for straightforward, routine matters, such as making minor changes in business registers, could significantly speed up and simplify procedures. Anna also emphasized the need for modernizing communication within the judiciary. She suggested that while courts are an integral part of our system, their current communication methods are not aligned with modern realities. In her view, technologies like artificial intelligence can play a pivotal role in transforming these mechanisms to be more accessible and understandable to today's society.

    Gabriela Bar and Robert Sowiński highlighted the complexity of introducing AI into the judicial system. Gabriela focused on the ethical aspects of implementing AI. She underscored that trust in the system is crucial and that people need to believe that the technology is used fairly and transparently. Therefore, as she suggested, the optimal model would be Explainable Artificial Intelligence (XAI), which would be able to provide people with a logical justification for its decisions. Robert, on the other hand, cited the example of the Chinese judicial system where AI is already in use and pointed to the successes in the realm of alternative dispute resolution in the UK. However, he noted that this technology is not without risks, and we need to be aware of the potential consequences of its misuse.

    From a judge's perspective, Konrad Wasik shared his unique insights into the impact of artificial intelligence on the judiciary. He expressed concern over the burden of numerous administrative tasks that divert judges from their primary duty of adjudicating. In his opinion, artificial intelligence could significantly relieve courts of these routine tasks, allowing them to concentrate on more complex cases that require human judgment. Konrad also identified potential areas of AI application, suggesting that its integration into the judiciary holds immense potential, as long as it is introduced with due caution and an understanding of its limitations.

    Post-panel Activities:

    The session was not just an opportunity to gain insights from the panelists but also a platform for attendees to ask questions. The face-to-face interaction allowed for lively debates and provided a chance for legal professionals from various countries and continents to network, exchange experiences, and establish valuable contacts.

    Conclusion:

    The panel successfully addressed the multidimensional aspects of integrating AI into the judiciary, from efficiency and modernization to ethical considerations. The consensus was that while AI offers great potential, its implementation needs to be done thoughtfully, ethically, and in a phased manner.

    The panel concluded with the following recommendations:

    The implementation of AI in the judiciary is a universal and global issue. The differences between legal systems remain in the background. We should develop postulates and international legal and ethical standards for the use of AI in the judiciary.

    The use of AI in alternative dispute resolution will be of great benefit to business. Receiving from AI a predicted outcome and/or an assessment of the strength of a party's arguments and position, and thereby knowing the chances of winning a dispute, will reduce the burden on the courts. We should use AI to issue non-binding resolutions that guide a party on whether to take the case to court or, for example, to settle.

    The implementation of AI in the courts should be progressive: in the first step we should start by using AI to perform routine, repetitive and time-consuming activities. As a second step, it would be good to implement solutions based on hybrid intelligence. While implementing AI-driven solutions we have to review carefully every activity processed in the court and analyze what can be replaced first.

    We expect local governance to support the judiciary in closing the technology gap between business needs and justice. We should aspire to cooperation between business and public authorities, but at the same time create clear and transparent rules for such cooperation. We must be aware of the temptation of private entities gaining access to citizens' data and attempting to manipulate court rulings using AI systems.

     

    IGF 2023 WS #279 Sandboxes for Data Governance: Global Responsible Innovation

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    No sandbox will be the same and depending on who you ask the definition of a sandbox is different. This shouldn’t alarm stakeholders but rather fuel openness and enable sandboxes to be used as an anchor for policy prototyping


    Sandboxing is a spirit and can help actors share and understand a problem. This can clarify policy challenges or new tech applications and how to develop user safeguards.

    Calls to Action

    Regulators need to listen to different points of view. Building an effective sandbox is less about the skills and maturity of a regulator and more about regulators being allowed to engage purposefully with stakeholders.


    More experimentation and sharing of experiences need to be done in order to help unpack the opportunities and challenges of setting up sandboxes for data in a particular sector or regulatory environment.

    Session Report

    Mr. Axel Klaphake, GIZ Director, Economic and Social Development, Digitalisation, opened the panel by briefly introducing the topic, emphasizing the benefits of data for economic growth and social development, and then introducing the speakers present at the table as well as those who would be attending online. 

    The on-site moderator, Armando Guio, then gave a presentation on the current state of regulatory sandboxes to offer context for the upcoming conversation. He defined the regulatory sandbox as "a regulatory approach, typically summarized in writing and published, that allows live, time-bound testing of innovations under a regulator's oversight. Novel financial products, technologies, and business models can be tested under a set of rules, supervision requirements, and appropriate safeguards." This concept was attributed to the UN Secretary-General's Special Advocate for Inclusive Finance for Development. Mr. Guio also presented examples of use in countries such as Brazil, Colombia, Ethiopia, Germany, Kenya, and Lithuania. 

    As the first panelist's contribution, a video from ANPD, the Brazilian Data Protection Authority, which co-organized the panel, was broadcast. In it, Thiago Moraes emphasized the importance of fostering a dynamic discussion among all relevant stakeholders in order to deliberate on strategies that can pave the way for the development of sandbox initiatives. He also announced the opening of the call for contributions for ANPD's regulatory sandbox on AI and data protection, a crucial step forward in Brazil's journey toward responsible innovation. 

    Agne Vaiciukeviciute, Vice Minister of Transport and Communication of the Republic of Lithuania, highlighted her country's experience with regulatory sandboxes. The outcome has been considered a success, and this has generated more interest and investments in this area. They are currently exploring 5G technology and its capabilities in depth. 

    Denise Wong, from the Singapore data protection authority, IMDA, highlighted their experience and spoke about unlocking the potential of data through policy mechanisms developed in collaboration with industry, as a method to support companies and help them discover suitable safeguards and protections. She cited, among other benefits, the ability of sandboxes to reduce the time and effort required for technologies to be deployed, allowing enterprises to securely experiment with cutting-edge technologies that give them a competitive advantage. 

    Lorrayne Porciuncula, from the DataSphere Initiative, addressed the fact that the steps governments need to follow to successfully establish a regulatory sandbox vary depending on the national jurisdiction in which it is located, the institutional framework, and the time frame, among other factors. Therefore, it is important to demystify what sandboxes are and to show that they are not exclusively for sophisticated regulators. In fact, they are a way of engaging purposefully with stakeholders from the design phase onward and of building institutional trust with the private sector. 

    Kari Laumann, from the Norwegian DPA, presented the benefits of using sandboxes in her country. She listed as a good practice the experience of bringing firms into the dialogue prior to the launch of the sandbox, with questions about what they were interested in building when it comes to AI and data protection, algorithmic fairness, and data minimization. 

    Ololade Shyllon, from Meta, shared the private sector's perspective, saying that while the benefits of using sandboxes vary depending on the unique context of each project, in general, they help to reduce regulatory uncertainty, create a safe space for innovation, make adaptation faster, and build trust between regulators and the private sector. 

    The panel then proceeded with an online and in-person Q&A session. 

    Overall, the session brought out the following takeaways: 

    • It is critical to establish objective criteria and clear advantages for participants, such as certifications. Set highly specific use-case objectives as well. 

    • The sandbox is vital for mapping common problems that the public and the private sector would face when developing or deploying a technology. 

    • Bringing many stakeholders into the conversation can help to reduce regulatory capture. 

    • The resources needed to implement a sandbox may vary according to its goals and the skills and maturity of the regulator. 

    • Sharing experiences between countries is a great approach to learn about the many models available. 

    • Sandboxes can promote responsible data governance and AI innovation, creating a space where innovative ideas can flourish while respecting human rights, such as privacy and data protection. 

    IGF 2023 Networking Session #168 Advancing Open Science Globally: Challenges and Opportunities

    Updated:
    Data Governance & Trust
    Key Takeaways:
    During the discussion, two distinct perspectives on open science emerged. One emphasized the need to enhance the organization and standardization of scientific production, aiming at maximizing the value that can be derived from it. The second perspective highlighted the importance of broadening access to scientific discoveries and derived products, and of involving a broader range of individuals in defining scientific processes.

    It's essential to outline specific actions that can drive progress toward these goals, and the appropriate actions vary depending on which perspective is adopted.

    Calls to Action

    To maximize the value derived from scientific research, there should be a concerted effort by the private sector to standardize data related to scientific research and make this data widely available on the internet.


    To enhance accessibility to scientific results and resources and increase their social impact, it is crucial that governments reconsider existing intellectual property and patent models.

    Session Report

    Report on the Networking Session #168: "Advancing Open Science Globally: Challenges and Opportunities"

    The session was fascinating as it contrasted two different perspectives on the goals and paths of Open Science. While researchers and advocates from Latin America highlighted the importance of involving a broader range of individuals in the governance of science and of broadening free and open access to scientific discoveries and derived products in order to maximize its social impact, participants from the private sector and the global north emphasized the need to enhance the organization and standardization of scientific production, aiming at maximizing the value that can be derived from it.

    Henrique Xavier highlighted the persistent issue of paywalls to scientific publications. Moreover, while government and academic data are often open, data from private companies in areas like social media and artificial intelligence remain closed. Opening such data sources is essential for research on misinformation and AI governance, both discussed at the Internet Governance Forum.

    Sarita Albagli reinforced that paywalls hinder access to knowledge, particularly in the global south. She highlighted that Open Science is not only a more cost-effective model than closed science but also addresses the issue of knowledge access, preventing the loss of valuable resources. As a concrete example of a successful program, she mentioned the Brazilian bibliographic database SciELO.

    She raised the requirement for Open Science to address citizens' needs and the importance of involving citizens in research about issues that affect them. She also mentioned the risk of Open Washing, where companies direct Open Science to practices that allow them to profit, which could disproportionately affect the global south by making its research subordinated to private foreign interests.

    Carolina Botero emphasized that Open Science should grant access to publications and the knowledge generated by scientific research, such as vaccines during the pandemic. Rethinking patent laws is crucial to achieving this. Carolina emphasized the importance of addressing power imbalances, ensuring that all countries can utilize data for research purposes by adjusting legal frameworks to support global access.

    Kazuhiro Hayashi emphasized that Open Science goes beyond Open Access. It encompasses providing access to both data and research methods. He stressed the importance of international cooperation in making this data and knowledge accessible to everyone. He said Japan was implementing Open Access and Open Data policies for publicly funded research.

    Vint Cerf (present in the audience) mentioned Google Scholar and Schema.org as tools that help organize and standardize scientific knowledge. He raised the need to document experiment designs and the challenge of accessing old data, methods, and analyses after computer systems evolved. He questioned who should fund Open Science infrastructure and suggested we design a viable business model that could encourage companies to invest in these initiatives.

    Vint Cerf highlighted the importance of creating a document stating the desirable properties of an Open Science ecosystem. He suggested creating a vast database to ease data processing and analysis. Cerf emphasized the importance of its interoperability so the database could migrate in case of a lack of support from the host institution. He recommended organizations such as UNESCO and the International Science Council as potential allies in advancing Open Science.

    Two practical conclusions surfaced from the discussion. In order to maximize the value derived from scientific research, there should be a concerted effort by the global community, including the private sector, to standardize data and metadata related to scientific research and make this data widely available on the internet. To enhance accessibility to scientific results and resources and enhance their social impact, governments must reconsider existing intellectual property, copyright, and patent models.

    IGF 2023 Town Hall #170 Multistakeholder platform regulation and the Global South

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Multistakeholderism is still largely considered the best way to construct consensus, ensuring results that encompass different stakeholders. However, it was highlighted that it needs improvements to guarantee meaningful participation from all stakeholders, especially civil society and the technical community, which often have difficulty participating in national or international forums due to lack of resources and time.

    Calls to Action

    Guarantee more resources to civil society and the technical community to increase participation in international governance forums. Adopt bottom-up regulation, especially in technical standards such as AI, ensuring global south countries' participation and involving the technical community and private sector in rule formulation. The private sector should ensure openness and access to data in order to enable meaningful participation from other sectors.

    Session Report

    Organized by the Brazilian Internet Steering Committee (CGI.br), the Town Hall focused on delving into different digital platform regulation governance models through the exchange of global south countries' practices, and on discussing the role of State and non-State stakeholders vis-à-vis the value of the Internet Governance multistakeholder model. The session was moderated by Henrique Faulhaber, counselor of the Brazilian Internet Steering Committee and representative of the private sector, who opened the session by presenting the role of multistakeholderism in Brazilian Internet governance, as well as the role it may play in platform regulation, highlighting the particularities of regulation and the institutional difficulties that may occur in global south countries. 

    Marielza Oliveira, from UNESCO, presented a more general approach to the multistakeholder model, highlighting its importance for building consensus involving multiple stakeholders; however, the model must overcome challenges to be inclusive, diverse and human rights-based, and to account for the power imbalances created by big tech companies.

    Sunil Abraham, from Facebook India, on the other hand, highlighted the importance of coordinating all forms of regulation – state regulation, co-regulation and self-regulation with standards-setting organizations. This could be leveraged in platform regulation by giving room to bottom-up knowledge and norm setting, especially with global south participation, in a way that would ensure future-proof regulation. 

    Miriam Wimmer, director at the Brazilian DPA, also agreed on the importance of co-regulation, highlighting the complex institutional setting in Brazil and the difficulties in defining the scope of regulation and which authorities would be involved in a theme as broad as platform regulation. The director also emphasized that the multistakeholder approach is not incompatible with multilateralism. 

    Joanne D Cunha, researcher at the Centre for Communication Governance at NLU Delhi, pointed out the challenges global south countries face in regulating platforms and in participating in global forums and international processes, especially due to difficulties with resources. 

    Finally, Renata Ávila from the Open Knowledge Foundation stressed the inequalities between different realities, in particular considering small global south countries that may lack not only platform regulation laws but also data protection laws. She also highlighted the importance of platforms not taking advantage of that situation, and of ensuring transparency and a general framework that can be replicated. 

    The Q&A session addressed the arrangements between the different regulation models that may be applied to platform regulation, and the challenges of cooperation between multiple authorities. It was also pointed out how platforms with transnational reach keep track of many jurisdictions and may replicate new mechanisms across different countries. Finally, the speakers highlighted the importance of south-south cooperation, of holding platforms accountable, and of an expanded multistakeholder model with more diverse participation. 

    We can highlight two key takeaways. Multistakeholderism is still largely considered the best way to construct consensus, ensuring results that encompass different stakeholders. However, it was pointed out that it needs improvements to guarantee meaningful participation from all stakeholders, especially civil society and the technical community, which often have difficulty participating in national or international forums due to, among other reasons, lack of resources and time. Therefore, governance of platform regulation needs to consider the differences in institutional arrangements and the necessity of equalizing the power imbalances that large platforms may cause.  

    Calls to action mentioned: 

    • Guarantee more resources to civil society and the technical community to increase participation in international governance forums. 
    • Adopt bottom-up regulation, especially in technical standards such as AI, ensuring global south countries' participation. 
    • Ensure openness and access to data in order to enable meaningful participation. 
    IGF 2023 WS #311 Global Digital Value Chain: Africa’s Status and Way Forward

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    The session outlined that the GDVC has become increasingly complex and interconnected, with organizations and industries across the world collaborating and competing in the digital space, which has transformed the way businesses operate and how consumers access goods and services. It also noted that Africa lags in the GDVC as a result of low capacity, inadequate technology to harness the available resources, and a poor appetite for indigenous solutions.


    There is an issue of indigenous funds availability: the bulk of the available funding comes from foreign venture capitalists with conditions and interests that keep Africa digitally dependent. Hence the need for indigenous funding for digital independence in African countries. In the same vein, speakers also commented on new approaches to digital infrastructure in the areas of electricity, telecommunications, and data centers.

    Calls to Action

    Governments, with the support of other stakeholders, should develop clear and supportive policies and regulations that prioritize local content and promote its integration into various sectors, such as energy, mining, manufacturing, and technology. African governments (the Nigeria Communications Commission - NCC, the National IT Development Agency - NITDA and their counterparts across Africa) should explore massive investment in digital infrastructure.


    The private sector and other stakeholder groups should develop a crowdfunding mechanism to which indigenous investors and individuals could contribute. This would allow Africans to provide digital interventions that are controlled by and benefit Africa. A decisive and deliberate effort should be made to enhance capacity and positively engage the populace in inventing solutions to the continent's unique problems.

    Session Report

    AfICTA- Africa ICT Alliance Workshop Report

    IGF 2023 WS #311 Global Digital Value Chain: Africa’s Status and Way Forward, Thursday, 12th October, 2023, KYOTO, JAPAN

    Organized by: AfICTA-Africa ICT Alliance

    Overview of the Session: The discussion underscored the intricate nature of the Global Digital Value Chain (GDVC), where global organizations collaborate and compete digitally, reshaping businesses and consumer access to goods and services. Africa's lag in GDVC was attributed to limited capacity, inadequate technology to utilize available resources, and a preference for non-indigenous solutions. Challenges regarding GDVC's impact on Africa were discussed, emphasizing the continent's rich mineral and human resources for internet infrastructure. However, concerns were raised about retaining value within Africa. The session questioned Africa's exclusion in the value chain, emphasizing the need for increased value, consensus building, policy development, and active engagement in Internet Governance Forums. It highlighted Africa's consumption-centric approach and stressed the urgency of transitioning to a production-based economy. Critical questions were posed about Africa's ability to achieve sustainable development goals, accompanied by strategies to shift from consumption to production. The session emphasized the importance of creating a roadmap for capacity development, establishing production facilities, and enabling active participation in the global digital value chain.

    The onsite moderator, Dr. Jimson Olufuye, Principal Consultant at Kontemporary Konsulting Ltd, Nigeria, Founder/Fmr. Chair and Chair of the Advisory Council, AfICTA, provided background information about AfICTA, an advocacy group for African ICT-driven businesses. AfICTA was founded in 2012 with six (6) member nations and has now grown to over 40 member African nations. Underscoring the importance of the workshop theme concerning Africa's participation in the global value chain, he introduced the panelists, the online moderator and facilitators, and invited the Chair of AfICTA, Mr. Thabo Mashegoane, to deliver the opening remarks.

    Speakers

    1. Mr. Bimbo Abioye, President of the Institute of Software Practitioners of Nigeria (ISPON) and Group Managing Director of Fintrak Software, Nigeria (Private Sector, Africa)
    2. Dr. Kossi Amessinou, Chief of the World Bank Division, Benin Republic (Government, Africa)
    3. Dr. Melissa Sassi, representing the private sector in North America and serving as Partner & Chief Evangelist, P3 Network (Private sector, North America)
    4. Mrs. Mary Uduma, West Africa IGF Coordinator (Civil society)
    5. Professor Joanna Kulesza from the University of Lodz, Poland (Academic community, Europe)
    6. Ms. Rachael Shitanda, AfICTA Vice-Chair, East Africa and Executive Member of Computer Society of Kenya (Private sector, Africa)
    7. Chief Toyin Oloniteru, CEO, DAPT - Data Analytics Privacy Technology; (Private sector, Africa)
    8. Dr. Chidi Diugwu, Deputy Director, New Media and Information Security, Nigeria Communications Commission (Government, Africa)
    9. Dr. Ben Ewah, Director of e-Government, NITDA - National IT Development Agency; (Government, Africa)

     Moderators

    1. Dr. Jimson Olufuye, Principal Consultant at Kontemporary Konsulting Ltd, Nigeria, and Founder/Fmr chair and chair of the advisory council, AfICTA. (Onsite Moderator)
    2. Mr. Inye Kemabonta, National Coordinator of AfICTA and CEO of Tech Law Development (Online Facilitator)

    Policy Questions to the Speakers

    The moderators posed the following questions to the speakers for their responses

    1. Considering that Africa is rated as the continent with the least contribution to the GDVC, as evidenced by the dilemma experienced at the advent of COVID-19: a. How inclusive is the GDVC, and as a concerned stakeholder, what initiatives or actions are required to correct this abnormal trend? b. What soft areas could Africa use to penetrate the GDVC, and what benefits would the continent derive?
    2. Africa is home to major raw materials of production yet makes little or no contribution to the GDVC. What could have gone wrong, and what are the remedies?

    Mr. Bimbo Abioye, President of the Institute of Software Practitioners of Nigeria, addressed the questions by highlighting the challenges faced by Africa in the Global Digital Value Chain (GDVC). He pointed out the lack of ownership and digital slavery in the continent's ecosystem. To address these issues, he emphasized the importance of enhancing policy frameworks, skills development, capacity development, research and development, and access to finance. Additionally, he stressed the need for infrastructural development and the creation of an enabling business environment across Africa. In his final submission, he envisaged the government leveraging existing solutions and existing capacity.

    Dr. Kossi Amessinou, Chief of the World Bank Division in Benin Republic highlighted the significant internet consumption from foreign countries but acknowledged a growing collective awareness in Africa, especially post-COVID. Despite this, challenges persist in the region. He proposed several solutions:

    • Massive investment in digital infrastructure: Dr. Kossi emphasized the need for substantial investments in digital infrastructure, especially from the private sector. He stressed the importance of broadband expansion into rural areas and advocated for new approaches to infrastructural development, including discussions on establishing data centers in Africa.
    • Internet exchange points: He suggested building Internet exchange points across Africa to enhance local networks.
    • Regulation: Dr. Kossi stressed the necessity of regulating the digital sector in Africa to ensure its growth and stability.
    • Digital literacy: Addressing the challenge of digital illiteracy, he recommended initiatives focused on enhancing digital literacy skills in the population.

    In his final submission, he envisaged capacity development and harnessing solar energy for Africa's own power.

    Dr. Ben Ewah, NITDA, emphasized the importance of understanding the existing structure of the labor market, especially the significant informal sector. He highlighted the need to identify specific areas where technology can address existing needs effectively. Focusing on interventions that cater for the majority of these needs will yield quick results for African markets. He stressed the government's role in recognizing the shift in resource utilization and harnessing of these changes for national development.

    Dr. Chidi Diugwu from NCC emphasized the vital role of Human Capacity Development, particularly concerning the inclusion of raw materials. He highlighted NCC's commitment to promoting research and development in the academic realm, with a focus on strengthening research grants for students in the field of artificial intelligence, given the transformative nature of the digital age. Dr. Chidi stressed the importance of identifying young talents, fostering their development, and increasing the number of skilled individuals to enhance the Human Development Index.

    Ms. Mary Uduma, West Africa IGF Coordinator representing the civil society emphasized the importance of Africa's grassroots participation in the Global Digital Value Chain (GDVC). She highlighted the discussions held at the IGF, both regionally and nationally, and stressed the need for Africa to be actively engaged in the value chain. Mary Uduma expressed concerns about Africa's dependence on the Western world during the COVID-19 pandemic and advocated for developing local businesses and voices within the continent. She praised Africa's achievements in the fin-tech sector, citing examples like Konga and Jumia. Mary Uduma called for the protection of human rights, advocating for standards and data safety. She questioned the location of data and emphasized the importance of housing data within Africa rather than relying solely on cloud services.

    Dr. Melissa Sassi from the Private Sector in North America highlighted the significance of tech entrepreneurship for Africa's economic growth. She emphasized the need to foster a culture of digital entrepreneurship, which plays a crucial role in Africa's capacity and economic development. Dr. Sassi stressed the importance of encouraging innovation, financial stability, practical skills, collaboration, and engagement. She advocated for integrating entrepreneurship culture into tertiary education and scaling up capacity-development efforts.

    Chief Toyin Oloniteru, CEO D.A.P.T, highlighted the importance of unbiased self-appraisal regarding Africa's strengths and progress. He emphasized the need to build on existing strengths and advance further. Chief Toyin pointed out the significant business expansions in Africa, citing examples like MTN and the banking sector, which have expanded beyond the continent. He stressed the need for behavioral modification, advocating for crowdfunding and crowdsourcing within Africa's resources. Chief Toyin emphasized the value of funding initiatives through crowdsourcing, promoting self-reliance and reducing dependency on external sources. The younger generation needs to be structured and guided to be focused on diverse opportunities available for skills development towards sustainable growth and development in Africa.

    Ms. Rachael Shitanda, Executive Member of Computer Society of Kenya, highlighted the need for Africa to leverage its resources for economic development and internet inclusivity. She emphasized the importance of developing local content, focusing on government initiatives. She shared perspectives with Mr. Bimbo Abioye on finance, creating enabling environments, local networks, and policy regulation. Ms. Shitanda stressed the importance of breaking silos, merging skills, and strengthening capital investment. She urged the continent to safeguard its data and collaborate effectively for growth and development.
     
    Prof. Joanna Kulesza, representing academia, emphasized the need for comprehensive and well-aligned regulations, coordinated and reliable capacity development, addressing policy challenges in Africa's global value chain, and aligning policies with the sustainable development goals. She stressed the importance of civil society engagement, consistent policy development, raising awareness about satellite broadband, and resolving data-related questions. Prof. Kulesza highlighted the role of governments in ensuring increased African participation in the digital value chain.

    She further emphasized the need to address policy challenges within the digital value chain, particularly in the African region. She highlighted the importance of aligning with the sustainable development goals through secure and stable internet access, enabling the development of technology based on accessible opportunities. Prof. Kulesza stressed the importance of awareness and recommended strengthening civil society engagement. She advocated for policy development through a multistakeholder approach, emphasizing that Internet access is a human right, and urged governments to consider jurisdiction, equipment ownership, and internet shutdown protocols during crises. Regarding data collection processes, she underscored the necessity of government involvement to enhance Africa's participation in the global value chain.

    Summary Recommendations 

    1. Governments, along with the support from various stakeholders, should formulate clear and supportive policies prioritizing local content integration in sectors like energy, mining, manufacturing, and technology. African governments, including entities like Nigeria Communications Commission (NCC) and National IT Development Agency (NITDA), should invest significantly in digital infrastructure.
    2. The private sector and other stakeholders should establish a crowdfunding mechanism where indigenous investors and individuals can contribute. This approach enables Africans to create digital interventions that are locally controlled and beneficial to the continent. A deliberate effort should be made to enhance capacity and engage the public in inventing solutions for our unique challenges.
    3. Africa needs a holistic approach to enhance its participation in the Global Digital Value Chain (GDVC). This includes investing in digital infrastructure, promoting indigenous solutions, and fostering digital entrepreneurship. Governments and private sectors should collaborate to develop clear policies, encourage local content integration, and invest in digital infrastructure. Additionally, there should be a focus on human capacity development, especially in emerging technologies like artificial intelligence. Identifying and nurturing talents among the youth is crucial for long-term sustainable growth.
    4. It's essential to mentor and empower the younger generation in the rapidly evolving digital landscape.
    5. African nations must enhance capacity development comprehensively across various sectors.
    IGF 2023 WS #495 Next-Gen Education: Harnessing Generative AI

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Digital empowerment is a priority, and GenAI in particular has a lot of potential in academic curricula for young minds. By enabling access via audio inputs, translation tools, etc., GenAI can amplify an individual's potential and improve learning outcomes. But there are some academic concerns, such as the accepted levels of plagiarism and the impact on critical thinking.


    There is a strong need for cybersecurity measures: use of GenAI by youth and school students will require strong security and data privacy measures, as it is prone to misuse. Privacy is a quintessential concern for a young person. By setting standards, sharing global best practices, etc., we can successfully merge GenAI into education. It is a multifaceted challenge, but the benefits outweigh the challenge.

    Calls to Action

    Policymakers need to take an inclusive approach that can make the use of GenAI more globally diverse and inclusive of ethnicities, races, and local contexts. Diverse datasets and newer user-centric approaches that go beyond Euro-centric models, with privacy by design, are welcome.


    Educators need to collaborate with the technical community, app developers, cybersecurity experts, etc., to ideate on more inclusive GenAI.

    Session Report

    Link to the report (PDF Version): https://drive.google.com/file/d/16QC9suOkn4ZBNzpkta8xZZl-Gg5KW5dM/view?…

    IGF 2023 WS #495 Next-Gen Education: Harnessing Generative AI

    Ihita G. welcomed everyone and set the context by highlighting the relevance of Generative AI (GenAI) in education, underlining its use in personalized learning. She added that the use of GenAI further increases the importance of critical thinking and digital literacy, and invited interventions from the audience, which primarily consisted of concerns around plagiarism in academic work.

    She introduced the speakers and invited Ms. Dunola Oladapo, a representative of an intergovernmental organization, to explore GenAI's role in education. Ms. Oladapo argued that digital empowerment is a priority for the youth and that COVID-19 was a defining moment in history. Digital access is not uniform: about 55% of youth in Africa do not have access to the Internet. The impact is multifold – lack of affordable devices and high internet costs are some of the challenges that restrict young people from participating in a connected future with others.

    She shared ITU's Generation Connect platform's work on AI for Good, which focuses on how young people are connecting with AI and explores different ways in which the power of technology can be harnessed for a connected digital future.

    Ihita asked Connie Man Hei Siu and Osei Manu Kagyah (civil society) for their opinions on the responsible and ethical use of generative AI technologies in educational settings, including algorithms, and on the gaps that need to be addressed. Osei said it is an important conversation that is long overdue, given that the industry is racing ahead of academia. He emphasized the need for a human-centric approach and a mutual platform to address issues of accountability, bias and security of generative AI in formal education.

    Connie shared her insights and highlighted the importance of exploring GenAI, as it can knock down long-standing barriers like language and make learning more inclusive via translations, audio inputs, etc. GenAI can also help students by managing schedules, increasing learning outputs, connecting with peers and reducing the stress of multi-tasking. She then explored the challenges, emphasizing the potential misuse of the technology:

    • higher degrees of reliance could hinder critical thinking skills of students
    • given that it requires a lot of data, it can compromise users' privacy
    • AI systems can inherit biases

    She underlined the need to promote responsible usage and vigilance since technology isn’t inherently good or bad.

    Ihita invited the audience to intervene; the interventions featured concerns about the right kind of regulation and the need for an academic dialogue amongst PhD scholars and mentors on the extent of GenAI use. Educators in the audience used the example of the calculator, a case where there was a perceived risk of hindering critical abilities but which further amplified mathematical capabilities. 

    Ihita posed another question to the speakers - How can policymakers collaborate with relevant stakeholders to ensure that teaching and learning processes are enhanced while sustaining creativity, critical thinking, and problem-solving?

    Connie responded that policymakers require a thorough understanding of the technology, especially of how to leverage GenAI's power while safeguarding it. She suggested that it is important for policymakers to collaborate with stakeholders like students, teachers and academic institutions to understand the challenges. Further, to address the challenges of data protection and security infrastructure, educators can team up with teacher training institutes and tech companies. She highlighted that setting standards and sharing best practices globally can lead to successfully merging GenAI into education. It is a multifaceted challenge, but the benefits outweigh the challenge.

    Online moderator Adisa agreed with Connie and added that the curriculum needs to evolve to address real-world challenges. Ihita said more assessment is needed with respect to the models and asked Osei to respond. He argued that there is a need to decolonize the designs, since the deployment of AI tools has reflected bias.

    Ihita posed the final policy question: How can policymakers ensure that the use of generative AI by youth in education is inclusive, age-appropriate and aligned with their developmental needs and abilities? She invited audience interventions on how this concern can be approached.

    A teacher from Finland expressed concern about who will create an inclusive model for children, as the approach of educators is different from that of a profit-earning company, and the goals of inclusivity and protection need to be aligned with learning. Another teacher from Japan added that GenAI models are US-centric and there is a need to explore local contexts. Another audience member added that it is not just about access to technology but also about knowledge; e.g., the domestic context becomes important for understanding what kind of data pool is being referred to. He referred to UNESCO's open data report, whose open science recommendation underlines knowledge sharing in the sense of the Global Commons.

    Ihita approached the speakers for their final comments. Osei emphasized the need for more interventions in different languages to move away from Euro-centric approaches. Connie suggested the need for stronger data protection laws and added that, with critical digital literacy skills, young people will be better able to navigate digital spaces. Policymakers need to take an inclusivity-driven approach, taking personalized learning experiences, linguistic diversity, etc., into consideration. Ihita concluded that young people need to take a stand and contribute to decision-making processes themselves to make the best of GenAI. She thanked everyone for joining.

     
       

     

    IGF 2023 Town Hall #61 Beyond development: connectivity as human rights enabler

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Robert Pepper highlighted that it is possible to identify a shift in the lack of connectivity from a coverage gap to a usage gap. This means that Internet coverage has recently improved, and the main issue now lies in Internet use by people who live in regions that already have coverage.


    Promises of universalizing Internet access through 5G have not yet materialized, and some sectors are already discussing 6G technology. Internet fees, such as the fair share proposal, may lead to a context of fragmentation, considering that only a few companies would be able to provide a globally connected infrastructure. Zero-rating agreements give an unfair advantage to large companies.

    Calls to Action

    We call on governments and intergovernmental agencies to reinforce the relevance of universal and meaningful connectivity as a fundamental enabler of human rights, and to elaborate on this relevance for the protection, promotion, and enjoyment of civil and political rights, in addition to economic and social development.

    We ask policy makers and governments to stand against the imposition of direct payment obligations to the benefit of a few telecom operators. The current system has proven its resilience and ability to evolve alongside the Internet. Considering the roles of small, community and non-profit operators in providing complementary connectivity for rural areas and minorities, beyond sole reliance on incumbent infrastructure providers, will sustainably address the digital divide.
    Session Report

    Beyond development: connectivity as human rights enabler

    October 2023

    by the session organizers: Raquel Rennó, Lucs Teixeira, and Nathan Paschoalini

    Introduction

    The 2030 Agenda for Sustainable Development explicitly recognises that the spread of information and communication technologies has the power to bridge the digital divide; as such, governments are increasingly addressing connectivity expansion as part of their efforts to meet the Sustainable Development Goals. However, framing connectivity solely as a facilitator for social and economic growth is limiting. These approaches ultimately privilege the most powerful telecommunication industries that can afford international agreements; if all connectivity is provided by the same few global incumbent telecommunication operators, there will be very little diversity in technologies, content, and little space for dissident voices.

    To expand on this issue and bring in different views, ARTICLE 19 organized a Town Hall session during the 18th edition of the Internet Governance Forum (IGF2023) in Kyoto, Japan. It brought together regulators, members from the private sector, the technical community and civil society to discuss the following questions:

    • Would it be possible to re-center connectivity as a human rights enabler, moving away from the development-only approach?
    • How can PPPs and cross-national agreements help solve the digital divide while allowing diversity in ISP technologies and improving innovative policies and techniques for spectrum management, instead of just promoting one specific industry?

    Moderated by ARTICLE 19 Program Officer Raquel Renno Nunes, the session included Jane Coffin (civil society), Thomas Lohninger (epicenter.works, civil society), Robert Pepper (Meta, private sector) and Nathalia Lobo (Ministry of Communication of Brazil, public sector). As online moderator, Lucs Teixeira (ARTICLE 19 Internet of Rights fellow, civil society) coordinated participants in the Zoom room; Nathan Paschoalini (Data Privacy Brazil, civil society) was the rapporteur.

    The full recording of the Town Hall session, with captions, is available at IGF’s YouTube channel: https://www.youtube.com/watch?v=MwlgWVXYFuo

    Discussion

    Before the discussion, the on-site moderator, Raquel Renno, stated that this Town Hall should be a space for open discussion of connectivity issues, enabling different views on the subject, considering its importance as a human rights enabler. The invited speakers then presented their views on the questions raised above, with the opportunity for participation extended both to the on-site audience and to remote participants.

    After the panellists' interventions, there was an open mic round, in which members of the audience and the panellists could debate the topics covered at the beginning of the panel.

    We split the points raised into three interrelated main problems.

    Problem 1: Building infrastructure

    Robert Pepper highlighted the fact that in the last few years it has been possible to identify a shift from a “coverage gap” to a “usage gap”. In this sense, more than 2 billion people could be online but are not. He mentioned a project they conducted in sub-Saharan countries to understand the reasons why the majority of the population in the region is not online. The study identified three main reasons: a) affordability of devices and of monthly services; b) lack of digital literacy; and c) lack of locally relevant content online. Another issue identified was the lack of electricity. He asked how to bring these people online, considering that Internet access should be recognized as a human right and a human rights enabler. 

    Jane Coffin, in her turn, spoke about how difficult it was to take fiber from Zambia to South Africa, mentioning border negotiations between the countries, the presence of a historical bridge in the way, and a swarm of bees as obstacles during a deployment that took more than a year. The example highlights the difficulties related to Internet infrastructure and the barriers to building it in a cross-border region. According to Coffin, it takes a multistakeholder approach to improve Internet access and to strengthen the dialogue with governments, so they can understand what has to be done to speed up Internet connectivity.

    She also mentioned that community networks bring a diversification of perspectives to last mile connectivity. Such networks can provide a type of Internet connection that is different from those provided by bigger ISPs, which do not always have an economic interest in connecting people in remote or otherwise impractical places. She stated that building network infrastructure is usually very expensive, but there are alternative ways to build Internet infrastructure, especially when focused on smaller networks, and that different organizations can work together to achieve and improve Internet connectivity for underserved publics.

    Thomas Lohninger acknowledged that the promises related to 5G, especially regarding connectivity, have not yet materialized; despite this, discussions about 6G can already be identified.

    Nathalia Lobo presented the Brazilian context on the issues related to the universalization of Internet access in the country, due to the continental dimensions of Brazil. She mentioned that the Brazilian 5G auction was an opportunity to establish obligations related to the universalization of Internet access to the companies that won the process.

    She also presented a Brazilian public policy named Connected North, designed to strengthen connectivity in the northern region of Brazil through eight information highways composed of twelve thousand kilometers of optical fiber laid on the bed of the Amazon River. Lobo also mentioned that public-private partnerships play a key role in the accomplishment and maintenance of the Connected North project. 

    Problem 2: Fair share proposals

    Thomas Lohninger addressed issues related to network fees, such as the fair share debate, which is not new, dating back to the telephony era. According to Thomas, small ISPs have revealed that they fear for their ability to compete and to connect to other networks if such a proposal is approved, due to economic barriers. This, Thomas said, might lead to a fragmented Internet, where only large ISPs would have the financial resources to remain connected to the global network.

    Robert Pepper reinforced this critical view on network fees, explaining that their whole rationale is based on the architecture and economics of the “Telecom Termination Monopoly”. With past network architectures, the distance and duration of connections increased costs substantially; since 4G arrived, with “essentially flat IP networks even in mobile”, the cost of a connection is a step function, and the duration or volume of data exchanged does not increase costs for the telcos unless connections peak simultaneously.

    Problem 3: Zero-rating practices and Net Neutrality

    Thomas Lohninger mentioned issues related to zero rating such as Meta’s Free Basics, taking the Colombian Constitutional Court case as an example. He stated that zero rating contracts violate net neutrality, whose defense is deeply associated with accomplishing meaningful connectivity.

    Regarding this, Robert Pepper mentioned a Meta project called “Discover”, which he described as an evolution of Free Basics: instead of limiting access to a selection of allowed websites, it limits all web pages to text, filtering out images and video. Pepper presented this as a solution that is not perfect but may serve as an “introduction to the Internet”, and as a way for people on prepaid packages to keep using the network, even if degraded, after their data package runs out.

    Key takeaways

    1. Some of the panelists see a shift in the lack of connectivity from a coverage gap to a usage gap. This means that Internet coverage has recently improved, and the main issue now lies in Internet use by people who live in regions that already have coverage;
       
    2. On the other hand, some consider the lack of infrastructure still an important issue to address. It is conventional wisdom that building infrastructure is expensive; however, there are strategies to lower this cost, which require a strong multistakeholder approach.
       
    3. The mismatch between a business model aiming for continuous improvement, always faster and better, and the reality faced by many in the Global Majority. The promises of universalizing Internet access through 5G have not yet materialized, and some sectors are already discussing 6G technology. New regulatory proposals, such as the fair share or tech toll, may lead to a context of Internet fragmentation, where only the largest content providers would be able to accommodate the demands, and even then only in the strongest markets where the investment would see some return. The second concern was related to zero-rating agreements, which give an unfair advantage to large companies.

    Next steps:

    Based on the discussion, the session organizers see diverging interests from the private sector and pressure on the public sector, which in some cases can override the needs of the people in the most fragile conditions. It would be important to:

    • Have governments and intergovernmental agencies reinforce the relevance of universal and meaningful connectivity as a fundamental enabler of human rights, and elaborate on this relevance for the protection, promotion, and enjoyment of civil and political rights, in addition to economic and social development.
       
    • Call on regulators to adopt a human rights-based approach to national, regional, and local connectivity expansion and improvement plans, considering the roles of small, community, and non-profit operators in providing complementary connectivity for rural areas and minorities, beyond sole reliance on incumbent telecom infrastructure providers, to sustainably address the digital divide.

    IGF 2023 Open Forum #54 The Challenges of Data Governance in a Multilateral World

    Updated:
    Data Governance & Trust
    Key Takeaways:

    Multilateralism and international organizations evolve as answers to cross-border challenges, enabling interstate cooperation to discover appropriate solutions

    A greater understanding of national points of view might be an extremely helpful tool for international conversations

    Calls to Action

    It is vital for multilateral spaces to explore potential paths toward agreement on a standard vocabulary for fundamental issues connected to Internet and data governance.

    Further experience sharing may be beneficial in finding effective approaches that others might replicate.

    Session Report

    Organized by the Laboratory of Public Policy and Internet (LAPIN) and the Brazilian Data Protection Authority (ANPD), the panel focused on debating how the Data Governance theme has been discussed in the G7 and G20 forums. The session was moderated by José Renato (LAPIN).

     

    Mr. Yoichi Iida, from the Ministry of Internal Affairs and Communications of Japan, opened the presentations by highlighting Japan's active participation in defending inclusivity and a multistakeholder approach in those forums. Iida showed how Japan has focused on the free flow of information across borders, proposing its Data Free Flow with Trust (DFFT) initiative at the G20 in 2019. Mr. Iida also noted that the G20 uses "human centricity" as its main terminology, whereas the G7 uses "democracy". He regards data flows and AI governance as the most important themes on the agenda, with challenges around privacy protection, interoperability, and the protection of human rights. The Japanese Government recognizes the diversity of jurisdictions and approaches, but considers that frameworks should be as coherent and interoperable as possible.

    In the second part, Gaurav Sharma brought the Indian perspective on embracing technology and digitalization in the G7 and G20, as well as India's Data Protection Bill. He affirmed the need to focus on norms able to operate across sectors. For him, digital strategies should be transparent, inclusive, secure, and conducive to the Sustainable Development Goals. To close his presentation, he called for more participation from the Global South.

    Alexandre talked about the labor behind data production and microworkers. He mentioned the need for attention to the cloud economy and the so-called gatekeepers, and how a multi-level approach can benefit discussions around data governance. To conclude, Alexandre noted the importance of bringing digital rights organizations together with traditional social movements, including in G20 negotiations.

    Veronica Arroyo mentioned that the discussion about data governance depends on where it takes place, because of the differences between jurisdictions. Some have very strong enforcement mechanisms, while in other cases the country follows a more flexible approach. Thus, the design of data governance depends heavily on the policies and priorities that the country has. She presented how the SDGs can be the core issue and how commonalities across frameworks help to meet those goals.

    Luciano Mazza explained that there are different approaches to data protection and data flows, and that within the G20 there is a certain stabilization, in conceptual terms, of the discussion on data free flow with trust. Every country tries to bring its own issues and realities to the theme and weigh which ones rank higher in importance. Mazza explained that one of the reasons this was not discussed more directly is that, when the G20 began discussing data governance, there was an effort to balance the debate on the free flow of data against potential concerns or constraints raised from a more development-oriented perspective. He noted that two approaches are involved that are complementary in a way but not fully articulated in the G20 debate: data free flow with trust and data for development. From a developing-country perspective, the subject may not feel fully mature, but in Brazil's case it is recognized as crucial and of utmost importance. He presented four priorities: universal and meaningful connectivity, artificial intelligence, e-government, and information integrity. He affirmed that he does not envision the discussion on data governance becoming a full-front debate in those forums.

    Miriam Wimmer addressed some of the challenges posed by data governance and how it has been explored, including the different approaches proposed by multilateral organizations such as the G7, the G20, the UN, and the OECD over the past years. She pointed out that we have been observing many discussions and proposals, manifested in declarations, roadmaps, and agendas based on concepts such as data free flow with trust. One of the main challenges, in her perspective, is to understand how these different proposals interconnect, in which aspects they complement each other, and in which cases they create tensions or gaps. Wimmer affirmed that another relevant aspect is how to make sure that all important stakeholders participate in these discussions, understanding that when we discuss the flow of data across borders, we are debating not only the interests of companies or states but also the rights of individuals. The discussion should take into account multiple perspectives, based not only on the different approaches countries may have towards data protection, but also on the different interests of the various stakeholders affected by this discussion. She mentioned the current debate in Brazil on international data transfers, in which the authority faces the challenge of ensuring that the mechanisms to be established are interoperable and will allow for the protection of the fundamental right to data protection, regardless of where the data is actually located.

     

    IGF 2023 WS #198 All hands on deck to connect the next billions

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:
    The session underscored that connectivity holds immense transformative potential, yet it grapples with several impediments. These include the challenges of rural connectivity and the staggering negative impact of the digital gender gap. Technological and policy innovation is pivotal in closing the connectivity gap. Better metrics, realistic policy targets and collaborative, whole-of-society approaches are essential to overcome the challenges.
    Calls to Action
    Governments must invest in infrastructure, accommodate rising demand for connectivity, and prioritize skills development. Stakeholders should work together to address barriers and bridge the digital divide through a partnership-driven approach. Other stakeholders should continue innovating to expand connectivity and form partnerships. A collaborative, whole-of-society approach is essential to achieve and reap the benefits of global connectivity.
    Session Report

    Introduction and key takeaways

    Meaningful connectivity fuels innovation, competitiveness, and sustainable growth for all, but despite numerous private ventures, intergovernmental agreements and multistakeholder commitments to advance universal meaningful connectivity, 2.6 billion people worldwide remain unconnected. To bridge this gap, it is important to go beyond traditional approaches and encourage innovation, cooperation, and flexible solutions to connect the next billions.

    The session brought together policy and technology experts to discuss concrete approaches to scale up innovative solutions for universal meaningful connectivity, while fostering investment and cross-sector partnerships to unlock the potential of ICTs and digital technologies.

    Against this backdrop, the speakers addressed the multifaceted challenges that impede widespread connectivity, including the challenges of bringing remote rural areas online and addressing the digital gender gap, which if closed, is estimated to have the potential to add 3 trillion USD to global GDP annually. Furthermore, the discussions underscored the need for the development of more robust metrics for measuring inclusivity and the setting of realistic policy targets to connect underserved populations effectively. The crucial role of governments in enabling meaningful connectivity took centre stage, with a call for greater investment, readiness to meet the burgeoning demand for connectivity, and the imperative of nurturing the required skills.

    The speakers delved deep into strategies aimed at dismantling the various barriers obstructing universal connectivity, with a particular emphasis on the role of the private sector in bridging the coverage and usage gaps through innovative approaches.

    Overall, consensus emerged among the speakers, who highlighted the importance of a collaborative partnership approach in delivering universal, meaningful connectivity. It was emphasised that all stakeholders, including governments, private sector, civil society and academia, must come together to collectively address the challenge of connecting the next billions through a holistic whole-of-society, or ecosystem, approach. This unified effort is seen as the most effective means of ensuring that no one is left behind in the global digital transformation.

    Call to action

    Governments have a significant role to play by dedicating resources to develop essential infrastructure, catering to the rising demand for connectivity, and helping people to build digital skills. Collaboration among diverse stakeholders is vital to effectively bridge the digital divide, with a strong emphasis on fostering partnerships. Realising the benefits of global connectivity requires a comprehensive, whole-of-society approach. Moreover, it's imperative that all stakeholders can continue to innovate, which requires an enabling policy environment. When developing policy and regulatory frameworks, it is essential to recognise the value of the entire communication and digital services landscape. These frameworks should be unbiased, adaptable to different technologies, and supportive of innovative business models, diverse technologies, standards, and system architectures.

    Further reading

    International Chamber of Commerce (ICC), White Paper on Delivering Universal Meaningful Connectivity

    International Chamber of Commerce (ICC), Paper on Digitalisation for People, Planet and Prosperity

    IGF 2023 WS #197 Operationalizing data free flow with trust

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:
    The session highlighted the importance of horizontal, interoperable, and technologically neutral policy frameworks. Specific policy measures discussed included impact assessments, stakeholder commitments to prevent data fragmentation, support for encryption, and clear guidelines for government access to private sector data.
    Calls to Action
    Panellists emphasized that all stakeholders should prioritize international cooperation, common principles, and inclusive discussions to operationalize data free flow with trust and preserve the open, unfragmented essence of the Internet. Panellists called on policymakers to take a global approach that transcends regional boundaries, fostering trust, security, respecting human rights, and promoting innovation and economic growth.
    Session Report

    Introduction and key takeaways

    Global data flows are an essential engine for innovation, driving competitiveness, growth and enabling socioeconomic empowerment. However, mistrust in cross-border data transfers continues to grow due to concerns that national security, privacy or economic safety could be compromised if data transcends borders, leading to restrictive policies that deepen Internet fragmentation.

    The session touched on notable developments in promoting Data Free Flow with Trust (DFFT) through the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities, and the G7 establishment of the Institutional Arrangement for Partnership (IAP). Against this background, speakers took stock of the innovative and empowering role of trusted global data flows, stressing that data only has value when it is accessible, useful, and able to be transferred. The discussion shifted to the challenges and risks posed by unilateral data governance policies and data localisation measures, including negative economic consequences and potential harm to human rights through surveillance.

    To address these challenges, participants highlighted the importance of horizontal, interoperable, and technologically neutral policy frameworks – noting that sound policies that enable data flows, while addressing legitimate concerns and aligning with international human rights law standards around security, privacy and commercially sensitive information, have the potential to create trust across the entire digital ecosystem. They emphasised the value of partnerships and inclusivity in policymaking, advocating for global approaches to data flows.

    Specific technical and policy measures referenced during the deliberations included data impact assessments, the commitment of stakeholders to prevent data fragmentation, support for encryption mechanisms as an enabler of trust and security, and the formulation of clear guidelines governing government access to private sector data. There was also the recognition that leveraging data for economic growth requires investment in wider infrastructures, including knowledge, skills and capacities to harness data. The session underscored the need for international cooperation and the creation of common principles, as well as inclusive dialogue on how to operationalise data free flow with trust and safeguard the open, unfragmented nature of the Internet.

    Call to action

    The session advocated for the establishment of universally accepted guiding principles concerning government access to personal data. Any principles should reflect and uphold international human rights law and standards. The panellists urged the adoption of a comprehensive global strategy that goes beyond regional confines, with a specific focus on building trust, enhancing security, upholding human rights and spurring innovation and economic growth. Common guiding principles could ultimately lead to effective collaborative efforts, involving all stakeholders and nations, to cultivate approaches that are interoperable and legally sound. This collaboration should promote the free exchange and responsible use of data in a manner that garners trust and upholds human rights, including high privacy standards. Policymakers were encouraged to endorse cross-border data flows while ensuring that users' human rights are protected through transparent safeguards that are implemented in a manner that is non-discriminatory and does not create barriers to trade.

    Further reading

    ICC Policy Primer on Non-Personal Data

    ICC White Paper on Trusted Government Access to Personal Data Held by the Private Sector

    IGF 2023 Lightning Talk #102 The International Legal Dimension of ICTs

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The legal aspects of ICT are only debated on the political level and not on the technical level.

    There is no mechanism for cooperation at the multistakeholder level that disputes this agenda.

    Calls to Action

    The unilateral standards need to be developed.

    The discussion on the development of a universal legally binding document needs to be continued.

    Session Report

    The final session focused mainly on the effective application of international law in the telematics and ICT dimension. Given differences of opinion on how existing international law should be applied in cyberspace, and on whether a legally binding international mechanism needs to be developed, the session speaker looked at this question through the lens of different regions of the world. It was noted that the discussion on this matter should certainly be continued. The problem is that legal mechanisms never catch up with the latest technology, so governments need to advance the development of an appropriate legal framework that will, without doubt, cover this fourth dimension.

    IGF 2023 WS #349 Searching for Standards: The Global Competition to Govern AI

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Different jurisdictions and organizations are taking diverse approaches to AI governance. These regulatory processes are critically important insofar as they will likely establish our framework for engaging with AI’s risks and harms for the coming generation. There is a pressing need to move expeditiously, as well as to be careful and thoughtful in how new legal frameworks are set.

    Learning from previous internet governance experiences is crucial. While discussions around how to prevent AI from inflicting harm are important, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market.

    Calls to Action

    A global governance mechanism that coordinates and ensures compatibility and interoperability between different layers of regulation will be needed. But successful regulation at a national level is indispensable. National governments will be responsible for setting up the institutions and laws needed for AI governance. Regional organizations, state-level regulation, and industry associations are all influential components of this ecosystem.

    While industry standards are important, public-oriented regulation and a wider set of policy interventions are needed. As for self-assessment and risk-assessment mechanisms, while they may become critical components of some AI regulatory structures, they may not succeed without sufficient enforcement to ensure that they are treated as more than just a box-checking exercise.

    Session Report

    Searching for Standards: The Global Competition to Govern AI

    IGF session 349 Workshop Room 1

    A global competition to govern AI is underway as different jurisdictions and organizations are pursuing diverse approaches ranging from principles-based, soft law to formal regulations and hard law. While Global North governments have dominated the early debate around standards, the importance of inclusive governance necessitates that the Global Majority also assumes a role at the center of the discussion.

    A global survey reveals diverse approaches. The European Union's AI Act is the most prominent process, but it is far from the only model available. Singapore is amending existing laws and deploying tools to help companies police themselves, while Japan is combining soft-law mechanisms with some hard-law initiatives in specific sectors based on a risk-assessment approach. The US is considering a similar approach as it begins to create the frameworks for a future AI governance structure. In the Global South, several countries in Latin America and Africa are actively engaging in the AI discussion, with a growing interest in a hard-law approach in the latter.

    These regulatory processes are critically important insofar as they will likely establish our framework for engaging with AI’s risks and harms for the coming generation. There is a pressing need to move expeditiously, as well as to be careful and thoughtful in how new legal frameworks are set.

    Different layers of regulation and governance strategies will be critical for creating a framework that can address AI’s risks and harms. First, because AI is a cross-border form of human interaction, a global governance mechanism will be needed to coordinate and ensure compatibility and interoperability between different layers of regulation. While this global layer could take the form of a soft law (declaration or recommendation), a more binding document (e.g., a convention) could also be considered as an effective way to coordinate AI regulation globally. From UNESCO’s perspective, a UN-led effort is critical, not only because AI requires a global multi-lateral forum for governance but also because unregulated AI could undermine other priorities like sustainable development and gender equality.

    Despite the need for global governance, successful regulation at the national level is essential. Ultimately, national governments are responsible for setting up the institutions and enacting and enforcing laws and regulations needed for AI governance.  Regional organizations, state-level regulation, and industry associations are all influential components of this ecosystem.

    At the same time, industry standards may be the most common form of regulatory intervention in practice. In such a context, the industry should consider developing responsible AI as part of their corporate social responsibility or environmental social governance practices - including the implementation of guidelines or principles on AI’s uses and development, codes of conduct, or R&D guidelines, since the way in which companies develop and use AI will have a huge impact on society as a whole.

    However, while it is important to raise industry standards and to involve companies in the regulatory process, we need to understand the incentives that drive these companies to work with AI, which is primarily to monetize human attention and to replace human labor. For that reason, industry standards should be complemented by public-oriented regulation, and a wider set of policy interventions.

    As for self-assessment and risk-assessment mechanisms, while they may become critical components of some AI regulatory structures, they may not succeed without sufficient enforcement to ensure that they are treated as more than just a box-checking exercise. It is also important to keep in mind that different approaches may only be relevant to specific subsets of AI, such as generative or decision-making AI systems.

    Small countries will face unique challenges in implementing effective AI governance. Small nations that regulate too quickly could end up pushing innovation elsewhere. These countries could establish their role in AI governance if they strategize and work together with like-minded initiatives or systems. While the deployment and design of AI are happening in the largest countries, we should be aware that AI will also be heavily used in other parts of the world. Focusing on regulating not only the creation but also the use of AI applications will be key to the success of AI regulatory and governance experiences in small countries. Over the past decades, machine learning research and application have moved from public to private hands. This may be a problem, especially for small countries, as it shortens the time from an idea to a deployed application while limiting the ability of governments to restrict potentially harmful behavior.

    Learning from previous Internet governance experiences is crucial to AI governance. While we usually think about AI as if it is a brand-new thing, we need to think about its components and break down what exactly we mean by AI, including infrastructure data, cloud computing, computational power, as well as decision-making.

    We need to consider the impact of market power on AI governance, given that AI trends towards monopoly (large data, lots of computational power, advanced chips, etc.). While the discussions around how to prevent AI from inflicting harm are important, and issues of preventing exploitation are necessary, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market and various parts of the AI tech stack. AI governance  should focus on breaking down the various components of AI - such as data, computation power, cloud services, and applications - to redress monopolistic practices  and crack down on anti-competitive practices. This includes confronting consolidation in the cloud market and exploring public options. Regulators could also examine the possibility of forcing the handful of big tech firms that are providing the leading AI models to divest cloud businesses or eliminate the conflict of interest that incentivizes them to self-preference their own AI models over those of rivals.

    Another valuable lesson comes from the early regulation of the Internet in terms of copyright and freedom of expression. We need to think about to what extent the modeling of personal data protection laws and the current debate on platform liability should influence the debate on the regulation of AI’s potential harms. The first generation of Internet regulation left us with much stricter enforcement of intellectual property rights than enforcement of privacy rights, a legacy of early prioritization of the harms that were deemed most urgent decades ago, but which persists to this day. This should be instructive about the need to be deliberate and careful in selecting how harms are understood and prioritized in the current phase of AI regulation, as these technologies continue to proliferate.

     

    IGF 2023 WS #196 Evolving AI, evolving governance: from principles to action

    Updated:
    AI & Emerging Technologies
    Key Takeaways:
    The session discussed existing AI guidelines, principles and policies. Speakers shared lessons learned from their development, adoption and implementation. They stressed the need for comprehensive, inclusive, interoperable and enabling policies that help harness AI’s developmental and socio-economic benefits, operationalize globally shared values and remain flexible enough to be adapted to local specificities and cultural contexts.
    Calls to Action
    Set comprehensive, inclusive and interoperable AI policies by meaningfully involving all stakeholders across all levels of the AI policy ecosystem: responsible development, governance, regulation and capacity building.
    Session Report

    Introduction and key takeaways

    AI, as a general-purpose technology, carries the potential to enhance productivity and foster innovative solutions across a wide spectrum of sectors, ranging from healthcare, transportation, education, and agriculture, among others. However, its design, development, and deployment introduce challenges, especially regarding the role of humans, transparency, and inclusivity. Left unaddressed, these risks can hamper innovation and progress, jeopardising the benefits of AI deployment, while undermining the crucial trust required for the widespread adoption of AI technologies.

    Against this context, the session convened a diverse panel of speakers who explored the current state of play in developing AI governance frameworks. The speakers recognised the progress of international efforts to guide the ethical and trustworthy development and deployment of AI. Notable examples referenced included the OECD's AI Principles, UNESCO's recommendations on AI ethics, declarations from the G7 and G20, the EU's AI Act, the NIST AI Risk Management Framework, ongoing efforts at the Council of Europe to draft a convention on AI with a focus on human rights, democracy, and the rule of law, the African Union's endeavours to draft an AI continental strategy for Africa, and a plethora of principles and guidelines advanced by various stakeholders.

    As AI continues to evolve, panellists suggested the need to harness its full potential for socioeconomic development, while ensuring alignment with globally shared values and principles that prioritise equality, transparency, accountability, fairness, reliability, privacy, and a human-centric approach. The panellists agreed that achieving this equilibrium will necessitate international cooperation on a multistakeholder and multilateral level. A key takeaway was the necessity for capacity building to enhance policymakers' awareness and understanding of how AI works and how it impacts society.

    The session recognised, among others, the merits of self-regulatory initiatives and voluntary commitments from industry, applauding their agility and effectiveness in advancing responsible AI development. The discussions advocated for interoperability of approaches to governance and suggested that any policy and regulatory framework must be adaptable and grounded in universally shared principles. This approach was seen as vital to navigate the ever-evolving technology landscape and to accommodate the unique demands of various local contexts and socio-cultural nuances.

    Overall, comprehensive, inclusive, and interoperable AI policies were recommended, involving all stakeholders across the AI policy ecosystem to promote responsible development, governance, regulation, and capacity building.

    Call to action

    There was a resounding call for comprehensive, inclusive, and interoperable AI policies. Such policies, drawing upon the collective expertise of all stakeholders within the AI policy ecosystem, can foster responsible development and effective governance of AI, as these technologies continue to evolve. This holistic approach would pave the way for a more responsible and sustainable AI landscape.

    IGF 2023 Day 0 Event #133 Aligning priorities for a shared vision on digital policy

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:
    This session emphasized linkages between different areas of digital policy, from the fundamentals of connectivity, to the governance of data, to understanding and mitigating the risks raised by AI. The panel stressed the importance of cross-ecosystem collaboration and the value of partnerships between the public and private sector, with one panellist summarising the conversation by saying that all stakeholders are part of the same ecosystem.
    Session Report

    Introduction and key takeaways

    Our world has gone digital, transforming industries and economies globally. While this digital shift offers innovation and sustainable development opportunities, unilateral policies and governance can deepen inequalities and disrupt global economies, eroding trust in digital technologies. Numerous global organizations, involving multiple stakeholders, are dedicated to connecting divergent policy approaches and striving for worldwide, adaptable solutions. Their mission is to comprehend and regulate evolving technologies and leverage their potential for sustainable, inclusive socioeconomic progress.

    The session brought together government and industry representatives to discuss mutual priorities for advancing sustainable development through partnerships, as described in Goal 17 of the SDGs. It was arranged around three topics: AI and global AI governance, cross-border data flows and global data governance, and connectivity and digitalisation for development.

    The session started with panellists sharing views on the governance of AI. The conversation opened with the United States’ approach to addressing risks, both long-term risks (safety, security, threats to human existence) and short-term risks raised by current uses of AI (privacy, discrimination, disinformation, labour market). This includes securing voluntary commitments from leading companies. In turn, speakers stressed the need for domestic efforts to align with international initiatives like the Global Partnership on AI, the OECD, and the G7 Hiroshima Process – highlighting the importance of multistakeholder spaces of collaboration.

    The conversation then moved onto how the private sector is addressing those risks, with panellists highlighting the need for a robust governance framework, while running through some of the practical measures companies take to address AI risks. In addition, panellists suggested that the world is looking to both policymakers and businesses to respond to those risks, as action needs to be accelerated. In particular, panellists suggested that action was needed on three simultaneous fronts: global harmonised principles, standards and voluntary measures, and concrete regulation on a national level.

    The next segment of the session covered data governance, with panellists discussing how to create a world where data benefits everyone, including the challenge of aligning data governance with economic development. Other panellists highlighted private sector efforts for data free flow with trust and advocated for principles, privacy protection, and investment-friendly policies. The discussion underscored the importance of inclusive data governance to support a global digital economy.

    In the final segment of the session, speakers discussed connectivity and digitalisation for development. Government panellists emphasised the need for a multistakeholder approach in shaping the global connectivity policy agenda. Other panellists highlighted private sector efforts, suggesting that to meet ambitious connectivity goals we need greater investment and an enabling policy environment. Panellists also reflected on changing market dynamics and their impact on affordability and choice for consumers. The panel stressed the importance of cross-ecosystem collaboration, with one panellist summarising the conversation by saying that all stakeholders are part of the same ecosystem and rely on one another to connect everyone, everywhere.

    Ultimately, this workshop highlighted that there are many areas where governments and the private sector are forging great partnerships to resolve fundamental questions about how to govern digital technologies.

    Call to action

    The panellists underscored the need for cross-ecosystem development and an approach to policymaking which appreciates the interconnections and dependencies between different areas of digital policy. There was a consensus on the importance of securing voluntary commitments to address the risks associated with AI while ensuring the development of globally harmonised principles. With regards to data governance, the discussion emphasised the creation of data governance frameworks that are inclusive, transparent and build on trust.  The speakers also agreed on the importance of the multistakeholder approach in shaping global policy related to digital technologies. Finally, speakers underlined the need for increased investment in connectivity, the establishment of an enabling policy environment, and the promotion of cross-ecosystem collaboration to connect everyone, everywhere.

    IGF 2023 WS #465 International multistakeholder cooperation for AI standards

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Initiatives like the AI Standards Hub highlight the importance of bringing together expertise from across academic institutions, national standards bodies, and national measurement institutes for unlocking the potential of standards as effective AI governance tools underpinned by multi-stakeholder processes. It is key for such initiatives to link up, identify synergies, and pursue opportunities to coordinate efforts across countries.

    Increased international networking across civil society, academia, the technical community, industry, and regulators/government is critical for addressing capacity building, promoting participation from all stakeholder groups, and advancing global alignment in the field of AI standardisation. Efforts aimed at individual stakeholder groups have an important role to address the needs of groups currently underrepresented in AI standardisation.

    Calls to Action

    MAG should actively consider what the IGF can do to advance the promotion and collaboration on globally recognised AI standards (including technical measurement standards).

    Civil society, academia, the technical community, industry, regulators, and government should actively engage with AI standards initiatives, such as the AI Standards Hub, designed to advance multi-stakeholder input in AI standardisation.

    Session Report

    The session was dedicated to exploring the role that multistakeholder participation and international cooperation must play to unlock the potential of standards as effective AI governance tools and innovation enablers around the world. The workshop followed a three-part structure. The first part presented the work of the AI Standards Hub, a UK initiative dedicated to building a diverse community around AI standards through knowledge sharing, capacity building, and world-leading research. The second segment featured a panel of four speakers, bringing in diverse perspectives from across different stakeholder groups and geographic regions. The final part of the workshop centred on gathering questions and comments from the audience participating in-person and remotely via online platforms.

    Segment 1: Introduction to the AI Standards Hub. The workshop started with the introduction of the AI Standards Hub, a joint UK initiative led by The Alan Turing Institute, the British Standards Institution (BSI), and the National Physical Laboratory (NPL). Dr Matilda Rhode, the AI and Cyber Security Sector Lead at BSI, began by introducing the mission of the Hub, explained the significance of standards for the evolution of the AI ecosystem, and provided a brief overview of standards development processes. Dr Florian Ostmann, the Head of AI Governance and Regulatory Innovation at the Alan Turing Institute, addressed the importance of stakeholder diversity in AI standardisation and provided a snapshot of the Hub’s work across its four pillars – (1) AI standards observatory, (2) community and collaboration, (3) knowledge and training, and (4) research and analysis. Finally, Sundeep Bhandari, the Head of Digital Innovation at NPL, discussed international collaborations pursued by the Hub with organisations such as the OECD, NIST and SCC, and outlined future collaboration opportunities for building international stakeholder networks, conducting collaborative research, and developing shared resources on AI standards.

    Segment 2: Panel discussion. Nikita Bhangu, the Head of Digital Standards policy in the UK government's Department for Science, Innovation and Technology (DSIT), started off the panel discussion by providing an overview of the UK government’s policy approach to standards in the context of AI. Referring to the recent AI white paper, Ms Bhangu highlighted the important role that standards, and other non-regulatory governance mechanisms and assurance techniques, can play in creating a robust set of tools for advancing responsible AI. Elaborating on the complexity of the standardisation ecosystem, she noted there are many barriers that stakeholders face in meaningfully engaging with AI standards and that it is vital for governments to support diverse stakeholder participation in standards development processes. Reflecting on DSIT’s policy thinking that led to the creation of the AI Standards Hub, Ms Bhangu noted that key aims guiding the initiative were to increase adoption and awareness of standards, create synergies between AI governance and standards, and provide practical tools for stakeholders to engage with the AI standards ecosystem.

    Following this, the international panel took turns to discuss the most important initiatives in AI standardisation aimed at advancing multistakeholder participation, addressed questions on emerging stakeholder needs and challenges in different parts of the world, and discussed the importance of international collaboration on AI standards.

    Ashley Casovan, the Executive Director of the Responsible AI Institute, provided insights on Canada’s AI and Data Governance Standardization Collaborative from the perspective of civil society. She explained that the initiative aims to bring together multiple stakeholders to reflect on AI standardisation needs across different contexts and use cases. Wan Sie Lee, the Director for Data-Driven Tech at Singapore’s Infocomm Media Development Authority (IMDA), stressed that there is a widespread recognition of the importance of international cooperation around AI standards in Singapore. This is exemplified by Singapore’s active engagement in ISO processes and close collaborations with other countries. Elaborating on the city-state’s efforts to achieve international alignment on AI standards, Ms Lee pointed to Singapore’s AI Verify initiative, which closely aligns with NIST’s recently published Risk Management Framework. Aurelie Jacquet, Principal Research Consultant on Responsible AI for CSIRO-Data61, highlighted several Australian initiatives centred on advancing responsible AI, including Australia’s AI Standards Roadmap, the work of the National AI Centre and Responsible AI Network, and the development of the NSW AI assurance framework. These initiatives are dedicated to developing education programmes around AI standards, strengthening the role of standards in AI governance, and leveraging existing standards to provide assurance of AI systems in the public sector and beyond.

    Moving on to the topic of stakeholder needs and challenges, Nikita Bhangu pointed to the lack of available resources and dedicated standards expertise within SMEs, civil society, and governments, which often leads to these groups being underrepresented in AI standards development processes. Ashley Casovan highlighted similar challenges in Canada, where a lack of resources in government teams is hindering the process of analysing the information collected by the Collaborative. Ms Casovan also pointed to the efforts of the Canadian Collaborative to include perspectives from all domains of civil society, as well as indigenous groups, to ensure that their input is taken into consideration when finding solutions to harms posed by AI. Wan Sie Lee noted that the Singaporean government is trying to address the challenge of limited resources by focusing on areas where it can provide the most value to the global conversation, such as tooling and testing. Furthermore, to improve stakeholder diversity, Singapore is making an active effort to include voices from industry in its policy approaches. Finally, Aurelie Jacquet addressed the complexity of the standardisation ecosystem and the challenges stakeholders face in understanding standards development processes. To address this challenge, she added, experts in Australia have focused on drafting white papers and guidance documents to help organisations understand how these processes work.

    Talking about priorities for international cooperation, the panellists stressed that understanding the approaches taken by other countries is essential to avoiding duplication of work, building synergies, and understanding what kinds of coordination efforts are required. For this reason, multilateral fora like the OECD and the IGF make for very important platforms. Additionally, initiatives like the AI Standards Hub were highlighted as important avenues for building networks internationally, identifying shared goals and challenges across different stakeholder groups, and jointly devising strategies to build an inclusive environment around AI standards.

    Segment 3: Audience Q&A. The final segment of the workshop provided an opportunity for attendees to ask questions, share their perspectives, and get additional input from the speakers. The session heard from the Coordinator of the Internet Standards, Security and Safety Coalition at the IGF, who stressed the importance of using standards developed by the technical community outside of government-recognised standards development organisations to inform national policies on AI. They suggested reaching out to the technical community in places like the IETF or IEEE and aligning on key areas of AI standardisation. One of the online participants highlighted the value of further exploring strategies for increasing SME engagement in AI standards development. They proposed that this subject could be considered as a potential topic for inclusion in EuroDIG, Europe’s regional IGF, taking place in Vilnius on 17-19 June 2024. The session also heard from an audience member representing Consumers International, who emphasised the value of consumer organisations in ensuring responsible AI development, since they represent the end users of these products and services. They stressed that consumer organisations possess a wealth of evidence to support standards development and can help ensure that standards are firmly rooted in the real-life experiences and needs of their end users. The participant also highlighted the AI Standards Hub as an important resource for Consumers International to increase its engagement in AI standardisation.

    IGF 2023 Open Forum #30 Intelligent Society Governance Based on Experimentalism

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    1) The advent of AI is poised to lead to the evolution of social structures, economic production, and residential lifestyles, thereby giving rise to an intelligent society that empowers climate governance, healthcare advancements, and educational management. It even aids in the realization of the SDGs, thus infusing new impetus and resilience into societal development.

    2) The evolution of AI governance is transitioning from conceptualization to the phase of practical rulemaking and enforcement. Countries and regions need to strengthen their assessment of potential risks in AI development, establish robust legal frameworks to ensure the healthy advancement of AI, and devise ethical norms for intelligent society governance. The global community should join hands to promote cooperation on intelligent society governance.

    Calls to Action

    1) The panelists agreed that, faced with the profound transformations brought about by AI technology, nations worldwide are exploring new values, concepts, and approaches for intelligent society governance. Constructing an intelligent society imbued with humanistic warmth has become the historical mission of contemporary humanity.

    2) The panelists advocate: the promotion of AI-empowered public services; the enhancement of assessment and prevention of potential risks in intelligent societies; the dissemination and application of governance principles and practices for intelligent societies; the active exploration of international standardization in the governance of intelligent societies; and the promotion of inclusive sharing and agile governance in intelligent societies.

    Session Report

    In today’s era, digital technologies represented by AI serve as the pioneering force in the global scientific revolution and industrial transformation, increasingly integrating into all aspects of economic and social development, thereby profoundly altering how society is governed. The iterative development of generative AI in 2023, with ChatGPT serving as a quintessential example, has once again fueled human apprehensions concerning the potential risks posed by an intelligent society.

    This open forum was hosted by the Bureau of Information Technology Development of the Cyberspace Administration of China, with support from the Institute of Intelligent Society Governance of Tsinghua University and the Center for Science, Technology & Education Policy of Tsinghua University. Under the theme “Intelligent Society Governance Based on Experimentalism: Insights from Cross-Country Experiences”, the open forum invited six experts and scholars from government bodies, research institutions, and social organizations in China, the United Kingdom, and Brazil to engage in discussions on the practical impacts of AI applications in different countries and case studies of intelligent society governance, thereby fostering transnational knowledge exchange. The open forum is committed to identifying effective governance models, cooperation frameworks, and policy tools that will cultivate a sustainable and humanistic intelligent society.

    1. To further enhance the capacity building for intelligent society governance on a global scale, and to foster a humanistic intelligent society.

    The world is undergoing profound changes that are unprecedented in the past century, with issues such as wealth disparity, environmental and climate change, as well as regional conflicts, standing as common challenges faced by human society. Enhancing the capacity building for intelligent society governance, promoting the widespread application of intelligent technologies, and fostering a humanistic intelligent society are crucial measures to meet these challenges.

    Building a humanistic intelligent society requires vigorous efforts to empower public services with AI. Jiang Wang, Deputy Director of the Information Bureau of Cyberspace Administration of China, asserted that we should strengthen the integration of AI with public services such as elderly care, education, medical care, social security, and sports. Considering the need to safeguard and improve people’s livelihoods and create a better life for the public, AI should be utilized to enhance the public services and social governance levels of government departments. Simon Marvin, Professor at the Urban Institute of the University of Sheffield and Professor at the Sydney School of Architecture, Design and Planning, posited that, through examples such as Japan’s Society 5.0, Smart Dubai, and San Francisco, countries could actively explore how to construct regulatory systems while AI was rapidly developing, ensuring that AI better serves public domains such as healthcare and education.

    The panelists all agreed that, in the face of the tremendous changes brought about by AI, countries around the world are exploring new values, new concepts, and new directions for intelligent society governance. Constructing a humanistic intelligent society has become a historical mission for contemporary humanity, necessitating a collective effort to guide AI in a direction conducive to human society's development.

    2. To pay close attention to the social impact prompted by AI, strengthen the assessment of potential risks in AI development, improve laws and regulations to safeguard the healthy development of AI, and formulate ethical norms for intelligent society governance.

    The industrial revolution incited by AI will exert a significant influence on human production and lifestyle. Alessandro Golombiewski Teixeira, Special Advisor to the President of the BRICS New Development Bank, Distinguished Professor of the School of Public Policy and Management of Tsinghua University, and former Minister of Tourism of Brazil, emphasized that AI would lead to the evolution of social structures and alter the way humans interact socially. The resultant intelligent society will address a series of major challenges such as climate change and may facilitate the achievement of the Sustainable Development Goals (SDGs). Cui Huang, Professor at the School of Public Administration and Director of the Department of Information Resources Management of Zhejiang University, indicated that China’s practice of promoting the modernization of education and building a strong educational nation in the intelligent era, through the integration of digital technology and education, demonstrated that intelligent technology held immense potential in educational governance and could infuse new vitality and resilience into social development.

    The transformation into an intelligent society has brought about issues and challenges about legal privacy, moral ethics, and public governance in human society. Jun Su, Dean of the Institute of Intelligent Society Governance of Tsinghua University and Director of the Think Tank Center of Tsinghua University, believed that in the face of risks and challenges posed by new technologies represented by ChatGPT, we should adopt a prudent, confident, and proactive attitude, employ a scientific evidence-based approach to comprehensive assessments, and facilitate its benign development.

    Countries worldwide are actively conducting practices to utilize AI in addressing social problems and accumulating experience in the process, thereby forming a relatively complete regulatory system and ethical norms. As Zhiyuan Xu, Deputy Chief Engineer of China Academy of Information and Communications Technology, stated, the development of AI governance was transitioning from conceptualization to the actual rulemaking and implementation stage. Globally, countries are actively releasing AI rules and policies under governance objectives such as reliability, controllability, human-centeredness, fairness, and justice. The panelists call for refining global advanced experiences and practices, further promoting the concepts and practices of intelligent society governance, and actively exploring the international standardization construction of intelligent society governance.

    3. The international community should work together to promote exchange and cooperation in intelligent society governance, uphold the principle of technology for social good, and build a community with a shared future for mankind in the intelligent era.

    The panelists agreed that it was necessary to explore the path of intelligent society governance under the concept of building a community of human destiny, strengthening international exchanges and cooperation in intelligent society governance, and promoting inclusive sharing, agile governance, and the realization of differential development and win-win cooperation among countries. Jiang Wang, Deputy Director of the Information Bureau of Cyberspace Administration of China, pointed out that China is willing to exchange and share work experiences in intelligent society governance experiments with other countries, actively contribute to Chinese solutions, and learn from each other’s strengths and weaknesses. Jun Su, Dean of the Institute of Intelligent Society Governance of Tsinghua University and Director of the Think Tank Center of Tsinghua University, called on countries to strengthen academic cooperation and exchange in the field of intelligent society governance, widely hold open, diverse, and inclusive academic conferences, and publish related academic journals.

    The panelists all advocated that the international community should strengthen dialogue and exchange, calling on researchers and practitioners from different countries and academic fields around the world to join in the research and discussion of global intelligent society governance and make academic contributions to building a humanistic intelligent society. They hoped that countries could deepen pragmatic cooperation, jointly face the opportunities and challenges brought by intelligent technology, and work together towards a new stage of human civilization. They looked forward to everyone’s efforts to make the public in countries around the world pay more attention to the application and future of AI, and jointly construct a new chapter of a community with a shared future for mankind in the intelligent era.

    IGF 2023 DC-DNSI Closing the Governance Gaps: New Paradigms for a Safer DNS

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    There should be more coordination across the Internet ecosystem dealing with online harms, particularly to deliver proportionate responses that look beyond action at the DNS level. Asia and Latin America offer good examples of (a) better coordination across the ecosystem and existing initiatives (e.g. the operation of .kids by .Asia) and (b) capacity building between the DNS, content and tech communities, and policy makers, LEAs and the judiciary (LACTLD).

    Calls to Action

    The numbers and names communities, as well as companies dealing with content, need to actively build capacities with policy makers, LEAs and the judiciary to help them understand adequate and proportionate options for dealing with online abuse. The Internet ecosystem needs better coordination mechanisms in place that break away from industry silos and build ecosystem-wide consensus and collaboration for addressing harmful content.

    Session Report

    The purpose of the session was to discuss governance gaps in achieving a safer DNS. There is a separation between structural layers of the Internet and content issues, and the ecosystem understands those lines. But when we talk about harmful content, sometimes those lines become blurred and governance gaps become evident.

    The conversation sought to discuss what to do about those gaps and how to be action-oriented.

    Keith Drazek from Verisign began by setting the scene. In his view, the ecosystem seeking to address DNS issues, security, and online harms has to recognise that each actor has different roles, responsibilities, and capabilities, whether that is a registrar, a registry, a CDN or an ISP. There are different governance models, for example the ICANN community and the gTLDs have governance by contract. ccTLDs, on the other hand, develop local governance models based on their relationships with local governments or local Internet communities. Hosting companies and providers are subject to the laws of their respective jurisdictions, and operate in response to that regulatory guidance. In the overlap of these various models, there are governance gaps still remaining. He believes there is an opportunity for better communication, collaboration, and good work across the various parts of the DNS ecosystem, up and down the stack. There is also a need for the technical operators, registrars, and registries to collaborate better together as a sector to mitigate online harms in a proactive way. This will help reduce costs and demonstrate to regulators that the industry is taking the initiative. Hopefully, this will also help avoid being regulated in a fragmented way when it comes to different jurisdictions.

    He also highlighted there are conversations to be had as well about the advent of blockchain, alternate identifiers and technologies, as there are governance gaps in that area as well that will require collective addressing by industry.

    Jia Rong Low from ICANN provided background on ICANN’s role in supporting abuse mitigation. He explained how ICANN is governed by a multistakeholder model, and how the community was adamant that the issue of DNS abuse needed addressing within the ICANN structure. At the time of the session, there was an open voting period to approve updates to the agreements between ICANN and contracted parties to incorporate specific actions against DNS abuse as a contractual requirement. In his view, “sometimes with models such as ICANN’s, it can feel like things are not moving, but the community has come a long way.”

    Esteve Sanz from the European Commission (EC) came in next with a perspective from a regulatory body. He started off by highlighting that bodies such as the EC are not just regulators; they are members of the multistakeholder community, and in the EC’s case, they are very active in ICANN. The EC does not have any new regulation in mind and is currently looking forward to supporting ICANN in what it sees as “a moment of truth in dealing with abuse.” Esteve shared the EC’s view that the amendments to the agreements of the contracted parties have not gone far enough, missing elements such as transparency or proactive measures.

    Fiona Alexander from American University welcomed the reactivation of the DC as a safe place for conversation. She highlighted how DNS abuse, and what constitutes harm, can mean different things to different stakeholder groups, especially governments. She went on to highlight jurisdictional differences in approaches: in some jurisdictions there is a preference for a proactive approach (e.g. the EU), while in others there is a preference for demonstrated harm over preventive action (e.g. the US). In addressing harm, governments also have to balance the important issues of free expression and human rights. Addressing online harm is a cross-jurisdictional challenge that can be difficult to resolve. In her view, it is important to (a) have a shared understanding of terms and of some of the challenges, (b) look at the proportionality of the response and when to take a small versus a larger measure, and (c) consider who is best suited to take action. Voluntary commitments such as those reflected in the updated agreements of the contracted parties are good; they could also be more targeted and more rapid. When looking at voluntary action, it is really important to make sure there is transparency and due process in those systems.

    Jen Chung from .Asia reflected on the contractual amendments with ICANN. She explained how the .Asia organisation is looking forward to using the trusted notifier system in collaboration with APNIC, APCERT and TWNIC to periodically identify risks and share lists. She pointed out how this type of collaboration with the ecosystem is important to tackle threats such as phishing, and highlighted how these are actions that go beyond their contractual obligations. She illustrated this point by connecting it to the definition of DNS abuse: “DNS abuse can mean different things to different people; for the contracted parties it refers to malware, botnets, phishing and spam. But this is not intended to limit our sphere of work; we go above and beyond our contractual obligations.” Lastly, she concluded with a call to action to include the CSIRTs in work related to trusted notifiers, and mentioned that .Asia is discussing with APNIC and APCERT the possible set-up of a South Asia CERT. She highlighted how in regions that lack a harmonised approach like the European Union’s, the onus is on operators like .Asia and other Internet organisations to step up to fill this gap.

    Rocio de la Fuente from LACTLD brought in a perspective from the ccTLD community. She explained how ccTLDs are not bound by the consensus policies formulated in ICANN, as their policies are based on local regulations established with their communities. She shared LACTLD’s experience organising workshops on dealing with illegal content and DNS abuse, targeted at judges, prosecutors and Law Enforcement Agencies (LEAs) and co-organised with LACNIC, ICANN and the region’s technical community organisations. The workshops have been successful in building cooperation networks with the judiciary and LEAs. “We see a positive impact when policy makers and LEAs can have direct conversations with their local ccTLD,” she explained. The private sector has also sometimes participated in the workshops to address illegal content on platforms and services beyond DNS threats or DNS abuse.

    Jean Jacques Sahel from Google came in next to bring a perspective from the private sector dealing more broadly with content-related issues. Jean Jacques began by pointing out that, from a Google perspective, the company is not trying to avoid regulation; the internet has achieved a certain level of maturity and regulation is to be expected. It is rather a question of how, and of understanding that there is already much self-regulation. He went on to share some lessons on how to tackle bad content and take action on inappropriate behaviour. Google analyses content flagged to it by users or governments and follows its content policies; on platforms like YouTube, it demonetizes bad content. Google also seeks to build collaboration with relevant organisations. Focusing on APAC, there is a trend towards increasing regulation – some jurisdictions adopt omnibus regulation that concerns all intermediaries, while others regulate only “social media”. Previously, regulators tended to copy regulation from other jurisdictions; now they add their own veneer. APAC is a very large market, so this is bound to impact millions of users.

    Jean Jacques highlighted that the one thing he sees as lacking is policy makers seeking input from the multistakeholder community – the tech community, industry and civil society. “We can remind them, and some of us are raising concerns of collateral damage, massive collateral damage to the ecosystem, but it gets scant attention.” He concluded that regulation is coming, and that regulators will go after whoever is in a position to take action. From a DNS industry perspective, the Internet’s core has been spared, but not for long. He called for regulators to leave room for freedom of expression and not to over-regulate.

    Esteve Sanz was invited to address Jean Jacques’ points. Sanz said that the EU DSA offers an approach that strikes a good balance between users’ fundamental rights and tackling abuse. In terms of coordination, he highlighted that the EC coordinated with the US on the Declaration for the Future of the Internet, which he described as a straitjacket keeping states from regulating the internet in ways that are harmful. Lastly, he warned of digital authoritarianism, noting that authoritarian governments use the internet to control their populations. “We cannot think the internet is just a tool to promote freedom.”

    Keith Drazek elaborated on the point of industry collaboration. He finds that industry does not collaborate sufficiently, especially up and down the stack or across the range of operators. There is an opportunity for registries, registrars, hosting companies, CDNs and ISPs to engage more constructively and proactively together, and to collaborate in identifying trends of bad actors and in devising mitigation strategies.

    He agreed that regulation is upon us, but urged for it to be informed, educated regulation that takes into account concerns raised by civil society. When we speak of content, it gets very complicated from a rights perspective. Registries and registrars have one option to address abuse: taking the entire domain out of the zone; content at the third level sits with the hosting company. If the problem is a piece of offending or harmful content on a third-level name or a website, the hosting company has to be involved in the conversation about how to mitigate those harms in order to ensure proportionality.

    He offered five points for consideration when dealing with online harms through the DNS, and for any trusted notifier scheme (an illustrative sketch of how these could be applied follows the list). These include consideration of:

    1. Provenance of a threat, to have the closest stakeholder take action
    2. Proportionality, to ensure actions do not impact users or other parts of the ecosystem disproportionately
    3. Transparency about how the DNS is used to mitigate online harms: what process was followed and what actions were taken
    4. Due process
    5. Recourse, offering the impacted party a remedy if the action turns out to be wrong
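
    To make these considerations concrete, the following is a minimal, hypothetical sketch of how a registry might record an abuse report from a trusted notifier so that provenance, proportionality, transparency, due process and recourse are each captured. It was not presented at the session; all field names, notifier categories and decision rules are illustrative assumptions.

    # Hypothetical triage of an abuse report from a trusted notifier.
    # All field names and rules are illustrative assumptions, not an actual
    # registry policy or any system discussed at the session.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AbuseReport:
        domain: str
        harm_type: str            # e.g. "phishing", "malware", "botnet", "other"
        notifier: str             # provenance: who reported and how trusted they are
        evidence_url: str
        affected_paths: list      # proportionality: whole domain vs. specific URLs

    @dataclass
    class Decision:
        action: str               # e.g. "refer_to_host", "suspend_domain", "request_verification"
        rationale: str            # transparency: recorded and publishable
        decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
        appeal_contact: str = "appeals@registry.example"   # recourse for the registrant

    VETTED_NOTIFIERS = {"national-CERT", "CSAM-hotline", "LEA"}      # assumption
    DNS_LEVEL_HARMS = {"phishing", "malware", "botnet"}              # assumption

    def triage(report: AbuseReport) -> Decision:
        # Provenance: only act directly on reports from vetted notifiers.
        if report.notifier not in VETTED_NOTIFIERS:
            return Decision("request_verification", "Unvetted source; verify before acting.")
        # Proportionality: content confined to specific pages is referred to the host,
        # since DNS-level action would take down the whole domain.
        if report.affected_paths and report.harm_type not in DNS_LEVEL_HARMS:
            return Decision("refer_to_host", "Content-level harm; DNS action would be disproportionate.")
        # Due process and recourse: a suspension is recorded with its rationale,
        # and the registrant is given an appeal contact.
        return Decision("suspend_domain", "Domain-wide technical abuse per supplied evidence.")

    The point of the sketch is simply that each of the five considerations maps onto a recorded field or an explicit branch, which is what makes transparency and recourse auditable after the fact.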

    Fiona Alexander weighed in and said that what is unique about how the Internet operates is the multistakeholder model. It is therefore important that industry and governments do not broker deals on their own, but that conversations are held within reach of the multistakeholder community. “How you do something is just as important as what you do.”

    Connecting to regulatory efforts by the EU, Jia Rong Low highlighted the impact of the GDPR on the Whois database. He explained how he interacts with LEAs and relayed complaints from Interpol that Whois has become difficult to work with, highlighting how regulations create winners and losers.

    Jen Chung offered an example of how this articulation can play out. .Asia is the registry operator for .kids, one of the first gTLDs with a mechanism for restricting content. She explained how downstream there are hosting providers and DNS resolvers, and at every point abuse could be happening, all the way to content. For .kids, they rely on Google AI to review content and have a policy informed by child rights and online rights experts. They are also highly transparent, keep a paper trail of how reports are dealt with, and offer recourse.

    Rocio de la Fuente came back with additional perspectives from the LAC region, where there are no overarching regional regulations to harmonise approaches. She explained that while abuse in ccTLD communities is low, the ccTLDs have introduced actions to help mitigate it. For example, .co has a national hotline for reporting CSAM and mechanisms in place to review reports.

    Comments from the audience included Mark Dattysgeld, who, in response to the EC’s comments, explained that the community came up with a technical definition of DNS abuse that could serve as an agreed baseline to build on. Kanesh from Nepal urged capacity building. Andrew Campling asked whether new standards introduce new governance gaps, hinting at DoH.

    In terms of what is required, the panel recommended:

    More capacity building, as done in the LAC region, with governments, LEAs, operators, judges, policy-makers and other relevant stakeholders. Esteve warned that the conversation has been dominated by the global north, and that this will play badly in the WSIS+ discussions.

    Bringing people together and having conversations. Continue the discussion about more coordinated action from the ecosystem, ensuring we get feedback from the multistakeholder community. Making it a sustained conversation.

    Clarity on what tools we have and what to scale. From a cooperation perspective, Jen Chung highlighted the need to join the dots on the things we are doing and to scale what works.

    More collaboration on DNS security, including involvement of CSIRTs.

    Measuring DNS abuse

    Attention to new standards being developed

    Takeaways

    There should be more coordination across the Internet ecosystem dealing with online harms, particularly to deliver proportionate responses that look beyond action at the DNS level.

    Asia and Latin America offer good examples of (a) better coordination across the ecosystem and efforts to build collaborations among existing initiatives (e.g. the operation of .kids by .Asia) and (b) capacity building and networking between the DNS, content and technical communities on the one hand, and policy makers, law enforcement and the judiciary on the other.

    Calls to action

    The numbers and names communities as well as companies dealing with content need to actively build capacities with policy makers, LEAs and the judiciary to help them understand adequate and proportionate options for dealing with online abuse.

    The Internet ecosystem needs better coordination mechanisms in place that break away from industry silos in dealing with online harms, and build ecosystem-wide consensus and collaboration for addressing harmful content on the Internet.

     

    IGF 2023 DC-OER The Transformative Role of OER in Digital Inclusion

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    To advance digital inclusion through OER, there is a need to go beyond awareness raising and digital skills for accessing, re-using, creating and sharing OER, and to focus on how to make OER more inclusive of the diverse needs of learners. It is important not to pursue a ‘one size fits all’ strategy, but to have localized control of content to build a knowledge commons. Stakeholders need incentives to contribute to and use this knowledge commons.

    The principle of the OER Recommendation that educational resources developed with public funds should be made available as OER is important. Investments should go into the quality of teaching and learning experiences, so that OER provides quality, accessible learning for all learners.

    Calls to Action

    Governments and Institutions need to support inclusive accessible OER initiatives that support the knowledge commons.

    Initiatives need to be led by the target communities, and the voices of those who will benefit from these initiatives have to be in the conversation. Best practices from other ‘Open Solutions’ – Open Access, Open Data – can be useful for ensuring the interoperability of repositories and increased sharing of knowledge through OER.

    Session Report

    IGF 2023 DC-OER: The Transformative Role of OER in Digital Inclusion 

    Report 

    The IGF 2023 Dynamic Coalition on Open Educational Resources (DC-OER) convened a session under the theme "Digital Divides & Inclusion." In an increasingly interconnected world, access to quality education is paramount, yet digital divides and inequalities persist. The coalition addressed this pressing issue in a round-table discussion exploring the transformative role of Open Educational Resources (OER) in promoting digital inclusion. The session featured international experts and diverse stakeholders. 

    The UNESCO and IGF OER Dynamic Coalition showcased its dedication to promoting open educational content while respecting privacy, legal standards, and inclusivity. OER's potential to provide inclusive access to digital knowledge was a key highlight of the session. 

    The UNESCO OER Recommendation was the main focus of the session as the starting point for a common commitment and political will of Member States towards knowledge sharing through OER. This is the first international normative instrument to embrace the field of openly licensed educational materials and technologies in the UN System. 

    The Recommendation provides a clear definition of Open Educational Resources, namely that OER are learning, teaching and research materials in any format and medium that reside in the public domain or are under copyright but have been released under an open license, which allows no-cost access, re-use, re-purpose, adaptation and redistribution by others. 

    The Recommendation also underscores that an open license is one that respects the intellectual property rights of the copyright owner while granting the public the rights to access, re-use, re-purpose, adapt and redistribute educational materials.   
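
    As an illustration only, the short sketch below encodes the two-part definition quoted above: a resource counts as OER if it is in the public domain, or if it is under copyright but released under an open licence granting no-cost access, re-use, re-purposing, adaptation and redistribution. The data structure, licence list and field names are assumptions made for the example and are not part of the Recommendation.

    # Illustrative encoding of the OER definition quoted above.
    # The licence list and record fields are assumptions for the example only.
    OPEN_LICENCE_PERMISSIONS = {"access", "re-use", "re-purpose", "adapt", "redistribute"}
    SAMPLE_OPEN_LICENCES = {"CC BY 4.0", "CC BY-SA 4.0", "CC0 1.0"}   # illustrative sample

    def is_oer(resource: dict) -> bool:
        """Return True if the resource matches the definition as sketched here."""
        if resource.get("public_domain"):
            return True
        licence_is_open = resource.get("licence") in SAMPLE_OPEN_LICENCES
        granted = set(resource.get("granted_permissions", []))
        return licence_is_open and OPEN_LICENCE_PERMISSIONS <= granted

    # Example: an openly licensed textbook chapter qualifies as OER under this sketch.
    print(is_oer({
        "title": "Introduction to Statistics, Chapter 3",
        "public_domain": False,
        "licence": "CC BY 4.0",
        "granted_permissions": ["access", "re-use", "re-purpose", "adapt", "redistribute"],
    }))   # -> True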

    The five action areas of the 2019 Recommendation on OER were central to the discussion: capacity building, supportive policies, quality and accessibility, sustainability, and international collaboration.  

    The IGF 2023 session highlighted that OER is not merely a tool; it's a multifaceted solution that demands capacity building, supportive policies, quality, inclusion and accessibility, sustainability, and international collaboration to effectively bridge digital divides. These five pillars represent a collective commitment to unleashing the full potential of OER, empowering the digital era, fostering inclusion, and ensuring that equitable access to quality education is within reach for all. 

    Capacity building was emphasized as the foundation for effectively bridging digital divides by enabling educators and learners to create, access, and adapt OER. 

    Dr. Stephen Wyber of the International Federation of Library Associations and Institutions (IFLA) stressed the pivotal role of supportive OER policies, ensuring that educational resources are accessible to all, regardless of their digital environment. 

    Quality in OER was underlined as essential for meaningful learning experiences. OER should not contribute to inadequate learning, especially for marginalized individuals. Sustainability models for OER initiatives, including financial strategies, open procurement, inclusive policies, and ongoing community engagement, were highlighted as crucial for OER successes by Dr. Tel Amiel, UNESCO Chair in Distance Education at University of Brasilia (UnB). 

    For Dr. Patrick Paul Walsh of the SDG Academy, UN Sustainable Development Solutions Network (UNSDSN), international cooperation emerged as a critical pillar of effective OER solutions, reflecting the interconnected nature of digital education. 

    The session's insights and recommendations underscored the critical role that OER play in advancing digital inclusion, knowledge accessibility, and quality education for all. As the world continues its digital transformation, the power of OER is set to drive global change, ensuring that no one is left behind in the digital age. In line with the United Nations' Sustainable Development Goals, the session echoed the importance of implementing Member States-adopted standards for openly licensed digital education tools through the UNESCO Recommendation on Open Educational Resources (OER) 2019. It stressed the pivotal role of OER as a digital public good, ensuring an open, free, and secure digital future for all, aligning with the Global Digital Compact. 

    Key Takeaways: 

    • The importance of tailoring OER to diverse learners, avoiding a one-size-fits-all approach. 

    • The call for governments and institutions to support inclusive OER initiatives and promote the knowledge commons (Mr. Neil Butcher, OER Strategist, OER Africa). 

    • The necessity of community-led initiatives with input from those they aim to benefit. 

    • The session emphasized the role of governments, institutions, and communities in supporting inclusive and accessible OER initiatives, ensuring quality education for all. 

    • Capacity building is of paramount importance, spanning from raising awareness and optimizing resource utilization to promoting inclusivity. In alignment with the Recommendation on OER, addressing resource scarcity through the allocation of public funds for openly licensed educational materials, and incentivizing educators to embrace OER, is a crucial takeaway from the session, recalled Dr. Melinda Bandalaria, Chancellor and Professor, University of the Philippines Open University. 

    Calls to Action: 

    To realize these objectives, the session urged governments, institutions, and communities to actively support inclusive and accessible OER initiatives. This support should extend to the diverse needs of learners, the promotion of the knowledge commons, and the assurance of quality education for all.  

    The IGF 2023 DC-OER session serves as a reminder that OER is a catalyst for bridging digital divides and fostering digital inclusion. It's a call for collective action to make digital education truly inclusive and accessible to all. 

    In conclusion, the IGF 2023 DC-OER session highlighted the transformative potential of OER in bridging digital divides and fostering digital inclusion. The insights and recommendations from the session provide a roadmap for achieving these vital goals in an ever-evolving digital landscape. 

    IGF 2023 WS #224 Opportunities of Cross-Border Data Flow-DFFT for Development

    Updated:
    Data Governance & Trust
    Key Takeaways:

    Two key takeaways: 1) To make the power of data work for development, we need to develop trusted and secure ways to share data across borders. 2) To create best practices or norms on cross-border data transactions, it is critical that developing countries participate in the discussions and help create the norms.

    Calls to Action

    Two key calls to action: 1) Invite developing countries into the discussion of norm creation and the sharing of best practices to enable DFFT. 2) Use a multistakeholder approach to create regulatory and non-regulatory approaches that allow for data flows while protecting privacy.

    Session Report

    IGF 2023 WS #224 Opportunities of Cross-Border Data Flow-DFFT for Development

    Monday, 9 October

    In-person moderator: Atsushi Yamanaka, JICA
    Online moderator: Chrissy Martin Meier, Digital Impact Alliance

    Speakers:
    Jean-Jacques Sahel, Google
    Jean Philbert Nsengimana, Africa Center for Disease Control and Digital Impact Alliance
    Mayumi Miyata, JICA
    Kathleen McGowan, Digital Impact Alliance

    Gordon Kalema (Mr.), Director General, Ministry of ICT and Innovations, Rwanda

    Background

    • Since 2019, the Government of Japan has been at the forefront of promoting Data Free Flow with Trust (DFFT).
    • As the global community discusses “the internet we want,” there is a clear need to also discuss the “data ecosystem we want.”
    • Both the opportunities and the challenges related to securely and fairly unlocking data are quite similar across countries, and no country or economic bloc has figured it out.
    • Data is a powerful resource that can accelerate progress towards the SDGs, but only if shared, transferred and transacted securely. We need to be mindful of the challenges, including sovereignty, cybersecurity, and personal data protection.
    • Economies rely on the free flow of information – to communicate, inform ourselves and transact across borders. Even during massive challenges like the pandemic, these flows enabled us to keep our economic and social lives going.
    • Integrating DFFT principles will be critical to ensure that digital public infrastructure (DPI) unlocks data that can be used to solve real challenges both within borders but also transnational challenges such as disease and climate change.

    Opportunities

    • DFFT has tremendous potential to accelerate development for all types of countries, particularly those trying to close gaps such as financial inclusion and access to critical services that foster not just resilience but the opportunity to thrive. This potential grows even larger when looking at cross-border data flows.
    • Socio-economic activities in the digital society can be accelerated with active participation in the data distribution market.
    • DFFT is extremely important in Africa for three reasons:
      • Africa is uniting in a continental free trade zone (AfCFTA), and digital infrastructure that is border agnostic is the only way to unite these countries to achieve one digital market.
      • There are many state and non-state actors who are ready to exploit Africa’s data: the only way to protect this data is by engaging multiple stakeholders who can help to safeguard data and to balance protection and openness. Particularly true with health data.
      • Africa’s Governments must be equally involved in leading the conversation on data governance in fora such as IGF, especially since multilateral, multistakeholder fora like IGF involve not just government voices but also those of civil society and the private sector.
    • DFFT is enabled by interoperable standards and certifications systems. The beauty of these systems is that they can be applied anywhere in the world, supporting small businesses and reinforcing trust regardless of country.    
    • We shouldn’t have separate systems/regimes for different types of countries but rather interoperable systems based on similar elements.

    Action areas

    Unlocking existing data by increasing trust

    • Huge potential to unlock data by freeing stranded assets – data locked behind a paywall or on a Government server.
    • The biggest challenge to unlocking data is trust – not scarcity. Data is not finite – it is infinitely reusable and thus there shouldn’t be an incentive to hoard.
    • Trust deficits can be addressed through new models that give data holders confidence to share.
    • We need to move away from the current paradigm of data winners and losers by removing artificial barriers and establishing trust frameworks.
    • There are products that have privacy by design, which set a model for balancing openness and privacy (an illustrative sketch of one such pattern follows this list). Companies need Governments, civil society, and industry to help overcome the trust deficit and enable data flows and access to information across borders.
    • We need interoperable privacy frameworks, risk-based cybersecurity policies, and privacy-based standards. This is already happening in some countries. Even though the world looks divided into three blocs, it is feasible to come to a certain common ground. Cybersecurity is a good example.
    • There has been some progress in developing the rules and tools to promote trust, but it’s still limited to contracts or bi-lateral agreements. We need new models to make trusted data sharing the norm, rather than the exception. This includes mechanisms that include not just multiple Governments, but multiple stakeholders.
    • Global cross-border privacy rules are one example of the type of new collaborations we need to promote trust and reduce compliance costs.
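
    As an illustration only – not a product or framework referenced by the speakers – the sketch below shows one common privacy-by-design pattern for trusted sharing: pseudonymising direct identifiers with a keyed hash and dropping fields that are not needed before a dataset leaves the country. The field names, the sample record and the handling of the secret key are all assumptions made for the example.

    # Illustrative privacy-by-design step before cross-border sharing:
    # keyed pseudonymisation of direct identifiers plus data minimisation.
    # This sketches a generic pattern, not any specific product or framework
    # mentioned in the session.
    import hmac
    import hashlib

    SECRET_KEY = b"replace-with-a-managed-secret"                 # assumption: held by the data holder
    FIELDS_TO_SHARE = {"region", "age_band", "diagnosis_code"}    # data minimisation: agreed fields only

    def pseudonymise(identifier: str) -> str:
        """Deterministic keyed hash: records stay linkable without exposing the raw ID."""
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    def prepare_for_transfer(record: dict) -> dict:
        shared = {k: v for k, v in record.items() if k in FIELDS_TO_SHARE}
        shared["subject_ref"] = pseudonymise(record["national_id"])
        return shared

    # Example: a health record is reduced to the agreed fields before being shared.
    print(prepare_for_transfer({
        "national_id": "1199-700-123",
        "name": "A. Example",
        "region": "Kigali",
        "age_band": "30-39",
        "diagnosis_code": "J10",
    }))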

    The role of national Governments

    • The data space is still uncertain for most people – everyone is learning. There is a role for everyone in tackling this uncertainty, including the Government.
    • When developing policies, Governments need to remember that we are working with people – people first before policies.
    • There also needs to be a mindset shift away from the false choice between data localization and openness, towards deciding which data to share and how. It’s not just a regulatory approach, but an overall multi-faceted strategy.
      • For example, in Rwanda, the Data Protection law was just a start. It needed to be followed by implementation measures, including a Data Protection Office, with staff who understand that their job is to allow for data flow while protecting privacy (rather than trying to ‘protect data’).

    Bringing in other stakeholders

    • Development partners can help their client Governments to better understand the risks and benefits, as well as furnish the capacity to manage them, so they don’t have to blindly block everything or shift to total data localization.
    • We need more discussions and multi-stakeholder dialogue. We need to be asking the private sector, more often, how it can help.
    • Deliberate focus on promoting more innovation within each country, engaging all types of actors to develop transformative tools to promote consent, etc.

    IGF 2023 WS #443 Taxing Tech Titans: Policy Options for the Global South

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    - Digital taxation is a nascent yet dynamic space, whose impacts on social/digital justice and competition should be evaluated, alongside its revenue potential.

    - The fate of the OECD Amount A proposal is uncertain, as power lies in the hands of a few countries (if not a single one). While the Global South may benefit, they will be price takers — therefore, domestic measures such as digital service taxes shouldn’t be ruled out.

    Calls to Action

    - Countries should consider revenue potential, impact on competition, and implementation costs and capacity when choosing between signing the OECD Amount A multilateral convention and imposing domestic taxation measures by end 2023.

    - Digital tax must become a more prominent part of platform governance discourse. Multi-sectoral (finance, digital, competition, IR) dialogues are key to understanding various nuances.

    Session Report

    This session explored whether and how countries in the Global South should tax large technology multinationals. A pre-session poll indicated that 81% of the audience believed that countries should be exploring policy options to impose taxes on large technology multinationals; 6% believed countries should not, while 13% were undecided.

    Given the variety of policy options available, the workshop (organized in the style of a debate) looked to answer one specific, pertinent question – should countries in the Global South sign onto the Amount A Multilateral Convention put forward by the Organisation for Economic Co-operation and Development (OECD) and the G20 by the end of 2023?

    Helani Galpaya (CEO, LIRNEasia) moderated the debate. Five speakers, representing a variety of stakeholder groups, brought a multitude of viewpoints to the table. Gayani Hurulle (Senior Research Manager, LIRNEasia) framed the debate around the need to find mechanisms to tax large digital platforms given their growing centrality in the economy. Abdul Muheet Chowdary (Senior Programme Officer, South Centre) highlighted the revenue implications for countries. Victoria Hyde (Policy and Communications Manager, Asia Internet Coalition) drew on the experiences of the technology MNEs, highlighting the benefits of a uniform, multilateral system. Mathew Olusanya Gbonjubola (Coordinating Director, Federal Inland Revenue Service of Nigeria and Co-Chair, UN Tax Committee) offered the viewpoint of a government that had made some bold decisions on the topic in the past. Meanwhile, Alison Gillwald (Executive Director, Research ICT Africa) spoke on the equity and fairness elements of the proposals and on the experiences of low-capacity governments.

    Two elements became crucial to addressing the debate topic – first, whether countries in the Global South should sign onto the Amount A multilateral convention at any point in time; second, whether they should sign it by the end of 2023 and thereby forgo the option to implement domestic measures (if not already in place) for another year.

    The speakers were of mixed opinions. Some argued that the certainty of the OECD proposal, and the lower transaction costs for platforms, made signing the Amount A Convention a compelling option. This would allow low-capacity countries to obtain some revenue as opposed to none. However, other speakers argued that the Amount A multilateral convention would only come into effect if at least 30 jurisdictions accounting for at least 60% of the Ultimate Parent Entities (UPEs) of in-scope MNEs signed the convention. Given that many countries in the Global South are not home to UPEs of such MNEs, their signing onto the agreement would have little bearing on whether it moved forward – much rests in the hands of a few countries, such as the United States. The audience too was of mixed opinions, the post-session poll indicated: 39% were of the view that countries in the Global South should sign up for the Convention by the end of the year, while 50% said they shouldn’t; 11% were undecided.
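
    To make the entry-into-force condition concrete, here is a minimal arithmetic sketch of the dual threshold described above – at least 30 signing jurisdictions that together account for at least 60% of in-scope UPEs. The sample figures are invented for illustration and do not reflect any actual country data.

    # Sketch of the dual entry-into-force test described above: at least 30
    # jurisdictions must sign, and together they must account for at least 60%
    # of the Ultimate Parent Entities (UPEs) of in-scope MNEs.
    # The sample figures below are invented for illustration only.

    def convention_enters_into_force(upes_by_signatory: dict, total_upes: int) -> bool:
        enough_jurisdictions = len(upes_by_signatory) >= 30
        covered_share = sum(upes_by_signatory.values()) / total_upes
        return enough_jurisdictions and covered_share >= 0.60

    # Hypothetical example: 35 signatories, each hosting one in-scope UPE, out of
    # 120 in-scope UPEs worldwide. The jurisdiction count passes, but the UPE
    # coverage does not, so the convention still does not take effect – which is
    # why much rests on a handful of home jurisdictions of large MNEs.
    signatories = {f"country_{i}": 1 for i in range(35)}
    print(convention_enters_into_force(signatories, total_upes=120))   # -> False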

    The question-and-answer session highlighted that there was room for further capacity building on topics around digital taxation. Notably, an audience member from the Ministry of Finance of a Global South country indicated that the session was useful for his capacity building and requested technical assistance to aid his country’s policymaking on digital taxation. Overall, the debate highlighted that there are no clear-cut answers on digital taxation, particularly as it will remain an important and dynamic topic, further increasing the need for engagement and debate.

    IGF 2023 Open Forum #44 Future-proofing global tech governance: a bottom-up approach

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Concepts of agility and bottom-up approaches are relevant for global tech governance

    ,

    Multistakeholderism, human rights and multidisciplinarity remain as relevant as ever in addressing the next generation of disruptive technologies

    Calls to Action

    Explore and expand agility models in international institutions

    ,

    Engage in multistakeholder discussions about the future we want, bearing in mind diverse technical, legal and cultural values

    Session Report

    The session was divided into three parts:

    1. Disruption: addressing the challenge brought about by disruptive emerging technologies

    2. Agility: exploring whether and how international institutions can be agile, with an emphasis on bottom-up approaches

    3. Common principles: searching for shared values and principles that can guide international institutions going forward

    1. Disruption

    The international policy challenge brought about by disruptive technologies includes both a substantive challenge and a geopolitical one. With respect to the substantive challenge, there is a tension between two opposite approaches. One approach suggests that there is nothing fundamentally different about new technologies, and that over time the international community will address the issues. At the other extreme, one might be tempted to see new technologies as fundamentally problematic and warranting international regulation. From the panel, there emerged a need for balance. On the one hand, it is important to keep in mind the history of international policy responses to technological developments, paying attention to recurring patterns; new technologies often trigger alarmist responses, so any action needs to build on fact-based, informed decision-making. On the other hand, the enormity of the challenges brought about by emerging technologies should not be understated. For example, as opposed to the early stages of the development of the internet, which was led by academia and government, nowadays the private sector has a much larger impact on the development of new technologies. One solution for achieving this balance might be finding a new, original perspective on how to approach the rapidly evolving technology.

    To address the geopolitical challenge, policy-makers should try to cooperate with those who understand the need to transcend geopolitics in order to foster a common good.

    Multistakeholder-based policy discussions and exchanges are seen as crucial in enabling the international community to navigate both the substantive and geopolitical challenges. In that respect, the IGF and the model it embodies play a key role.

    2. International institutional agility and bottom-up approaches

    There was a consensus that international institutions ought to strive to be agile to the extent possible, in order to adapt to the rapid pace of technological change. However, the challenge for international institutions is to act with agility while maintaining a high degree of depth, accuracy and transparency. Adopting bottom-up mechanisms can help. One example of this is airline industry regulation, which developed organically based on stakeholders’ common challenges and goals.

    It was suggested to focus on agile processes rather than attempting to define in advance what the substantive outcome should be. In that respect, as a practical approach, instead of tackling large issues horizontally, which often requires tremendous resources and takes time, it might be useful to break up a given subject into smaller components that can be tackled individually. The UK’s AI Safety Summit is an example of this approach. Another example is the OECD's work in developing the AI Principles: the process was relatively quick, with involvement from multiple stakeholders, and the Principles have become widely used. In addition, the OECD maintains a repository of about 400 experts that are selectively called upon to provide input on specific topics. This allows the OECD to produce outputs in an agile manner.

    The IGF's Best Practice Forums are another example of how an international institution can create flexible spaces in which individuals from diverse backgrounds but with similar interests can coalesce and create valuable policy work.

    In that regard, multi-stakeholderism can be harnessed to empower institutional agility.

    The above examples underscore that future-proofing global technology governance need not be "high-tech": there are some basic mechanisms that can serve as models to enable bottom-up approaches and greater institutional agility.

    3. Common principles

    Principles of transparency, openness, accountability, inclusivity, a commitment to human rights (with particular attention to the impact of technologies on vulnerable groups), and trustworthiness of technology are relevant in discussions on disruptive technologies, at a high level. These principles should ideally be embedded in the work of international institutions. The question is how to apply these principles in practice, in a context-sensitive manner.

    A related issue is fragmentation of efforts: multiple international organizations dealing with similar topics, in parallel, and issuing similar, complementary or conflicting outputs (recommendations, declarations, etc.). Is this fragmentation a feature or a bug? It was suggested that, to the extent that such fragmentation produces a diversity in outputs, it can be positive. However, if it results in a duplication of efforts by international institutions, then it should be avoided. This is essentially a resourcing question.

    Going forward, a number of suggestions were raised. One was to improve accessibility of information on various technologies and their respective impacts, so that stakeholders across the globe benefit from high-quality data to inform policy. One potentially ground-breaking idea was to prompt a re-thinking of how international institutions go about addressing technological issues. Global policy-making mechanisms that we are familiar with today evolved in the 19th century, and some of them might need to be updated. For example, it might be useful to think about global stakeholder-based policy making, based upon shared needs of similar stakeholders across the world, as a new mode of global governance.

    Conclusion

    There emerged a common understanding that international institutions, and the core values they embody and promote, remain as relevant as ever in addressing the challenges posed by emerging technologies, but that there is a need to improve – and perhaps re-think – how they operate. To that end, existing examples of agile institutional governance and bottom-up approaches should be studied and scaled up to the extent possible. Moreover, mutual understanding and the sharing of procedural know-how should be promoted.

    IGF 2023 Town Hall #63 Impact the Future - Compassion AI

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    The Compassion AI approach is the appreciation of humans’ and all beings’ lives in a state of oneness in diversity. It is technically feasible to design and operationalize Compassion AI. We need positive liberty, positive actors and intercultural exchange to shift fears into compassion across the life cycle of AI. The SDGs should be revised to include the technological and compassion approach, enabling individuals to be resilient.

    Calls to Action

    The SDG Agenda should be revised to prioritize the UNESCO Recommendation on the Ethics of AI, a wellbeing society, and the empowerment of individuals’ resilience. In parallel, an AI Guardians society needs to be built to frame the Compassion AI Bridge Charter.

    Session Report

    Robert Kroplewski, a key figure in AI policy, opened a town hall panel to delve into Compassion AI, highlighting the need for ethical frameworks in AI development. He introduced a panel of global experts, both in-person and online, from various sectors, including Hanson Robotics and the GAIA Foundation. The session aimed to bridge the gap between ethical aspirations and the practical implementation of AI, with a focus on creating a balanced ecosystem. International efforts, legal frameworks, and policy recommendations were discussed, emphasizing the importance of a compassionate and inclusive approach to AI. The panel also addressed the challenges and disparities in access to knowledge and resources, advocating for a more equitable and responsible AI landscape. The goal was to foster an environment that benefits both humanity and the planet, ensuring the ethical development and deployment of AI technologies.

    EDWARD PYREK

    The speaker emphasizes the critical juncture humanity is at regarding AI development, urging careful decision-making to avoid repeating the mistakes made during the internet's inception. They advocate for a global, ethical AI, acknowledging the challenges posed by cultural and religious diversity but highlighting compassion as a universal value. The Global Artificial Intelligence Alliance (GAIA), introduced to the public in 2021, was formed to champion decentralized, compassion-based AI. The speaker also introduces the Virtual Florence initiative, a multidisciplinary group of experts working together to shape the future of AI. They stress the importance of addressing the prevalent fear of AI and the future, aiming to teach compassion to both humans and AI through developed models and tools. The speaker criticizes the often superficial discussions at conferences, emphasizing the need for tangible products and impactful actions, and points to the upcoming AI Impact Summit in Salzburg in March 2024. They highlight the potential of AI in solving global issues, provided it is decentralized and compassion-based. The speaker also introduces the GAIA Guardians platform, aiming to unite individuals towards creating decentralized AI, and emphasizes the need for collaborative efforts from all societal levels to truly impact AI's future.

    Three Most Important Takeaways:

    1. Urgency and Responsibility in AI Development: We are at a crucial point in AI development, and careful, ethical decision-making is paramount to avoid repeating past mistakes and ensure a positive impact on society.

    2. Compassion as a Universal Value for AI: Despite the diversity in cultural and religious beliefs, compassion is identified as a common thread, essential for the development of a global, ethical AI framework.

    3. Multidisciplinary Collaboration and Addressing Fear: The future of AI should be shaped collaboratively by experts from various fields, not just IT professionals. Addressing and working with the prevalent fear of AI is crucial in teaching compassion and ensuring responsible development. Additionally, tangible actions and impactful initiatives, such as the AI Impact Summit and the GAIA Guardians platform, are vital for creating a decentralized, compassion-based AI future.

    EMMA RUTTKAMP-BLOEM 

    The speaker delves into the significance of AI technology, its rapid advancement, and its potential impacts on human agency and autonomy. They emphasize the need for human-centered technology and highlight the dual nature of AI, having immense potential for both good and harm. The UNESCO recommendation on AI ethics is presented as a global instrument aiming to address these challenges, promote responsible governance, and reduce inequality. The recommendation outlines values, principles, and policy actions to guide stakeholders towards ethical AI development, emphasizing inclusion, gender equality, and environmental protection. The speaker introduces the concept of compassionate AI, linking it to positive liberty and the capability approach, advocating for ethical entitlements that enable human flourishing and establish AI ethics as a dynamic mediator between technology and humanity.

    Three Most Important Takeaways:

    1. Rapid Advancement and Dual Nature of AI: AI technology is advancing quickly, impacting all facets of human life. It has the potential to both significantly benefit and harm society, necessitating careful consideration and ethical guidelines to maximize its positive impacts and minimize potential harms.

    2. UNESCO Recommendation on AI Ethics: This global instrument provides comprehensive guidelines, values, principles, and policy actions to ensure responsible AI development and governance. It emphasizes the importance of inclusion, gender equality, environmental protection, and international cooperation.

    3. Compassionate AI and Positive Liberty: The speaker introduces compassionate AI, linking it to the concept of positive liberty and the capability approach. Ethical entitlements are necessary to enable human flourishing and establish AI ethics as a mediator between technology and humanity, translating abstract principles into actionable duties for all AI actors.

    DAVID HANSON

    The speaker discusses the profound impact of AI on human lives, emphasizing its potential to both understand and enhance human existence. AI, being bio-inspired and capable of uncovering hidden patterns in human data, may one day be considered sentient and deserving of respect. The rapid advancements in AI, particularly in the corporate sector, have led to transformative discoveries, such as DeepMind's AlphaFold, which has revolutionized the biosciences. The speaker highlights the interplay between academia, policy, and the corporate sector in advancing AI technology. They define compassion simply as the appreciation of life in all its forms and interdependencies, drawing parallels between this concept and the development of AI. The speaker urges for the creation of technologies that enhance human caring and compassion, emphasizing the need for a holistic approach that considers long-term sustainability and the well-being of future generations. They pose a critical question on how different sectors can collaboratively work towards developing AI technologies that serve the greater good and promote compassionate interactions.

    Three Most Important Takeaways:

    1. Bio-Inspired AI and Sentience: The speaker highlights the bio-inspired nature of AI and its potential to become sentient, autonomous beings deserving of respect. This notion challenges our current understanding and interaction with AI, urging us to consider the ethical implications of such advancements.

    2. Transformative Discoveries and Corporate Advancements: The rapid advancements in AI, particularly in the corporate sector, have led to transformative discoveries, such as contributions to understanding human proteins. These advancements underscore the potential of AI to revolutionize various fields and improve human understanding of life.

    3. Compassion as a Core Value in AI Development: The speaker defines compassion as the appreciation of life in all its diversity and interdependencies. They emphasize the need for AI technologies to enhance human caring and compassion, urging for collaborative efforts across sectors to develop AI that serves the greater good and promotes compassionate interactions.

    MARC BUCKLEY

    The speaker reflects on the pivotal moment in human history marked by the advent of AI and emerging technologies, drawing parallels with past technological transformations that ushered in new epochs. They emphasize the need for humanity to transition from the Anthropocene to a new era, highlighting the role of AI in guiding this transformation with compassion, ethics, and accumulated human wisdom. The speaker points out the unique global collaboration represented by the Sustainable Development Goals (SDGs), describing them as an unprecedented Earth shot initiative agreed upon by 197 countries. However, they also identify challenges, such as the lack of collective intelligence and AI to consolidate knowledge and mediate between diverse perspectives. The speaker raises concerns about the domestication of technology versus human beings and the sacrifices humanity might make for technology. They underscore the potential of AI to exponentially drive progress towards the SDGs, emphasizing the associated new ecological economic model, which requires significant financial investment. The speaker concludes by aligning with previous thoughts on the importance of approaching these challenges and opportunities ethically and effectively to achieve the SDGs in the shortest possible time.

    Three Most Important Takeaways:

    1. AI as a Guiding Force for a New Epoch: The speaker highlights the critical role of AI and emerging technologies in guiding humanity towards a new age, emphasizing the need for compassion, ethics, and the integration of accumulated human wisdom to avoid repeating past mistakes.

    2. Global Collaboration and Sustainable Development Goals: The speaker points out the unique and unprecedented global collaboration represented by the SDGs, describing them as a collective moonshot for humanity. However, they also highlight the challenges posed by the lack of collective intelligence and AI to mediate and guide these efforts.

    3. New Ecological Economic Model and AI’s Role: The speaker introduces the SDGs as a new ecological economic model, requiring significant financial investment and ethical consideration. They emphasize AI’s potential to exponentially drive progress towards these goals, ensuring that the approach is aligned with ethical standards and effective strategies.

    TOM EDDINGTON

    The speaker draws a parallel between the current state of AI development and the mythological moment when Prometheus brought fire to humanity, emphasizing the transformative yet risky nature of AI. They highlight the nascent stage of AI in business, pointing out that while billions are being invested and business models are still being defined, the primary focus remains on commercialization and profit-making. The speaker warns that this narrow perspective could lead to negative consequences similar to those observed in climate change, urging for a more foresighted and ethical approach. The speaker emphasizes the need for guiding frameworks, such as an AI charter, to inform business decision-making and model creation, ensuring that the development and deployment of AI technologies align with ethical standards and consider their impact on humanity. They suggest looking to various models, including public health, virology, WarGames, and cybersecurity, to understand and mitigate the potential risks associated with AI. The speaker concludes by highlighting the urgency of mastering our ethics and self-discipline to manage AI's uses responsibly, ensuring that the technology fulfills its promises without posing undue risks.

    Three Most Important Takeaways:

    1. Transformation and Risk in AI Development: The speaker underscores the transformative potential of AI, comparing it to Prometheus bringing fire to humanity, while also highlighting the associated risks and the need for careful and ethical management.

    2. Commercialization Focus and Need for Ethical Guidance: The speaker criticizes the current business focus on AI commercialization and profit-making, advocating for a broader perspective that includes ethical considerations and the potential impact on humanity. They call for the development of guiding frameworks, such as an AI charter, to inform business models and decision-making.

    3. Urgency for Ethical Mastery and Responsible Management: The speaker emphasizes the need for industries and developers to adopt a mindset that prioritizes ethics and self-discipline in the development and deployment of AI technologies, ensuring responsible management and minimizing potential risks.

    MARCO GROBELNIK

    The speaker provides a nuanced perspective on the rapid development of Artificial Intelligence (AI), drawing a parallel to the Prometheus moment in mythology. They highlight the challenges faced by regulators and the industry in adapting to the swift advancements, particularly after the significant "chat GPT moment" in late 2022. The speaker emphasizes the intense competition among tech giants and geopolitical entities, stressing the importance of balancing commercial interests with ethical considerations and societal values. They express optimism about the potential of AI to contribute to compassionate interactions and societal cohesion, advocating for a more philosophical and value-driven approach in the development and application of AI technologies.

    Three Most Important Takeaways:

    1. Rapid Advancements and Regulatory Challenges: The speaker underscores the challenges faced by regulators and the industry in keeping pace with the rapid advancements in AI. They point out the need for a balance between democratic and normative approaches to regulation, ensuring that AI develops in a way that is beneficial and safe for society.

    2. Commercialization and Geopolitical Competition: The intense competition among major tech companies and between geopolitical entities is highlighted as a significant aspect of the current AI landscape. The speaker stresses the importance of not losing sight of ethical considerations and societal values in the pursuit of commercial success and technological dominance.

    3. Potential for Compassionate AI: The speaker is optimistic about the capabilities of current AI technologies, particularly in text understanding and reasoning, as a foundation for developing compassionate AI. They advocate for integrating societal values and ethical considerations into AI systems, ensuring that they contribute positively to human interactions and societal cohesion.

    Questions and Answers:

    1. Guest 1:

      • Question: Raised concerns about defining compassion in AI, questioning the limits of compassionate actions and their implications on human development. Used the example of manipulating sheep genetics to cure human cancer to probe the boundaries of compassion.
      • Answer: David Hanson and Edi Pyrek responded, emphasizing the need to balance ethical considerations and the potential of AI to enhance human compassion. They suggested that AI could potentially lead to a higher understanding and implementation of compassion.
    2. Guest 2:

      • Question: Asked if the current level of compassion in AI is capped by human developers' compassion, particularly those with significant financial resources.
      • Answer: Marco Grobelnik acknowledged the influence of big tech companies but expressed optimism about the future and the potential for positive change.
    3. Guest 3:

      • Question: Inquired about including children's voices in the AI development process, especially considering the risks AI poses, such as generating child sexual abuse materials.
      • Answer: David Hanson highlighted the importance of strong guardianship and inclusive values to protect children and ensure their voices are heard.
    4. Guest 4:

      • Question: Questioned how to eliminate quarterly earning requirements for corporations, drawing a comparison with European countries.
      • Answer: Robert Kroplewski suggested a shift towards more responsible and trustworthy business practices.
    5. Guest 5:

      • Statement: Emphasized redefining intelligence as compassion and expressed optimism about the future of compassionate AI.
      • Response: The panel did not provide a direct answer but the statement added a valuable perspective to the discussion.
    6. Guest 6:

      • Statement: Stressed the need for intentional architecture of compassion in AI development.
      • Response: The panel acknowledged the statement, aligning with the call for intentional and ethical AI development.
    Different Points of View:
    • Ethical Considerations: There was a strong emphasis on ethical considerations in AI, with different speakers highlighting the need for balance, strong guardianship, and intentional development.

    • Compassion in AI: The definition and implementation of compassion in AI were central themes, with discussions on how to practically achieve compassionate AI and the potential benefits.

    • Influence of Big Tech: The role and influence of big tech companies in AI development were acknowledged, with varying opinions on the current state and future prospects.
    Key Findings:
    1. Need for Clear Definitions: The session highlighted the need for clear definitions and boundaries regarding compassion in AI, ensuring that ethical considerations are central to development.

    2. Importance of Inclusivity: The inclusion of diverse voices, particularly those of children, was emphasized as crucial in developing compassionate and ethical AI.

    3. Optimism for the Future: Despite the challenges and ethical dilemmas presented, there was a general sense of optimism about the potential of AI to contribute positively to society, provided that intentional and ethical development practices are followed.

    IGF 2023 WS #299 Community-driven Responsible AI: A New Social Contract

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    The definition of communities and collectives can vary widely and is fluid in space and time (more so than in the past); community-centered governance approaches must therefore be agile and adapt accordingly.


    Individual rights are not enough to harness the benefits and address the harms and risks of AI innovation: it is essential to look at the implications of AI for collectives and at how they interact with and affect each other.

    Calls to Action

    AI innovation requires a reflection on the social contract: it is not enough to engage with communities, those in power must also facilitate and enable the effective and meaningful implementation of governance solutions.


    There is a dire need to facilitate and enable the capacity of communities to engage meaningfully and contribute to AI innovation. Incentivisation is key; communities need to be informed and empowered by those in positions of privilege and power.

    Session Report

    Introduction

    As AI progress proceeds at breakneck speed, companies, governments and international bodies are recognising that new norms and more inclusive and equitable approaches are needed to measure the impact of these technologies, mitigate risks of harm, and ensure their responsible development and use. Building on multidisciplinary research on AI governance, Chatham House, with the Office of the UN Secretary-General's Envoy on Youth (OSGEY) and Google, hosted a panel discussion to foster an inclusive and informed public debate, and policy engagement, on how collectives - countries, communities and companies - can frame and guide the responsible development of AI technologies.

    The discussion focused on several questions, including: 

    • What is the role of ‘powerful’ actors, such as governments and the private sector, in the governance of AI development and use? 

    • What community-led efforts are in place to govern the responsible development of AI? 

    • How should communities be engaged and what incentives are there for their participation in AI governance? 

    • What are some of the main considerations communities ought to take into account when devising and implementing governance approaches and solutions in the AI space?

    Establishing a common understanding around key themes, questions and risks, and ensuring diverse and systematic input regarding responsible AI development through this session, will ultimately contribute to global efforts at ensuring that these technologies are built for all, by all, and empower all.

    AI Governance: A multi-faceted approach

    The discussions included perspectives from the private sector, international organisations, and civil society on what ongoing efforts are in place to engage communities in the responsible development and deployment of AI technologies and their subsequent governance. 

    Key initiatives from across sectors and geographies have emerged, including the OSGEY’s work in engaging with the youth on AI governance issues: the youth are, indeed, arguably the largest and most connected generation. They are bound to inherit current policies, decision-making, and systemic issues; hence there is a critical need - and desire, from the youth - to ensure their representation in decision-making processes and, ultimately, a sense of control over their digital future. Greater inclusivity is a necessary response to the ongoing lack of trust and assurances with regard to the development and deployment of these technologies. The successful inclusion of the youth in AI governance will require intergenerational support and multistakeholder allyship. 

    A couple of notable industry-led initiatives also emerged from the discussions. Google Search’s Quality Raters is a select group of individuals across the globe, trained under Google’s set of guidelines, who help stress-test and refine the quality of search queries. This programme is a key example of how technology companies can, by proxy, engage communities, and of the value of established processes in the roll-out and testing of products and subsequent changes.

    In addition, because research and development in voice recognition is conducted predominantly in English, ‘non-priority’ languages tend to be reflected neither in products nor in the data needed to develop and train these programmes in the first place. In response, Mozilla’s Common Voice initiative seeks to overcome this limitation through extensive community building and engagement, one notable example being with Kiswahili-speaking groups. These engagement opportunities take many forms, including competitions for students; partnerships with grassroots groups across Kenya; ‘community champions’; and collaboration with linguists to capture the many dialects and variants of Kiswahili.

    Key considerations for the way ahead

    It is clear that in order to leverage and maximise the benefits of AI technologies, governance solutions ought to think of their implications both for individuals and, most of all, collectives. As initiatives to engage with communities in the responsible development and deployment of AI technologies emerge, key considerations have emerged to inform future efforts: 

    Effective community engagement requires enabling environments conducive to meaningful solutions. There is a particular emphasis on those at the ‘powerful’ end of the new social contract (i.e., governments and big technology companies) facilitating such environments through, for example, capacity building, established processes, incentivisation, and the creation of opportunities to address other pressing issues such as climate change, healthcare access, and disability exclusion.

    The definition of communities is fluid in time and space. Individuals can belong to more than one community at the same time; and communities span across geographies, interests, languages and shared history, among others. As such, there is a need to reconsider and re-evaluate the social contract in light of AI development and the role and place of communities.

    AI must not be mistaken for a monolithic technology; it is highly contextual. The nature of the technology’s impact - and the outcome of subsequent policy responses - will change depending on what is at stake. For example, the human rights implications of AI technologies will differ depending on the technological application, as well as the wider societal context in which it is deployed. As such, governance solutions must be sustainable, agile, attentive to both existing and long-term risks, and strive to foster both horizontal and vertical opportunities.

    There is no easy solution; safe experimentation is key. Concrete implementation of governance measures requires extensive research and experimentation to define what works and to ensure that solutions are agile, trustworthy, and meet the needs of communities. This experimentation must, however, be done in a safe environment: many lessons can be drawn from user research practices.

    Trust is multi-faceted. There are two particular aspects to establishing trust: trust by communities, and trust in the product and its outputs. Trust by communities can be established through greater ownership of the products’ development and the many benefits they bring, as well as through meaningful engagement and implementation of policy measures and solutions. Establishing trust in the tool requires, first, reflection on a number of elements and on how to adapt technical solutions accordingly; one notable example concerns users’ perception and how labels affect a product’s trustworthiness in the eyes of users. The question of trust is particularly relevant as AI technologies are at risk of being deployed and used in disinformation campaigns, a concern of increasing importance given upcoming election cycles, increasing polarisation, conflict, and risks of harm against vulnerable communities.

    IGF 2023 Open Forum #60 Empowering Civil Servants for Digital Transformation

    Updated:
    AI & Emerging Technologies
    Key Takeaways:
    Establishing a multi-stakeholder platform like the UNESCO Dynamic Coalition at both international and national levels is of paramount importance as a precursor to embarking on capacity building initiatives, particularly for countries pursuing digital transformation.

    There is an enormous will among diverse organisations and stakeholders to collaborate in the form of a DC and to share experiences from all parts of the globe.
    Calls to Action

    Establishing a multi-stakeholder platform like the UNESCO Dynamic Coalition (DC).

    Building a network of experts that can support and provide technical assistance and learn from each other's best practices.
    Session Report

    Session Report: UNESCO Open Forum on Capacity Building for Civil Servants in Digital Transformation 

    Date: 10 October 2023, by UNESCO and GIZ's Fair Forward Team

    The open forum on capacity building for civil servants in the context of digital transformation, jointly organized by UNESCO and GIZ's Fair Forward team, brought together a diverse group of experts from around the world. The primary goal of the session was to facilitate an open dialogue and knowledge exchange among participants and to explore the possibility of establishing a Dynamic Coalition (DC) for sharing best practices in capacity building initiatives. The session also aimed to build, through this DC, a group that develops new content and knowledge products which organizations and partners globally can leverage to support governments in their digital transformation efforts.

    The Open Forum aimed to:

    1. Convene a Network of Digital Leaders: The primary objective was to bring together a diverse network of digital leaders, including experts and stakeholders deeply engaged in dialogues centered on capacity building initiatives and best practices for digital transformation among civil servants.

    2. Interactive Discussion and Examination: The session intended to facilitate interactive discussions and examinations of the challenges, requirements, and solutions related to digital capacity building as articulated by the stakeholders.

    3. Development and Reinforcement of a Dynamic Coalition: The goal was to develop and reinforce the scope, membership, and roadmap of the Dynamic Coalition. This coalition was envisioned to serve as a catalyst for global discussions and the dissemination of best practices in digital transformation and capacity building for civil servants, grounded in real-world examples.

    The participants around the table represented a diverse group of global stakeholders, including governments, civil society, intergovernmental organizations, academia, the private and tech sectors, civil servants, media, and more. The session was formally opened by a keynote address from a Japanese government representative and a representative of AI4Gov, emphasizing the importance of capacity building and the technology challenges faced by governments. Several challenges were highlighted, including network connectivity issues with platforms like Zoom, procurement challenges related to technology, varying levels of tech proficiency within the government workforce, the adoption of AI with its associated risks, and the preference for on-premise data services over cloud solutions, with the Japanese government expressing appreciation for UNESCO's capacity building efforts. The discussion was a high-level and dynamic exchange among the various stakeholders.

    The session successfully initiated a dialogue and highlighted the pressing need for addressing the challenges posed by digital transformation. The importance of forming a sustainable coalition, sharing best practices, creating knowledge products, and supporting governments in their digital transformation journey was emphasized. 

    In conclusion, the session underscored the importance of addressing the multifaceted challenges and opportunities in digital capacity building for civil servants. The range of discussion topics explored, including sharing best practices, fostering interagency collaboration, establishing digital skill frameworks, and engaging in public-private partnerships, emphasized the need for a collective and holistic approach. Inclusivity, impact measurement, effective governance, and the role of digital leadership were integral components of the dialogue, highlighting the comprehensive nature of capacity building efforts. Resource allocation and international collaboration emerged as critical considerations, while the cultivation of a digital culture within government organizations was recognized as a key driver for success. Acknowledging challenges and formulating solutions, as well as the importance of government-wide strategies, data security, and stakeholder engagement, contributed to a robust framework for advancing digital capacity building among civil servants. These discussions set the stage for continued collaboration and knowledge sharing among stakeholders committed to enhancing the digital preparedness of government institutions.

    As a next step, participants were encouraged to collaborate and explore the formation of a coalition that can effectively address these challenges and support governments worldwide. The session concluded with a call to action for participants to work together to achieve these objectives and ensure civil servants are prepared for the digital age.

    IGF 2023 Open Forum #15 Protecting children online with emerging technologies

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Unlike traditional Internet applications, Internet applications driven by emerging technologies introduce intelligent capabilities. These technologies can provide greater well-being for children, for example through health monitoring, recommendation of quality content, and companionship for special groups. However, emerging technologies also bring many risks to children, such as unfairness and threats to data privacy and security.


    Immersive digital spaces (virtual environments that create a sense of presence or immersion for users), facilitated by AI, may inadvertently expose children to environments not designed for them, amplifying the risks of sexual grooming and exploitation. As technology evolves, immersive digital spaces are likely to become more widespread in all fields. There is therefore a need to further understand the implications and risks for children.

    Calls to Action

    It is recommended that the United Nations analyse the development of standards and regulations for assessing emerging technologies as they affect children, share knowledge and best practices, and provide a platform for multi-stakeholder exchanges on how to develop common principles for the emerging technologies that we want, and to ensure that the right institutions are in place to translate them into binding standards and regulations.


    It is suggested that the international community strengthen dialogue and cooperation based on mutual respect and trust. We could not tackle difficult issues such as illegal industries targeting children and hidden cyber threats without collaboration across regions and platforms. Together, we can build a community with a shared future in cyberspace that fosters the healthy growth of children.

    Session Report

     

    Open Forum #15, "Protecting Children Online with Emerging Technologies", aimed to explore how emerging technologies can be used rationally to prevent and respond to the various safety issues children encounter online, to examine in depth how to support the development of broader policies and interventions that keep children safe online, and to consider the availability of specific action plans.

    The forum invited panelists from government agencies, social organizations, universities and research institutes, enterprises and other sectors. An experienced moderator was selected to summarize and comment on each guest's points in a timely manner, keeping the forum rich in content, in-depth, and tightly focused on the theme, and helping it resonate with participants through authoritative and vivid presentation.

    Panelists conducted in-depth discussions on developing broader policies and interventions to protect children's online safety, focusing on how to make good use of the Internet and other emerging technologies so that information technology can maximize data integration and utilization while maximizing the protection of children's rights and safety online. Speakers approached the topic from management, technology and other perspectives, discussing methods and measures, presenting practical cases, and sharing recent international and domestic innovations, with the aim of providing a series of solutions for children's online protection; accessible explanations and vivid case introductions enhanced the appeal of the forum.

    Participating guests agreed that the Open Forum provides a useful reference for countries formulating protection policies and offers inspiration for promoting children's online safety globally and providing a healthy online environment. More than 100 guests from government departments, social organizations, universities and research institutions, news media, Internet companies and other organizations participated in the forum, online and on site.

    Although the forum was not able to hold a Q&A session on site due to time constraints, after the meeting the speakers had a fuller exchange with the audience, focusing on how to make better use of Internet technologies, especially generative artificial intelligence and other emerging technologies. They held in-depth discussions on specific policy measures developed and implemented by various countries, such as providing a youth mode in smartphones, applying emerging technologies to online content auditing so as to accurately identify content that infringes on children's privacy and to filter content inappropriate for children, and setting up mechanisms for reporting and for handling those reports.

    IGF 2023 Open Forum #86 Child participation online: policymaking with children

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The impact of children and young people's perspectives on shaping new online environments is more significant than commonly acknowledged. The key advancement lies in ensuring that decision-makers listen to their voices.


    It is imperative to acknowledge and encourage the active engagement of children and young people as collaborators, innovators, and drivers of positive transformation within the digital realm.

    Calls to Action

    We need the stakeholders to actively engage in promoting a comprehensive strategy for digital education and cybersecurity. This strategy must encompass not only technical competencies but also address critical domains like data and media literacy, in addition to concerns related to protection.


    We should develop a collaborative effort and enhance the skills and capabilities of all relevant stakeholders, which include children, youth, women, girls, educators, caregivers, policymakers, and ICT industry representatives. It's a shared responsibility to fulfil our roles and capitalize on opportunities to create a digital environment that's not only safer but also more empowering for all users.

    Session Report

     

    IGF 2023  

    Session Report - Child Participation Online: Policymaking with Children

    The Internet Governance Forum (IGF) 2023 Open Forum on child participation online was organised by the International Telecommunication Union (ITU). The focus was not only to bring young people and children into the policy-making process but also to enable them to be part of the solution in matters that concern their safety and well-being online. Within the first round of the Forum debate, five stakeholders from different domains, ranging from academia and non-governmental and civil society organisations to the private sector, discussed the contribution of children to creating solutions to online challenges.

    It should be mentioned that ITU has been working on Child Online Protection since 2009 to facilitate the sharing of challenges and best practices among Member States. Since then, the involvement of children in the dialogue on children’s rights in the digital environment has increased so that a balance between protection and participation can be achieved. All the speakers underscored the importance of children’s overall engagement in the debate, as well as of their understanding of the proper use of the Internet and the promoted platforms, in order to better adapt the web to their needs.

    For example, according to the first speaker, Afrooz Kaviani Johnson, a UNICEF Child Protection Specialist, involving children in the elaboration of the Child Online Protection initiative is essential because children’s approach to digital technologies differs from that of adults. Working with children thus boosts the creativity and efficiency of projects, which can ultimately help to explore actual, not perceived, risks in the online environment.

    Amanda Third, a professor at Western Sydney University, saw the answer to the question of how to implement the Convention on the Rights of the Child in collaboration between adults and children. This practice was implemented in a project launched in cooperation with child-facing organisations from 27 countries to inform the drafting of the UNCRC General Comment. Within the project, children attended five-hour workshops where they talked about the things they have experience of. As a result of this collaborative effort between adults and children, General Comment No. 25 was produced, reflecting the real experiences of children more than before. Another outcome of collaboration with children, according to Amanda, was an online safety game app and a set of trainings for three different age groups, released by the ITU on 10 October.

    Boris Radanovich, a SWGFL Online Safety Expert, admitted that adults do not have enough experience to connect with what children are living through, which is why it proves effective to have a youth advisory board and to support various targeted projects launched by children all over the world.

    The fourth speaker, Courtney Gregoire, a Microsoft Chief Digital Safety Officer, revealed the fundamental reality that the digital environment was not originally designed for children. However, she claimed that with the changing realities, the ITU must strive to give young people a voice so that they can make greater use of their technological potential. 

    Hilary Bakrie, an Associate Programme Officer on Youth Innovation and Technology in the Office of the Secretary-General's Envoy on Youth, also recognised the importance of children and youth as equal partners, not as stakeholders who are occasionally consulted. She also promoted the POP (Protection through Online Participation) initiative. According to her, POP helps, firstly, to see how young people and children use the Internet to access protection support and, secondly, to scrutinise the role of peer-to-peer support. She believes these findings help to identify ways the Internet and online platforms can be used to create solutions that help children and young people stay safe.

    Within the second round of the debate, the speakers shared the works and frameworks being used at the national, regional, and global level to get young people and children to engage and be part of the policy making process where their voices and actions will be recognised. They also highlighted some of the issues faced by young people in smoothly navigating the policy-making process and getting the required acknowledgement and recognition. Questions from the participants were directed in the following core areas: 

    1. Information on Policy Transparency: STEM (Science, Technology, Engineering and Mathematics) and policy processes at the national, regional, and global levels need to include concrete questions about how to include children in policy.

    2. Child Participation: Youth and children need to be encouraged to engage voluntarily in a respectful and supported manner. This also includes providing training for their safety, as they work on extremely sensitive topics and content, and the risk factors need to be clearly mapped out.

    3. Accountability and Recognition: The energy of young people and children has been a game changer at the regional, national and global levels. Concrete examples were Tunisia, where open consultation with children helped shape the national Child Online Protection strategy, and the Philippines, where a long history of child participation has brought reforms on topics like early child marriage.

    4. Building Emergency Response Plans: Research has shown that user-friendly approaches make young people and children feel comfortable asking for help online. This creates a sense of belonging for children, as their voices are seen and heard.

    5. Funding and Investment: Building capability over time requires funding; developing regional programmes and global projects requires national adaptation strategies for Child Online Protection. For example, national task forces in five countries guide governments on how to implement the strategies within the initiative.

    6. Deep Dive into Online Safety and Limitations: Children and young people have the right to speak up on matters that concern them and to have their views respected. Children’s interaction with technology may differ from that of adults.

    IGF 2023 WS #350 Accessible e-learning experience for PWDs-Best Practices

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Vidhya, born blind, highlights the importance of access for persons with disabilities. She describes the e-learning challenges faced by PWDs: design that is not accessible to persons without sight, PDFs that are not legible, download difficulties caused by inaccessible tags/links on websites, and a lack of digital literacy. She calls on educational institutions for interventions such as assistive technologies.

    Calls to Action

    Universal design (UD) allows any student to learn with an equitable experience of access to knowledge. But challenges exist: the high cost of UD learning platforms, the mindset of older academic staff, and the fact that some institutions see UD as a solution for PWDs only (a 'nice to have'), even though UD and accessibility benefit everyone. Actions can be taken: negotiation, and meetings to explain and raise awareness of the e-learning challenges of PWDs.

    IGF 2023 WS #33 Ethical principles for the use of AI in cybersecurity

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    1) The use of AI/ML in cybersecurity can make important contributions to strengthening cybersecurity and resilience. However, its use must be responsible and sustainable. In this context, ethical principles are an important guideline that helps users of cybersecurity solutions understand, assess and weigh the use of AI/ML components in their own context.


    2) Ethical principles presented and discussed in the workshop should be further developed. Human control, transparency, safety, and privacy are of utmost importance. Kindly find the document with the ethical principles developed by Kaspersky here: https://box.kaspersky.com/f/f6a112eef3bd4a2ba736/?dl=1

    Calls to Action

    1) An international multi-stakeholder discussion on ethical principles for the use of AI in cybersecurity is needed. Perhaps the IGF can take that topic into account in the future work of the PNAI.


    2) In addition to ethical principles, a risk-based regulation on AI and international governance standards are needed.

    Session Report

    The rapid development of artificial intelligence has translated into many benefits in cybersecurity, enhancing the overall level of cybersecurity. Detection of and response to cybersecurity threats have become more efficient with the use of artificial intelligence (AI)/machine learning (ML) systems. While the opportunities brought by AI cannot be refuted, it can also be diverted for malicious purposes. In this context, Kaspersky deemed it crucial to open a dialogue on ethical principles for AI in cybersecurity. As UNESCO has issued recommendations on the ethics of AI, the growing use of AI/ML makes the need for ethical principles for AI systems in cybersecurity urgent.

    The session started with two polls: online participants were asked 1) whether they believed AI would reinforce or weaken cybersecurity, and 2) whether and how the use of AI in cybersecurity should be regulated. Participants agreed that AI systems are beneficial to cybersecurity and that, instead of creating specific regulations, lawmakers should seize the opportunity to reinforce existing cybersecurity regulations with specific provisions on AI.

    Noushin Shabab, Senior Security Researcher at Kaspersky, highlighted the risks and opportunities of AI/ML systems in cybersecurity and presented the six ethical principles for the development and use of AI/ML set out in Kaspersky’s newly published white paper:

    1. Transparency;

    2. Safety;

    3. Human control;

    4. Privacy;

    5. Commitment to cybersecurity purposes;

    6. Openness to a dialogue.

    For Professor Amal El-Fallah Seghrouchni, Executive President of AI movement (the Moroccan International Center for Artificial Intelligence), AI could indeed improve cybersecurity and defense measures, enabling greater robustness, resilience and responsiveness of systems. Yet, AI could also enable sophisticated cyberattacks to scale up, making them faster, better targeted and more destructive. Therefore, there was a need for ethical and regulatory considerations.

    Cyber Diplomacy Knowledge Fellow at DiploFoundation, Anastasiya Kazakova, discussed how cybernorms for responsible behaviour could be implemented. Despite the need and the willingness of policymakers to regulate AI, she underscored the lack of clarity with regard to AI/ML operations. Defining AI also appears to be a challenge faced by legislators. However, Anastasiya Kazakova recommended that AI regulations focus on outcomes rather than on technologies, to align with users’ most pressing needs and concerns. In her opinion, cybersecurity vendors could play a role in promoting a bottom-up approach through the adoption of self-regulation measures.

    Prof. Dr. Dennis-Kenji Kipker, Expert in Cybersecurity Law at the University of Bremen, questioned the need for specific regulations on the use of AI in cybersecurity. At the forefront of cybersecurity regulation, European lawmakers have avoided naming specific technologies in existing regulations (the NIS 2 Directive) or in draft legislation (the Cyber Resilience Act).

    Members from the audience expressed apprehension about the practicality and effectiveness of using ethical AI in defending against unethical adversarial AI. Furthermore, they emphasized the significance of identity in the realm of security and the crucial role that AI can play in safeguarding identities.

    At the end of the session, participants agreed on the relevance of ethical principles in cybersecurity, as they represent an important guideline that helps users of cybersecurity solutions understand, assess and weigh the use of AI/ML components in their own context. A multi-stakeholder discussion on ethical principles for the use of AI in cybersecurity is now needed to establish international governance standards.

    IGF 2023 Lightning Talk #81 Canadian data, global lessons: Here's what we can do to improve cybersecurity

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    There’s a concerning mismatch between the real cyber threats users face online and whether they’re taking steps to mitigate and counter them.


    The increasingly connected nature of operational technology, and the long technology lifecycle of critical infrastructure, introduces new ways for attackers to access and disrupt the systems we rely on. By leveraging automation and artificial intelligence tools, cyber criminals can exploit these vulnerabilities and evolve their tactics faster than major infrastructure upgrades.

    Calls to Action

    Citizens should protect themselves from most cyber threats by practicing good cyber hygiene—for example, updating software, not clicking on links in suspect emails, running a firewall, and more.


    Jurisdictions should legislate to establish a baseline level of cybersecurity in critical infrastructure sectors; however, alongside other safeguards, there need to be strong oversight frameworks to ensure these powers are used appropriately.

    Session Report

    At IGF2023 in Kyoto, Byron Holland (President & CEO, Canadian Internet Registration Authority) delivered the lightning talk “Canadian data, global insights: What we can do to improve cybersecurity” to an in-person audience. The lightning talk was moderated by Charles Noir (Vice-President Community Investment, Policy & Advocacy, CIRA). Audience members were geographically diverse, including members living in Australia, Saudi Arabia, and the European Union.

    Below are select key insights from the session, including from the delivered comments and audience contributions:

    • CIRA offers a unique perspective as the organization behind the .CA domain, used by 3.3 million Canadians, and a provider of a variety of DNS, cybersecurity, and registry services. CIRA also publishes a range of research reports focused on how organizations and users perceive and respond to cyber threats.

     

    • CIRA’s published research shows that 75% of Canadians are concerned about malware when using the internet, up from 66% in 2022. At the same time, about a fifth of Canadians say they’ve been the victim of a successful cyberattack, yet only about one-third report using tools or services to increase their privacy and security online. There’s a real mismatch between the cyber threats users face online and whether they’re taking steps to mitigate and counter them.

     

    • When it comes to organizations, CIRA’s published research shows similar trends and patterns, but also some important differences. A substantial share of the Canadian organizations surveyed as part of CIRA’s annual cybersecurity survey are being targeted by bad actors—forty-one per cent had experienced a cyber attack in the last 12 months.

     

    • CIRA has also observed trends that suggest attacks on organizations are becoming more and more complex. The ‘Simda’ botnet, which uses anti-detection tools to evade discovery, is the number one piece of malware being used against CIRA’s customers.

     

    • At the same time, as many audience members noted, it’s now widely understood that bad actors can and do use cyberattacks to penetrate and disrupt the critical infrastructure that underpins most of our economic and social activities. Sectors like health and telecommunications are essential to the everyday lives of citizens, making them attractive targets for malicious actors. The stakes for defending critical infrastructure networks couldn’t be higher.

     

    • Audience members agreed that many citizens may feel that they don’t have the skills or tools to protect themselves online. But there was recognition that citizens can protect themselves from most cyber threats by taking a handful of relatively simple actions—for example, updating software, not clicking on links in suspect emails, running a firewall and more. Audience members were encouraged to promote good cyber hygiene practices (an illustrative sketch of one such check follows this list).

     

    • Jurisdictions can also legislate to establish a baseline level of cybersecurity in critical infrastructure sectors—there just need to be strong oversight frameworks to ensure these powers aren't abused. In Canada, the government is pushing forward Bill C-26, An act respecting cybersecurity, which will introduce new cybersecurity requirements across federally regulated sectors. This kind of legislation will encourage the networks that we all depend on to improve their cybersecurity posture.
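
    To illustrate the cyber hygiene point above in concrete terms, the sketch below shows one simple heuristic that a user-education tool or mail add-on might apply to flag suspect links before they are clicked. It is not drawn from the talk; the rules and the watch-listed domain endings are hypothetical examples, and a real phishing filter would be far more sophisticated.

```python
# Illustrative only: a toy heuristic for flagging suspect links in an email.
# The watch-listed endings and the rules are hypothetical, not from the talk.
from urllib.parse import urlparse

SUSPICIOUS_ENDINGS = (".zip", ".top", ".xyz")  # hypothetical watch-list

def looks_suspicious(link_text: str, href: str) -> bool:
    """Return True if a link deserves a second look before clicking."""
    parsed = urlparse(href)
    host = parsed.hostname or ""
    if parsed.scheme != "https":           # plain HTTP or unusual schemes
        return True
    if host.endswith(SUSPICIOUS_ENDINGS):  # watch-listed domain endings
        return True
    # Visible text that looks like a domain but does not match the actual
    # target is a classic phishing lure.
    if "." in link_text and link_text.strip().lower() not in href.lower():
        return True
    return False

# The visible text names one site; the link points somewhere else entirely.
print(looks_suspicious("mybank.example", "http://login.attacker.test/"))  # True
```
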
    IGF 2023 WS #54 Equi-Tech-ity: Close the gap with digital health literacy

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Despite significant advancements in digital health technologies, addressing the digital divide and enhancing digital health literacy remain critical challenges. Efforts should focus on developing comprehensive frameworks, assessment tools, and strategies to ensure equitable access to digital health resources for marginalized populations.


    Collaborative initiatives and innovative policy solutions are essential in bridging the digital divide and promoting health equity through digital health. Stakeholders must work together to develop and implement effective policies and programs that empower communities and improve digital health literacy, ultimately advancing healthcare outcomes for all.

    Calls to Action

    We should forge international partnerships between governments, healthcare providers, technology companies, and educational institutions to develop comprehensive digital health literacy frameworks and tools, ensuring equitable access to healthcare through technology.


    We should take action to address the unique needs and challenges faced by marginalized populations in accessing digital health resources. Support initiatives that promote inclusivity and bridge the digital divide, ensuring health equity for all.

    Session Report

    IGF 2023 WS #54 Equi-Tech-ity: Close the gap with digital health literacy REPORT

    Digital health is a rapidly evolving field that holds great promise in improving healthcare outcomes and addressing disparities in access to care. This session provided key themes and perspectives surrounding digital health, technology, and social determinants, drawing insights from experts and thought leaders in the field. It emphasized the importance of responsible research and innovation, ethical considerations, youth participation, and the role of technology in emergencies. Additionally, the session underscored the need for collaboration, inclusivity, and the active involvement of ethicists in shaping the future of digital health.

    One of the central themes discussed is the Responsible Research and Innovation (RRI) Framework. This framework seeks to harmonize technological progress with ethical principles, ensuring that digital health technologies align with societal values and respect digital rights. Experts stress the need for policies that uphold accountability and ethical considerations in the development of digital health solutions. It is essential to strike a balance between technological advancements and ethical principles, as industry objectives may sometimes compromise ethical concerns.

    The session highlighted the ethical dilemmas that can arise in competitive environments where efficiency, speed, and profit often take precedence. It emphasized the importance of adhering to ethical principles, even in fast-paced technological advancements. 

    According to the session, engaging youth in the realm of digital health is recognized as a pivotal strategy to bridge the digital divide and enhance digital health literacy. Young individuals can actively participate in research processes, ensuring that interventions are culturally sensitive and responsive to the unique needs of their communities. Innovation challenges and mentorship programs were also identified as effective tools to guide youth in developing their ideas and solutions. Furthermore, digital health literacy programs can equip young individuals with the necessary skills and knowledge to navigate the digital health landscape effectively.

    The session advocated for youth participation in Internet governance policies to ensure equitable access to digital health resources. It was revealed that young advocates can make their voices heard in discussions and decision-making processes, driving positive change and promoting inclusivity in healthcare. 

    Innovation hubs were suggested as collaborative platforms where young innovators, healthcare professionals, and policymakers can collaborate to create solutions for digital health challenges. These hubs benefit from the involvement of supportive companies and resources, filling innovation gaps and fostering meaningful advancements in the field.

    The importance of open science was also underscored, emphasizing the need for open access to data and research. The proposal by Costa Rica for an open science initiative to the World Health Organization (WHO) was recognized as a significant step towards facilitating collaboration and partnerships for the advancement of digital health technologies.

    The session acknowledged the pivotal role of technology in emergencies. It was emphasized that technology can protect healthcare professionals and patients during crises, offering vital support and resources to mitigate risks and ensure effective healthcare delivery.

    The session concluded by recognizing the value of ethicists in shaping the digital health landscape. Ethicists play a crucial role in ensuring that the development and deployment of AI technologies align with ethical considerations and respect for human values.

    In conclusion, this comprehensive session delved into various critical aspects of digital health and technology. It emphasized the importance of responsible research and innovation, ethical considerations, youth engagement, innovation hubs, the role of telemedicine and robotics in pandemics, open science, technology in emergencies, and the involvement of ethicists. 

    The insights gathered from this session underline the need for responsible, inclusive, and ethically sound development of digital health technologies. Collaboration, inclusivity, and the active engagement of the youth and ethicists are essential in shaping the future of digital health and improving healthcare outcomes for all.

    IGF 2023 WS #57 Lights, Camera, Deception? Sides of Generative AI

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    International Collaboration and Ethical Guidelines: International collaboration is crucial to establishing ethical guidelines and policies for the responsible use of generative AI technologies. By working together, stakeholders can harness the potential of these technologies for positive applications in various fields while addressing challenges related to misinformation and data integrity.


    Innovative Approaches for Responsible AI: Innovative interdisciplinary approaches and research are needed to improve the prevention, detection, verification, and moderation of generative AI content. These efforts can help mitigate the risks associated with deep fakes and generative AI, promote responsible data use, and contribute to a more secure and trustworthy digital environment.

    Calls to Action

    We urge governments and international organizations to prioritize the development and implementation of ethical guidelines and policies for the responsible use of generative AI technologies. This includes fostering collaboration between stakeholders from various sectors to promote positive applications, prevent misuse, and protect individuals' rights.


    We call upon researchers, academics, and innovators to focus their efforts on advancing interdisciplinary approaches and research to enhance the prevention, detection and verification of generative AI content. By fostering innovation and collaboration across fields, we can develop effective tools and strategies to combat the spread of manipulated content, safeguard digital integrity, and promote trustworthy information in the generative AI age.

    Session Report

    Lights, Camera, Deception? Sides of Generative AI | IGF 2023 WS #57 REPORT

    The report of the discussion about generative AI reveals several important points about how this technology can be harnessed for the benefit of society. The speakers emphasized the need to make generative AI technology more accessible and affordable, especially in rural areas. In these regions, internet connectivity can be a challenge, and ensuring that hardware and software platforms are affordable is crucial. This accessibility issue is particularly relevant in East Africa and some parts of Asia.

    Another key takeaway from the discussion is the importance of designing generative AI solutions with the end users in mind. The speakers provided an example of an agricultural chatbot that failed because it couldn't understand the language used by local farmers. This highlights the need to consider the local context and the preferences of the people who will ultimately use these AI solutions.

    Data sharing was also highlighted as a vital component of generative AI development. The speakers mentioned the work of the Digital Transformation Centre in creating data use cases for various sectors. Sharing data among stakeholders is seen as a way to build trust and promote the development of solutions that can effectively address development challenges. An example of this is the Agricultural Sector Data Gateway in Kenya, which allows private sector access to different datasets.

    Public-private partnerships were identified as a crucial element of generative AI development. Both the private and public sectors have their own data, and building trust between these sectors is essential for successful data sharing and AI development. The speakers pointed out that collaboration is essential, with public and private partners working together, as seen in the transport industry where the public sector handles infrastructure while the private sector focuses on product development.

    Localized research was also emphasized as necessary to understand regional-specific cultural nuances. It was noted that there is a lack of funding and a shortage of engineers and data scientists in certain regions. Localized research is vital for addressing the specific needs and challenges of those regions.

    Transparency in the use of generative AI was highlighted as essential. The speakers used the example of "Best Take Photography," where AI generated multiple pictures that could potentially misrepresent reality. To ensure ethical use and avoid misrepresentations, transparency is presented as crucial.

    The need for more engineers and data scientists, as well as funding, in Sub-Saharan Africa was stressed. Developing the capacity for these professionals is crucial for advancing generative AI in the region.

    Public awareness sessions were also deemed necessary to discuss the potential negative implications of generative AI. The example of "Best Take Photography" was used again to illustrate the risks of generative AI in creating false realities.

    Government-led initiatives and funding for AI innovation, particularly in the startup space, were presented as essential. The Startup Act in Tunisia was cited as an example of a government initiative that encourages innovation and supports young innovators in AI. It was argued that young people have the ideas, potential, and opportunity to solve societal challenges using AI, but they require resources and funding.

    Lastly, the discussion highlighted the potential risks of "black box" AI, where algorithms cannot adequately explain their decision-making processes. This lack of transparency can lead to the spread of misinformation or disinformation, underscoring the need for transparency in how AI models make decisions.
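
    To make the transparency concern concrete, the minimal sketch below (not drawn from the discussion) contrasts a "black box" with a model whose decision rules can be printed and audited. It assumes scikit-learn is available, and the toy features, labels and feature names are invented for illustration.

```python
# Illustrative only: the data, feature names, and labels below are invented.
# The point is that some models expose their decision rules for auditing,
# whereas a "black box" model offers no comparable account of its choices.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy signals about a piece of content: a (hypothetical) source-reputation
# score and how fast the item is being shared.
X = [[0.9, 0.1], [0.8, 0.3], [0.7, 0.2], [0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]
y = [0, 0, 0, 1, 1, 1]  # 1 = flagged for review (hypothetical label)

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules and the weight given to each signal can be printed,
# inspected, and challenged, which is a simple form of decision transparency.
print(export_text(model, feature_names=["source_reputation", "share_velocity"]))
print("feature importances:", model.feature_importances_)
```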

    In summary, the conversation about generative AI underscored the importance of addressing various challenges, including accessibility, affordability, human-centered design, data sharing, public-private partnerships, collaboration, localized research, transparency, capacity development, public awareness, government initiatives, and the risks associated with opaque AI models. These insights provide a roadmap for leveraging generative AI for positive impact while mitigating potential pitfalls.

    IGF 2023 WS #234 Overcoming the Global Digital Divide? The South-Based RIRs

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    The south-based Regional Internet Registries (RIRs) play an important role in the Internet ecosystem, not only by managing critical Internet resources, but also by promoting wider digital development with training, grants, measurement, community networks, and institution building. AFRINIC, APNIC, and LACNIC can thereby contribute significantly to narrowing the north-south digital divide.


    Among the strengths of a regional, bottom-up approach is that it helps promote participation from diverse south-based local communities and stakeholders, enabling them to engage in policymaking in their own context, language and time-zone. Multistakeholderism can be adjusted to meet the specific circumstances in the regions. Important discussions are taking place on ways and extents that the RIRs are multistakeholder (and whether they should be).

    Calls to Action

    Several participants argued that, to respond effectively to challenges facing the RIR system, it is necessary to have a) more (formalized) collaboration among the different stakeholder groups within an RIR; b) more mutual support among the RIRs, including through the NRO; and c) more extensive relations between the RIRs and the wider Internet ecosystem. Reforms to the RIRs’ policy procedures and governance structures might also be considered.


    The legitimacy and effectiveness of the RIR system is partly shaped by the extent to which it can attract participation from diverse stakeholder and social groups. Multistakeholderism is praised for being open; however, more awareness needs to be created about opportunities for participation in the RIRs, especially from non-technical constituencies.

    Session Report

    Introduction

    Bringing together a diverse group of experts from different stakeholder groups and geographical regions, this workshop session focused on the role of South-based regional multistakeholder Internet governance bodies in narrowing the global digital divide. Most of the discussion addressed the three Regional Internet Registries (RIRs) located in the South: AFRINIC (for the Africa region), APNIC (for the Asia-Pacific region), and LACNIC (for the Latin America-Caribbean region). The session opened with a presentation of a recent research project on legitimacy in the South-based RIRs and subsequently explored the following questions:

    1. What role do South-based regional multistakeholder Internet governance bodies play in narrowing the global digital divide?
    2. In what ways can South-based regional multistakeholder bodies empower low-income countries and other marginalized groups in global Internet governance?
    3. How can South-based regional multistakeholder Internet governance bodies promote inclusive participation from a variety of stakeholders?
    4. What lessons from the South-based RIRs’ approach to Internet governance can be used in other development areas?
    5. How can potential challenges to the RIR system be overcome?

    Summary of the Session

    Many speakers and participants at this workshop session emphasized the importance of the Regional Internet Registries (RIRs) in the Internet ecosystem. Not only do the RIRs manage several critical Internet resources (IP addresses and Autonomous System Numbers); they also promote wider digital development with training, grants, measurement, community networks, and institution building. Through these efforts, the South-based RIRs (AFRINIC, APNIC, and LACNIC) contribute to narrowing the North-South digital divide and can serve their communities beyond a narrow technical mission. Contributions additionally elaborated how the South-based RIRs shape wider Internet governance through their representation at, and collaboration with, other Internet governance actors, strengthening southern voices in these spaces.

    Speakers and commentators at the workshop emphasized the importance of taking a regional, bottom-up approach to policymaking. This approach promotes participation from diverse South-based local communities and enables them to engage in policymaking around the Internet in their own context, language, and time-zone. As one of the speakers mentioned: regional multistakeholderism in the RIRs ‘brings closer to home the idea of a global Internet.’

    Some diversity of perspectives emerged regarding the role of multistakeholderism in the RIRs. Several speakers discussed how the multistakeholder approach to Internet governance can be adjusted to meet the specific circumstances of the region. Other speakers and participants from the audience raised questions about the ways in which, and the extent to which, the RIRs can be considered multistakeholder, and whether they should be. These exchanges touched upon larger debates as to what the multistakeholder approach entails, how structured it should be, and what values these governing initiatives should embody. Some contributions mentioned that taking an inclusive and bottom-up approach might be more important than taking a multistakeholder approach.

    There appears to be consensus that diverse participation (in terms of stakeholder-, social- and regional groups) is important for the RIRs and partly shapes the legitimacy and effectiveness of these bodies. While multistakeholderism is praised for being open, more awareness needs to be created about opportunities for participation in the RIRs, especially from non-technical constituencies. The door to participation may be open, but different stakeholders need to be aware that the door is open.

    Finally, many contributions touched upon the challenges that the RIR system is currently facing and how to address these. Suggestions include:

    1. More (formalized) collaboration among the different stakeholder groups within the RIRs;
    2. More mutual support among the RIRs, including through the Number Resource Organization (NRO);
    3. More extensive relations between the RIRs and the wider Internet ecosystem;
    4. Reforms to the RIRs’ policy procedures (e.g. the bylaws) and governance structures, depending on community support for such measures.
    IGF 2023 WS #21 Internet's Environmental Footprint: towards sustainability

    Updated:
    Key Takeaways:
    Collaboration and Innovation: Addressing the internet's environmental impact requires cooperation among governments, industry, and civil society. Innovation, such as energy-efficient algorithms and eco-friendly hardware, is vital. Collaboration and innovation must work in concert, guided by comprehensive regulations.

    Regulations and Responsibility: Stringent regulations, including extended producer responsibility laws and global deployment guidelines, are crucial. These rules foster accountability and sustainable practices, essential for mitigating the internet's environmental footprint. Collaboration, innovation, and regulations form the foundation for a greener digital future.

    Calls to Action

    Enforce Stricter Regulations: Governments and international bodies must collaborate to mandate responsible production, usage, and disposal of electronic devices. Penalties for non-compliance and incentives for eco-friendly practices are crucial for accountability and driving sustainability.


    Invest in Sustainable Research: Governments and private sectors should fund research in renewable energy, eco-friendly hardware, and efficient cable-laying and satellite deployment. Financial support and incentives can fuel the development of impactful, environmentally conscious solutions, paving the way for a greener digital future.

    Session Report

    Sustainable Internet

    Background and Context

    Since digitization is here to stay, energy usage must be managed effectively so that two goals are achieved at once: a sustainable internet, and no further widening of the digital divide.

    Many vulnerable people are already paying the price of the energy they use to stay online, facing higher bills, debt, and mental health issues alongside the environmental impact, and there is as yet no policy mandating a sustainable internet. At the same time, developing and emerging nations that require massive amounts of energy to sustain daily life are living through an energy crisis, which makes incentives for newer and cleaner energy sources all the more important. The internet itself is a double-edged sword: it can cut emissions and shape new, modern, sustainable industries.

    Answering the question of how to break the internet's carbon curse is a journey through the people-centered findings presented in this report, drawn from conversations and sessions during the IGF, which show how users, networks, and data centers interact and contribute to the digital carbon footprint.

    Introduction:

    This report is based on the session's interactive audience participation, which framed the discussion and used an open-ended question methodology to encourage direct expression in tackling the question of how to achieve a sustainable internet by breaking the carbon emission chain, taking a full view of the life cycle of ICT and internet technology from the sourcing of materials to end use and seeking common solutions. According to the Ericsson report, the ICT sector over its entire life cycle contributed 730 million tonnes of CO2 emissions, a figure made up of raw material sourcing, electricity usage in the supply chain, and the physical buildings and hardware behind ICT innovations. This accounts for 1.4% of total global emissions. Energy is unequally distributed and a major barrier to a community's digitization, and hence a contributing factor to the digital divide. With around 70% of the population using ICT-related services compared to 10% who use aviation, there is room for ICT-based innovations to accelerate decarbonization at every layer of usage and responsibility in the multistakeholder model. In contrast to other fields, the internet, with its sharing and cohabitation of digital environments, has already created substantial offsets to the carbon footprints caused by traditional industries.
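
    As a rough consistency check on the figures above, the short Python sketch below relates the 730 million tonne estimate to the quoted 1.4% share. The assumed global total of roughly 52,000 Mt CO2e is an external, widely cited approximation and not a figure from the session.

        # Rough consistency check of the ICT emissions share quoted above.
        # Only the 730 Mt figure comes from the Ericsson report; the assumed global
        # total of ~52,000 Mt CO2e (about 52 Gt) is an approximate external estimate.
        ict_emissions_mt = 730
        assumed_global_emissions_mt = 52_000
        share = ict_emissions_mt / assumed_global_emissions_mt
        print(f"ICT share of global emissions: {share:.1%}")  # prints ~1.4%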

     

    Key insights and Discussion:

    ● The internet and its technologies, such as 5G, IoT, and cloud computing, backed by open, high-quality data sets for AI-driven solutions, have the potential to reduce global emissions by 15%.

    ● Emissions have remained relatively stable: even with more devices and traffic, the internet and ICT as a sector still accounts for 1.4% of emissions. This shows the industry's potential to add service-oriented layers across the entire life cycle, from design and manufacturing to end use, to further reduce the digital carbon footprint.

    ● Big tech, within the sharing economy, should innovate on better climate offsets through collaborative spaces for green tech, open grid, and climate data insight solutions, accelerating incentives for the renewable energy sector as the power source for a sustainable internet.

    ● Promoting user-focused circular economy solutions, from accountable e-waste handling mechanisms and greener charging to mindful device usage and a deeper understanding of technology and digital footprints, is key to achieving a sustainable internet.

    Recommendations:

    ● The media has a critical role to play in reporting on digital sustainability with evidence-backed sources and an objective climate agenda. Misinformation and disinformation spread by digital platforms add further to those platforms' contribution to carbon emissions; internet sustainability reporting programs and climate-focused media literacy programs should target a critical mass, and youth in particular. An emerging youth initiative is committed to sensitization through content creation on environmentalism, youth, and climate technology.

    ● The Fourth Industrial Revolution is reshaping the industrial base and how the communication, transportation, and energy fields intersect. The internet, with its real-time data facilitation, should focus on solution-oriented innovation in green policy and on innovation that broadens consumer choice.

    ● Youth need to actively shape circular economy innovation that rewards data-driven energy transition programs in tune with local contexts of climate, culture, and sustainability.

    ● Promote dialogue on meeting SDG 13, which focuses on combating climate change and its impacts, by adding a layer of technological optimism and climate consciousness to design.

    ● User devices and usage patterns still account for the largest share of emissions from cyberspace. User-focused literacy programs on consumption, usage, and innovation should be key priorities in creating sustainable, green-powered Fourth Industrial Revolution economies.

    ● Introduce 'green internet traffic': data transfers between networks that follow sustainable internet practices could be marked green on screen, with the percentage of carbon reduction displayed, giving users a sense of pride in protecting the environment.

     

    In addition to the above, the workshop saw strong participation from the audience and, after the session, from participants who had been attending the many parallel events, who expressed their support for the topic and explored possible synergies for collaboration at different levels. The report can therefore conclude that the session's impact was significant among youth and that it also won support from experienced professionals, since sustainability is a globally accepted challenge, unlike other internet governance issues that remain stalled by debates between stakeholders, especially between governments and civil society, over any uniform resolution.

    IGF 2023 IS3C How IS3C is going to make the Internet more secure and safer

    Updated:
    Data Governance & Trust
    Key Takeaways:
    The IS3C dynamic coalition's research projects on IoT security policy and procurement practice confirm that governments and regulators generally do not i) proactively support the deployment of security-related Internet standards or ii) require secure by design ICT devices and services in their public procurement contracts.

    Public sector and corporate decisions on whether to procure secure by design Internet devices and network applications are often based on economic and financial considerations rather than technical or security requirements. A new approach to procurement practice should be adopted that prioritises the importance of security and safety requirements. This would drive more effective and wider deployment of security standards.

    Calls to Action
    The IS3C dynamic coalition urges governments and regulators to take action to drive the deployment of key security standards, and urges stakeholders in the cybersecurity sector to increase their efforts to break down the institutional silos that prevent effective cooperation in the deployment of cybersecurity standards.

    IS3C also calls for stakeholder support from the educational and industry sectors to fund the establishment of a cybersecurity resource hub that will close the gap between supply and demand for secure by design skills in tertiary cybersecurity education.
    Session Report

     

    1. IS3C’s annual reporting of its recent activities

    In the first part of its third annual IGF session since its launch at the IGF in 2020, the Dynamic Coalition on Internet Standards, Security and Safety (IS3C) reported in Kyoto on its activities since the previous IGF in Addis Ababa in 2022, and announced the publication of the following key reports:

    • The coalition’s working groups on Security by Design for the Internet of Things (WG1), and on Procurement and Supply Chain Management (WG3) presented their recently completed research projects which conducted global reviews of publicly available policy documents from national governments, regulators and public administrations in all regions.
    • IS3C's working group on education and skills (WG2) published its report on tertiary cybersecurity education at the IGF in Kyoto.
    • IS3C's comprehensive review of the relevance of its work on cybersecurity to the UN’s Sustainable Development Goals.  

    Drawing on their research findings and analysis, the speakers in the IS3C session described the following key issues and challenges for governments and industry.

    1. Government administrations and private sector organisations generally do not use their purchasing power to maximise the security of the ICT devices and network services through their procurement contracts;
    2. There is limited active cooperation between governments and industry in the development and promotion of secure ICT products and services, which also makes it difficult for industry to comply with commonly agreed standards;
    3. Open standards created by the technical community are not recognised or endorsed by most governments, and as a result the public core of the Internet's infrastructure remains unprotected and vulnerable to attack;
    4. Increased cooperation between governments and industry will lead to a better protected, more secure and safer Internet for all based on harmonised approaches to security by design of ICT devices and applications;
    5. The lack of a level playing field in the global ICT markets results in a less secure ICT environment due to products being released onto the market that are insecure by design;
    6. The low level of demand by users for ICT devices and services that are secure by design due primarily to lack of awareness, creates disincentives for industry to manufacture and develop ICT products and services that are secure by design;
    7. Governments and private sector organisations can act as major drivers for adopting security by design through ensuring their ICT procurement practices specify this principle as a fundamental requirement.

    Following their presentation in Kyoto by members of the coalition's leadership team, the working group reports are now available on the IS3C website at https://is3coalition.org/docs-category/research-reports/. The IS3C coalition members look forward to working with the IGF Secretariat and the Leadership Panel in promoting the adoption of the specific recommendations contained in these action-orientated reports.

    2. IS3C’s Proposal to establish a hub for cybersecurity education and skills

    The report of WG2's research project identified major gaps in the supply and demand of cybersecurity skills, including knowledge of relevant existing standards and successful best practices. In order to address these gaps, the IS3C announced in Kyoto its proposal to establish a Cybersecurity Hub with the support of the IGF. The primary aim of the hub is to bring together stakeholders in industry, the technical community and the tertiary cybersecurity education sector, in order to provide guidance and recommendations on how to close the gap between the supply side of educational and training curricula on cybersecurity, and the demand side of specific skills and expertise required by the industry. It is also envisaged that at a later stage the remit of the new hub could be extended to include cybersecurity skills at all educational levels, and advising how to close the current gaps in cybersecurity employment supply and demand. 

    3. IS3C’s ongoing work and projected outputs in 2024/25

    In the second part of the session in Kyoto, the coalition’s coordinator Wout de Natris invited questions and comments on IS3C’s next phase of work and planned outputs in 2023-24.

    Following up on the results of WG3's research on procurement and supply chain management, IS3C coalition members will develop a narrative for ICT cybersecurity staff to draw on as a guide for influencing ICT procurement and supply chain management decisions towards devices and applications that are secure by design. This is expected to have a substantive impact on enhancing security, safety and sustainability.

    Wout de Natris also explained that three new IS3C working groups had been created in 2023 to take forward specific new project work in 2024. 

    WG5 is developing a tool for governments and other large organisations to use when procuring ICTs. It will set out for policymakers a prioritisation of 40 of the most important security-related Internet standards. The list is planned for publication by the end of 2023.

    WG8 will conduct work relating to two fundamental building blocks of the Internet: the domain name system (DNS) and the routing system that allows Internet traffic to flow between users' devices and websites. To achieve greater security in the DNS and the routing system, and to increase resilience against malicious attacks and the risk of large-scale data theft, the engineering community developed two protocols: the DNS Security Extensions (DNSSEC) and the Resource Public Key Infrastructure (RPKI). WG8 members will conduct outreach and engagement efforts to increase trust in, and contribute to the wider deployment of, both of these critical protocols, with the aim of enabling public and private sector decision takers to deploy them effectively in their respective organisations.
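
    As a purely illustrative aside (not drawn from the session), the minimal Python sketch below shows one way to check whether a resolver is returning DNSSEC-validated answers, using the dnspython library; the resolver address and domain name are arbitrary assumptions.

        # Minimal DNSSEC check: query a validating resolver and inspect the AD
        # (Authenticated Data) flag, which the resolver sets only on answers it
        # has validated with DNSSEC. Requires the dnspython package.
        import dns.flags
        import dns.resolver

        resolver = dns.resolver.Resolver()
        resolver.nameservers = ["8.8.8.8"]        # assumed validating resolver
        resolver.use_edns(0, dns.flags.DO, 1232)  # set the DO bit to request DNSSEC data

        answer = resolver.resolve("example.com", "A")   # hypothetical domain
        validated = bool(answer.response.flags & dns.flags.AD)
        print("DNSSEC-validated answer:", validated)

    RPKI deployment is typically assessed differently, through route origin validation in the routing system rather than a DNS flag.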

    WG9 will start work in early 2024 on standards for deployment in emerging technologies, relating in particular to quantum technologies and artificial intelligence. It will conduct a comparative analysis of current policy initiatives worldwide with the aim of developing policy guidance and recommendations in early 2024.

    A key overarching challenge that IS3C will continue to address in 2023/24 is turning the theory of cybersecurity into widespread, impactful practice in support of the UN's Sustainable Development Goals. IS3C will a) create capacity-building programmes to bring the theoretical guidance and recommendations to the people who need to learn how to put them into practice, and b) find funding for the next phases of work leading up to the IGF in Riyadh in 2024.

    There is a world to win where cybersecurity is concerned that is currently under-realised in many key areas of policy development and best practice. The IS3C stakeholder coalition will continue through its expanding work programme to address the key gaps identified in its various research activities and to make recommendations and provide toolkits to guide policymakers and decision-takers on how to resolve them. 

    Further information about the IS3C (Internet Standards, Security and Safety) IGF coalition is on the IGF website at https://www.intgovforum.org/en/content/internet-standards-security-and-safety-coalition-is3c and on the coalition’s dedicated website  https://is3coalition.org

    Joining the coalition and contributing to its working groups is free: simply subscribe to the members' mailing list at https://mail.intgovforum.org/mailman/listinfo/dc-isss_intgovforum.org

    The members of the IS3C dynamic coalition express their deep appreciation to the Japanese Government for hosting the IGF in Kyoto. The coalition benefitted substantially from the opportunities provided in Kyoto to report on its activities and to engage such a wide diversity of stakeholders worldwide.

    The IS3C leadership team also wish to thank the IGF’s Secretariat, the Dynamic Coalition Coordination Group (DCCG), the Multistakeholder Advisory Group (MAG) and the Leadership Panel for their ongoing support for the year-round activities of IS3C and the other dynamic coalitions.  

     

     

    IGF 2023 Lightning Talk #96 Emerging Tech and Solutions for Digital Inclusion

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    New and emerging technologies should be tools of empowerment rather than deepen the digital divide. It is very important to hold to a people-oriented approach from the very beginning of development work, especially for Internet developers and service providers.

    Calls to Action

    The private sector is an important driving force of technological innovation and is called upon to fulfil its social responsibility and jointly facilitate an inclusive digital future.

    Session Report

    Co-hosted by the Internet Society of China (ISC) and China Internet Network Information Center (CNNIC), and organized by China IGF, the session “Emerging Tech and Solutions for Digital Inclusion” mainly discussed how to promote the comprehensive development of inclusive digital environments by taking advantage of emerging technologies and solutions while avoiding the new gaps caused by new technologies.

    Mr. Huang Chengqing, Director of China IGF and Vice President of ISC, Ms. Zhang Xiao, Executive Deputy Director of China IGF and Deputy Director of CNNIC, and Ms. Gu Haiyan, General Manager of Law Affairs of Sina Group, were invited to share their insights and observations on how to narrow the digital divide affecting older people and tackle the challenges involved.

    Mr. Huang Chengqing shared China's efforts to make Internet applications age-friendly and information-accessible, including measures such as top-level system design, standard formulation, technological innovation, and public welfare actions to actively promote the construction of an inclusive digital society benefiting everyone.

    Ms. Zhang Xiao discussed how to narrow the digital divide and improve digital inclusion by strengthening network connectivity, deepening digital infrastructure construction in rural areas, fostering digital literacy, and improving network service innovation and quality.

    Ms. Gu Haiyan introduced the work and practice of Sina Weibo in developing age-friendly versions of its applications, providing voice assistance systems, optimizing visual design, and strengthening privacy protection. She also proposed that digital empowerment is an important foundation for helping vulnerable groups enjoy the inclusive and sustainable development of the digital economy.

    The three speakers agreed that new technology should be a lever for a more inclusive social environment rather than a source of new digital divides. Creating a barrier-free, accessible information environment is of great significance and requires joint efforts from all stakeholders, such as policy support from government and technical innovation from the private sector. The key challenge is finding the balance between business interests and social responsibility, so that the private sector, as the leading force in technological innovation, is motivated to innovate and update its services to meet the need for information accessibility.

    The moderator also introduced some best practices carried out in China and called on all stakeholders to strengthen digital cooperation, narrow the digital divide, and promote the inclusive and sustainable development of the Internet.
     

    IGF 2023 Town Hall #32 Internet Engineering Task Force Open Forum

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    1. The IETF relies on open, bottom-up participation in which individuals from different parts of the Internet ecosystem work together to evolve and enhance Internet technologies.


    2. Diversity across multiple axes is key to building the best Internet. Diversity in the IETF has steadily increased, and further proactive steps (e.g. policymaker programs, operator outreach, academic outreach, support for newcomers and minority attendees) are being taken to encourage participation from those who may encounter barriers to engaging in the standards process.

    Calls to Action

    1. The technical community should continue to encourage different stakeholders to participate in the IETF's open standards process by outreach, knowledge-exchange and targeted support.


    2. The technical community should continue to identify opportunities for engagement in multistakeholder forums like the IGF.

    Session Report

    The IETF held a Town Hall meeting at IGF 2023 to discuss and highlight the importance of interoperability based on common, interoperable, and continually evolving infrastructure standards in maintaining a global and healthy Internet, and avoiding fragmentation. Several members of the IETF leadership participated in the panel discussion and in the broader IGF events, both in person as well as remotely. The Town Hall was attended by 40-50 persons in the meeting room and about 30 people who participated remotely. 

    At the start of the session, Mirja Kühlewind (Internet Architecture Board (IAB) Chair) provided a brief overview of the IETF, how it is organized and how openness (of both participation and standards) and transparency are key values that are pervasive throughout the organization and the ways of working. She also emphasized the technical focus of the IETF.

    Jane Coffin (ISOC) moderated a panel consisting of the following members:

    In Room:

    Mirja Kühlewind (IAB Chair)

    Colin Perkins (IRTF Chair)

    Mallory Knodel (IAB Member)

    Remote:

    Lars Eggert (IETF chair)

    Andrew Alston (Routing Area Director)

    Dhruv Dhody (IAB Member)

    Suresh Krishnan (IAB member)

    The panel members briefly described how they got involved in the IETF and how they progressed from being new participants into leadership positions. They also discussed some of the steps the IETF has been taking to encourage participation from diverse stakeholders and to minimize barriers to participation.

    There was a lively discussion after the panel in which audience members brought up a wide variety of topics, including diversity of participants, coordination with web standards, the strength and success of voluntary technical standards, the need for more interaction between the Internet governance and technical communities, and dealing with legacy technologies. The follow-ups from these discussions are summarized in the Key Takeaways and Calls to Action above.

    IGF 2023 Open Forum #45 The Virtual Worlds we want: Governance of the future web

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    There is a need to set guardrails and frameworks for Virtual Worlds while they are in the early stages of development.


    International collaboration is crucial for effective global governance of borderless Virtual Worlds. A multi-stakeholder model for governing Virtual Worlds is needed.

    Calls to Action

    The multistakeholder community should work on ethical principles for the engineering and commercialization of virtual world innovations.


    There is a need for a holistic approach that embeds user data control, interoperability, and economic value participation to foster responsible innovation in virtual worlds.

    Session Report

    IGF 2023 - The Virtual Worlds We Want: Summary Report

    The panel at IGF 2023, "The Virtual Worlds We Want," delved into the evolving realm of virtual worlds and the metaverse, addressing the challenges and opportunities that lie ahead. Led by Miapetra Kumpula-Natri, the session brought together a diverse group of experts, each contributing unique perspectives and insights. The discussions focused on three pivotal themes: interoperability; standardization, regulation and governance; and accessibility and inclusivity, laying down a comprehensive roadmap for the future.

    Interoperability emerged as a crucial theme, highlighting the need for open and accessible virtual worlds. The panel emphasized preventing market dominance by a few big players and fostering an environment where users have control over their data and identities. The discussion underscored the importance of developing common technical standards for the metaverse, addressing the potential issues of marginalization and fragmentation. The panel also discussed the need for a clear data policy and AI governance frameworks to navigate the complexities of a decentralized identity framework.

    Openness and User Empowerment: Panellists expressed a unanimous vision for virtual worlds that are not only open and interoperable but also empower users. They stressed the importance of future iterations of the web, such as Web 4.0, being accessible to all, ensuring a democratic and user-centric virtual space.

    Technical Standards for the Metaverse: The development of common technical standards was identified as a critical step towards achieving interoperability. This would create a standardized baseline, ensuring that the virtual world is not fragmented and is inclusive of all demographics.

    Decentralized Identity and Governance: The conversation delved into the need for a decentralized identity framework, highlighting the challenges posed by technical and jurisdictional complexities. This called for robust data policy and AI governance frameworks to ensure clarity and security.

    Holistic Approach for Responsible Innovation: A holistic approach was deemed necessary to embed user data control, interoperability, and economic value participation within the virtual worlds. Special attention was urged towards considering new types of data and their implications on innovation and user rights.

    The panellists agreed on the imperative need for standardization within the metaverse to drive innovation and interoperability. They stressed setting clear guardrails and frameworks to guide the early stages of development in this space. The significance of international collaboration for effective global governance was underscored, with a call for governments to actively participate and support discussions on virtual world governance.

    Driving Innovation through Standardization: A strong case was made for the establishment of a single standard organization, dedicated to fostering innovation and ensuring interoperability across the metaverse.

     

    Setting Development Guardrails: Given the nascent nature of virtual worlds, there was a consensus on the need to set clear guardrails and frameworks to guide their development, ensuring a balanced and sustainable growth.

    Global Governance through Collaboration: The borderless nature of the metaverse calls for international collaboration. Drawing parallels from the governance of the internet by ICANN, a multi-stakeholder model was recommended.

    Government Support and Engagement: The crucial role of government support in discussions around virtual world governance was highlighted. There was a call for governments to actively engage with and support the IGF as a central platform for these critical discussions.

    Accessibility and inclusivity were identified as key pillars for the development of virtual worlds. The panel stressed the importance of ensuring these principles right from the infrastructure level, highlighting the need for high-capacity, low-latency networks, especially for AR and VR applications. A human-centric approach, ethical principles in innovation, and significant investments in virtual public services were emphasized as vital components of an inclusive virtual world.

    Infrastructure Accessibility: Ensuring accessibility at the very foundation of the metaverse’s infrastructure is vital. The panel highlighted technical challenges, such as the need for stable radio links, and emphasized the importance of overcoming these to ensure a seamless experience for all.

    Human-Centric Virtual Worlds: The panel advocated for a human-centric approach in developing virtual worlds, emphasizing the need to uphold individual rights and safety, while also fostering innovation and growth.

    Ethical Innovation and User Empowerment: Ethical principles should guide the engineering and commercialization of innovations in virtual worlds. The empowerment of users, treating them as more than just consumers, was highlighted as a critical aspect of responsible development.

    Investing in Virtual Public Services: The need for substantial investments in virtual public services, such as smart cities, healthcare, and other areas impacting quality of life, was underscored. The panel emphasized the importance of establishing strong connections between academia, augmented reality, and the metaverse, ensuring inclusivity in development.

    The panel called for a broadened understanding of virtual worlds and the metaverse, urging stakeholders to view it as the next iteration of the internet, going beyond just AR, VR, and XR environments.

    The importance of preserving the technical underpinnings of the internet for future integration with virtual worlds was highlighted, ensuring a seamless transition and integration between the two realms.

    "The Virtual Worlds We Want" session at IGF 2023 fostered an enriching dialogue, shaping a comprehensive and visionary discourse on the future of virtual worlds and the metaverse. The panellists, through their diverse expertise, provided valuable insights and recommendations, emphasizing interoperability, governance, and inclusivity as fundamental pillars for responsible and innovative development.

    IGF 2023 Networking Session #109 International Cooperation for AI & Digital Governance

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    The advent of AI requires a holistic approach, not only from the perspective of technology, but also from social, economic, and ethical points of view.


    The diffusion and growing sophistication of AI technologies accentuate existing social, economic, and political issues and create unexpected, unprecedented problems.

    Calls to Action

    Closer international cooperation on regulatory frameworks for AI


    Extended participation and inclusion for protection of human rights

    Session Report

    SESSION SUMMARY

    • A Human Rights Approach to Digital Governance (Matthew Liao, NYU)
      • A "5W1H" framework for AI regulation focuses on several key questions: What should be regulated? Why should AI be regulated? Who should regulate AI? When should regulation begin in the technology life cycle? Where should regulation occur? How should regulation be enacted?
      • AI regulation should prioritize the protection and promotion of human rights, encompassing everything that could impact these rights; everyone, including companies, researchers, governments, universities, and the public, has a responsibility to engage proactively in the regulation process to overcome challenges in enforceability.

     

    • AI and Cyber Physical Systems Policy Lab (Dasom Lee, KAIST)
      • The AI&CPS Lab focuses on the integration of AI into infrastructure to promote environmental sustainability. The lab investigates areas like energy transition, smart grids, renewable energy technologies, transportation (including automated vehicles and drones), and data centers. She emphasized the interconnectedness of these fields, highlighting the importance of a harmonious approach to infrastructure development.
      • One of the ongoing projects, the ‘Privacy and Culture Project’, shows that privacy is contextualized differently across geographical regions based on culture and history, which calls for international cooperation among experts and academics to tackle future challenges.

     

    • Challenges and Opportunities of Digital Governance for Development in the Developing Countries (Atsushi Yamanaka, JICA)
      • Digital governance and technologies offer opportunities for innovation, and AI can create new products and services that contribute to socio-economic development. “Reverse innovations” that emerge from developing countries can impact the global landscape; they can be fostered by enhancing digital public goods and infrastructure to ameliorate the digital divide.
      • Digital governance requires a multi-stakeholder approach that invites stakeholders into policy-making processes, and more platforms like the IGF may be needed in the future, especially to overcome new challenges; developing countries need to take a more significant role in rule-making and framework development.

     

    • Conversational Agents for Digital Inclusion (Rafik Hadfi, Kyoto University)
      • Digital inclusion goes beyond access to technology; it also encompasses concepts like equity, self-realization, and autonomy, and ultimately aims to provide disadvantaged individuals with access to ICT and, in doing so, promote equity.
      • A case study in Afghanistan showed how an AI system can enhance several aspects of online debates, including increasing the diversity of contributions from women, reducing inhibitions, and encouraging more ideation on local problems.

     

    • Democratising AI for a thriving region (Liming Zhu, University of New South Wales)
      • In 2019, Australia introduced AI ethics principles that prioritize human-centered values such as pluralism, fairness, and inclusiveness, as well as quality attributes specific to AI, including privacy, security, transparency, and accountability.
      • As part of this comprehensive approach to AI governance, Australia has introduced measures such as responsible AI risk assessments, question banks, and AI risk registries to guide organizations in identifying and mitigating potential risks associated with their AI applications. This approach aims to ensure that AI benefits are leveraged responsibly and ethically, with consideration for social and environmental well-being.

     

    • Towards Hyperdemocracy: Discussion and Consensus among People and Machines (Takayuki Ito, Kyoto University)
      • The Hyperdemocracy project aims to utilize artificial intelligence (AI) to address contemporary social challenges such as fake news, digital manipulation, and echo chambers, having shifted in 2015 to developing a system that leverages AI agents to support group interactions and crowd-scale discussions online.
      • A case study was conducted in Afghanistan to collect public opinions during the withdrawal of American troops in August 2021, with the AI analyzing the types and characteristics of these opinions. The project is currently developing a next-generation AI-based competition support system to address what they refer to as the "hypodemocracy problem."

     

    • International Development and AI: Three Snapshots (Seung Hyun Kim, KAIST)
      • Unequal opportunities can arise from technological advancement, and the introduction of new technologies can create unexpected problems rooted in existing social and economic disparities.
      • Fragmentation in governmental ICT systems can occur due to the involvement of different financing institutions and service providers.
      • Technology sovereignty is a critical concern, and balancing efforts to maintain national security with technological innovation will become increasingly difficult.
    IGF 2023 WS #516 Beyond North: Effects of weakening encryption policies

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The interconnected nature of the Internet means that weakening a service in one region implies a weakening effect for all users, as the implications are not constrained by borders.


    The Global South tends to follow legislative trends set by the Global North, including those that weaken encryption.

    Calls to Action

    Encryption should be seen as more than just protecting privacy but, in a broader sense, as a human rights matter that guarantees freedom of opinion, freedom of expression, and other human rights.


    Working from a multi-stakeholder position, we need to address the topic and ensure that actors understand that their policy choices have effects that extend well beyond the originally intended region.

    Session Report

     

    The workshop Beyond North: Effects of weakening encryption policies began with an introductory speech by the in-person moderator Olaf Kolkman (Internet Society) regarding legislative proposals that could impact the use of strong encryption. Among them, proposals from the Global North, such as the United States, European Union, and United Kingdom, were mentioned. However, as emphasized, the impacts extend beyond the regions from which they originate, considering the global nature of the Internet.

    At the initial panel stage, an activity was conducted through the Mentimeter platform to understand the audience's view of the risk of fragmentation in services offering encryption, should such laws be passed in their respective countries. On a scale of 0 to 10, the average result was 6.4. Another question posed to the audience aimed to understand how the Internet ecosystem and human rights can be affected by the extraterritorial reach of such policies. The responses formed a word cloud, with phrases such as 'fragmentation,' 'human rights threatened,' 'security risk,' and 'confidentiality issues.'

    After the interactive moments, the panelists had the opportunity to make their contributions. The first to speak was Professor Masayuki Hatta, an economics professor at Surugadai University, who was asked how the effects of Global North encryption policies can impact the economies of the Global South. Professor Hatta reflected on the topic from his perspective in Japan and from the viewpoint of the people who use the services. In his opinion, few people are aware of encryption or know that they are using encrypted services. According to him, this compounds the effects of anti-encryption laws, as people may not even be aware that encryption is being prohibited where they live.

    The next guest speaker to address the audience was Mariana Canto, a visiting researcher at the Wissenschaftszentrum Berlin für Sozialforschung. When asked by the in-person moderator how the power dynamics of the Global North could impact the development of cybersecurity policies in the Global South, Mariana Canto began her argument by highlighting the practice of the Global South following trends of the Global North and the importation of narratives. As an example, she pointed out the General Data Protection Regulation (GDPR) and the Lei Geral de Proteção de Dados (LGPD) from Brazil. 

    The speaker also emphasized the impossibility of discussing regulation without connecting it to the real world. In her analysis, the current concept of privacy is of white and middle-class origin, a privilege for some, while people of color are systematically surveilled.

    Regarding the encryption agenda, Mariana Canto emphasized the fight against the dissemination of child sexual abuse material. Legislative proposals addressing this issue, originating in the Global North, impact the Global South: the narrative of law enforcement's inability to act facilitates the insertion of surveillance tools.

    The third speaker was Prateek Waghre, Policy Director of the Internet Freedom Foundation (IFF), who was asked about India's national sovereignty policies relevant to disputes over encryption usage. His speech began by pointing out the importation of legal instruments from one part of the world to another, even if they have different underlying objectives.

    Drawing on the German case of the Network Enforcement Act (NetzDG), the IFF director highlighted how, as researchers have noted, other countries, especially more authoritarian governments, import elements of the law and make direct reference to NetzDG as inspiration for their own legislative projects. Some of these inspired bills require foreign companies to have a local presence in order to operate in the country, a requirement sometimes used to threaten those companies and their employees.

    Discussing the case of India, the speaker brought up various digital laws which, in his view, have negative aspects. He highlighted the draft Indian Telecommunication Bill, the Digital Personal Data Protection Act of 2023, and the effort to rewrite intermediary liability rules through updates. In his assessment, a common thread among them is a significant level of government (union executive) control with little prospect of oversight.

    The next to speak was Pablo Bello, Director of Public Policy for WhatsApp in Latin America, who was asked by the moderator about the company's assessment of the potential risk of Internet fragmentation of encrypted services due to Global North policies. In his assessment, there is indeed a risk of fragmentation from the imposition of such measures: if one part adopts lower security standards, the implications affect everyone.

    According to Pablo Bello, the perspective of the Global South should be heard, as decisions at this level have implications for all countries. Citing data from The Economist on democracy, he noted that only 8% of people live in full democracies, mostly in the Global North, while 55% live under authoritarian regimes, populations that could be put at risk by cryptographic weakening. The speaker advocated a multisectoral approach to the problem.

    The final speaker was Juliana Fonteles, a consultant at the Inter-American Commission on Human Rights, who was asked how the extraterritorial effects of anti-encryption policies could influence rights protected by the American Convention. Her intervention began by pointing out that in the region the notion of the right to privacy differs from that in other countries and is not normally seen as the most important right. Many countries lack laws regulating the protection of personal data that could ensure safeguards on its treatment by state and non-state actors, which is central to anti-encryption policies.

    She also highlighted the history of Latin American and Caribbean countries in violent repression of political demonstrations, persecutions, journalist assassinations, persecution of human rights defenders, and the criminalization of LGBTQIA+ individuals. Information about the behavior of these people is all recorded in private communications protected by encryption. In her role, she has received various reports from journalists and human rights defenders of state surveillance practices through spyware to persecute them.

    In this case, Juliana Fonteles argues that cryptographic weakening affects not only the right to privacy but should be considered in a broader spectrum as impacting freedom of expression, access to critical information, the right to opinion, and other rights.

    IGF 2023 Networking Session #170 Network Session: Digital Sovereignty and Global Cooperation

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Digital sovereignty is a broad term to use when thinking about how to understand cooperation or tensions across states


    A functional approach to understanding different parts of the digital sovereignty debate can be useful.

    Calls to Action

    Bring together different understandings of digital sovereignty from different parts of the world: collaborative research is necessary!


    Break down discussions using the concept of digital sovereignty into different fields; only policy specific discussions will allow us to understand how sovereignty is exercised in a digital space.

    Session Report

    The network session ‘Digital Sovereignty and Global Cooperation’ brought together a diverse group of people to discuss the concept of digital sovereignty and how it affects global cooperation. Professor Jamal Shahin kicked off the meeting by giving a short introduction to the concept in which the variety of definitions used by actors was emphasised. Afterwards, questions were asked to both the online and on-site participants, with the help of a Mentimeter, to get insights into the group characteristics. The answers to the questions showed that the participants came from about 16 countries and all had different work backgrounds (private, public, academia, civil society), highlighting the importance and interest in the concept across regions and sectors.

    After this introduction, the group was split into three break-out rooms (one online) where discussions were held on the basis of the following question: Do you see a tension between digital sovereignty and digital cooperation, and why? The responses showed varying opinions. On the one hand, it was said that digital sovereignty hinders digital cooperation because it leads states worldwide to focus on becoming sovereign over ‘their’ territory and citizens, which makes states look ‘inwards’, preventing or hindering global digital cooperation. On the other hand, it was emphasised that pursuing digital sovereignty can also lead to international cooperation: one can only become genuinely sovereign if that sovereignty is recognised by ‘outsiders’, and the need for external recognition can thus foster international digital cooperation.

    Another critical point highlighted during the discussions in one of the groups is that the question is perhaps too general, and it might be more important to focus on specific aspects of digital governance, in order to understand further when, or whether, the notion of digital sovereignty fosters or hinders digital cooperation on particular issues. In the second part of the networking session, the different fields in which tensions surrounding digital sovereignty arise, such as data protection, online piracy, cybercrime, cybersecurity, censorship and content moderation, government data policy, satellite internet, taxation, and cross-border data flows, were discussed further.

    IGF 2023 Open Forum #146 Disrupt Harm: Accountability for a Safer Internet

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Governments and civil society organizations have addressed technology-facilitated gender-based violence over the past three decades. However, there are gaps in current regulatory frameworks, some of which are not human rights-based and are poorly enforced. Multilayered approaches beyond criminalization are needed, including effective prevention and response strategies and remedies for survivors.

    Calls to Action

    Policymakers: develop multilayered strategies to prevent and respond to technology-facilitated GBV that are context-specific, evidence-based and grounded in human rights, in close partnership and collaboration with communities and civil society organizations.


    Civil society: demand a narrative shift: the Internet should be a space for pleasure, joy, freedom, and unrestricted self-expression for all, free from violence and discrimination, and demand enforcement of human rights based protections to make these spaces safe and gender equal.

    Session Report

    The event “Disrupt Harm: Accountability for a Safer Internet”, convened by UNFPA, aimed to explore the mechanisms through which the harms caused by technology-facilitated gender-based violence are being addressed in order to achieve a safer internet and digital spaces for women and girls in all their diversity.

    Senator Martha Lucía Mícher Camarena, chair of the Gender Equality Committee in the Mexican Senate, opened the event. Senator Mícher spoke of her experience and success in developing and  implementing legislative changes to protect women and girls in the digital space. A recent reform which was unanimously approved in the Mexican Senate represented a crucial milestone in technology-facilitated gender-based violence (TF GBV) regulation at the national level which has provided the opportunity for learning at the international level. The legislation defines  digital violence and provides a regulatory structure for protection. According to Senator Mícher, applying a legal framework that supports women and girls in the digital space is non-negotiable in our shared mission to protect women and girls’ rights as a whole.

    The following panel discussion featured contributions of Sherri Kraham Talabany, the President and Executive Director of SEED, Karla Velasco, policy and advocacy coordinator at the Association for Progressive Communications (APC), Julie-Inman Grant, Australia eSafety Commissioner, and Juan Carlos Lara, Co-Executive Director of Derechos Digitales.

    Ms. Talabany highlighted the unique challenges faced by women and girls in Iraq and the Middle East. In this region, high internet penetration co-exists with gender inequality and conservative norms, leading to significant vulnerability of women and girls to gender-based violence (GBV) and TF GBV. Notably, 77% of the population is online, and 50% of Iraqi women and girls have experienced TF GBV or know someone affected by it. This online violence often spills into the real world, resulting in murder, honor killings and increased suicide rates. Ms. Talabany also highlighted the need for a regional approach to address the Middle East's unique challenges. SEED has initiated a TF GBV task force in Iraq, which focuses on the protection of women's human rights while safeguarding online freedom of expression. Urgent needs that emerged from this intervention include secure reporting, investigations, and protection by specialized agencies with skilled personnel, support from NGOs well-versed in this issue, and engagement from tech companies, which must adopt a human rights-based and survivor-centered approach.

    Karla Velasco spoke to the work of APC in women's rights, sexual rights, and feminist movements in the majority world. This work started in 2006, at a time when terms like "online gender-based violence" didn't exist. To have seen TF GBV at the forefront of international agendas and being discussed at events like the IGF for the past few years has been a remarkable achievement for APC and CSOs alike. Karla emphasized the importance of amplifying the voices of women and individuals with diverse genders and sexualities in the digital realm. She proposed a shift in focus from conceptualizing TF GBV to emphasizing remedies and responses. She also encouraged going beyond addressing the gender digital gap and women's access to the Internet. Instead, she urged consideration of how people connect online, and how these online interactions and challenges cut across intersectionality and location. Lastly, Ms. Velasco called for a change in the narrative surrounding women and girls' use of the Internet, celebrating digital spaces as sources of pleasure, fun, and creativity.

    Commissioner Julie-Inman Grant, Australia eSafety Commissioner, discussed the pivotal role of the eSafety Commission in coordinating legislation within the Commonwealth and promoting online safety. She underscored the necessity for evidence-based solutions over one-size-fits-all approaches and stressed the importance of recognizing the diverse impact on the most vulnerable communities. To illustrate this perspective, she provided the example of Indigenous communities and women with intellectual disabilities in Australia, who contend with intersectional layers of vulnerability and discrimination, thereby shaping a distinct experience of TF GBV in comparison to the broader Australian population. This underscores the critical need to consider these and other intersections and collaborate with the affected communities to design effective solutions. 

    Juan Carlos Lara highlighted that Derechos Digitales recognizes the internet as a space with both risks and opportunities. The digital realm offers a platform for social and justice-related demands, as well as a means to prevent and address GBV. While legislative efforts are crucial in tackling TF GBV, it's important to acknowledge that this is a multifaceted issue that requires a combination of approaches. Legislation alone is insufficient; it must be effectively enforced. Moreover, any legislation should carefully consider the rights of survivors, including privacy, freedom of expression, and access to information. Given that vulnerable groups are disproportionately affected, adopting an intersectional approach that accounts for contextual and social differences is imperative. Criminalization, in and of itself, isn't the panacea; instead, education and legislation should aim to mitigate harm without perpetuating violence.  Derechos Digitales calls for strengthening digital rights, encompassing privacy, security, and freedom of expression. Any legislation addressing TF GBV must carefully consider these aspects, and the forthcoming guidance created in collaboration with UNFPA is anticipated to facilitate this ongoing process.

    At the close of the session, panelists responded to questions from the audience on national progress on the Online Safety Bill, the creation of data on TF GBV without relying on platforms, and international policy progress in addressing TF GBV.

    Finally, the event was closed by Ms. Eiko Narita, UNFPA Japan Office Chief, who underscored the need for collective action to combat the harms encountered on the Internet. She also emphasized the significance of acknowledging CSOs, which provide the vital perspective of women's and girls' experiences and also hold a critical accountability role. She encouraged the attendees to carry forward the meaningful discussions initiated at the IGF.

    The event witnessed a large turnout, with around 64 in-person and 15 virtual attendees. Numerous civil society and government representatives expressed confidence in using the outcomes of the discussions as a key contribution to their future endeavors. Several attendees formed valuable connections with the panelists, paving the way for future collaborative initiatives.

    IGF 2023 Open Forum #67 Internet Data Governance and Trust in Nigeria

    Updated:
    Data Governance & Trust
    Session Report

    NIGERIA OPEN FORUM AT THE UNITED NATIONS IGF 2023.

    Wednesday, 11th October, 2023.

    KYOTO, JAPAN.

    Organized by:

    • National Information Technology Development Agency
    • Nigeria Internet Governance Forum MAG (NIGF MAG)
    • Nigerian Communications Commission (NCC)
    • Nigeria Internet Registration Association (NiRA)
    • Internet Society Nigeria Chapter (ISOC NG)
       

    Speakers

    1. Hon. Adedeji Stanley Olajide - Chairman House Committee on ICT and Cybersecurity.
    2. Mr. Bernard Ewah – Lead Paper Presenter, National Information Technology Development Agency.
    3. Mrs Nnena Nwakamma - World Wide Web Foundation.
    4. Dr. Chidi Diugwu - Nigerian Communication Commission.
    5. Dr. Jimson Olufuye – Konteporary Konsulting Limited, Business Community.

    Moderators

    1. Sen. Shuaibu Afolabi - Chairman Senate Committee on ICT and Cybersecurity - Chair of Session.
    2. Engr. Kunle Olorundare - Online Moderator, President, Internet Society, Nigeria Chapter (ISOC NG).

     

    Onsite Facilitators

    1. Mrs. Mary Uduma – Chair, Africa Internet Governance Forum.
    2. Mr. Igonor Oshoke - Program Manager, Nigeria Internet Governance Forum.

     

    Rapporteurs:

    1. Mrs. Uchechi Kalu
    2. Nitabai Prosper Dominic
    3. Zarah Wakil

    Internet Data Governance and Trust in Nigeria

    Introduction

    In the fast-evolving digital landscape of Nigeria, where the internet has woven its way into every facet of life, the topic of Internet Data Governance and Trust has become paramount. With an impressive 156,987,433 active internet subscriptions and a broadband penetration of 48.49% as of February 2023, Nigeria is not just the most populous nation in Africa but also a vast market ripe with immense ICT investment opportunities.

    This Open Forum, led by a paper presented by Mr. Bernard Ewah of the National Information Technology Development Agency (NITDA), delves into the intricacies of data governance and trust, emphasizing the shifting dynamics of data in a world where data has transitioned from a mere commodity to a valuable resource. As the custodians of data, regulators must now navigate this transformative landscape, seeking a balance between data extraction and protection, all while navigating the complexities of integrating structured and unstructured data. Infrastructure development, in this context, stands as a pivotal requirement [Lead Presenter].

    The lead paper establishes that with a wealth of data comes an abundance of opportunities for all stakeholders, particularly the private sector and other interest groups, to harness this data's potential for economic growth. The task at hand is to create regulatory avenues that incentivize private sector investment in new digital infrastructure. Such governance demands policies that define the responsible use of data and clear mechanisms for execution [Lead Presenter].

    The Open Forum placed a strong emphasis on the capacity-building of users who stand to benefit from this data-driven revolution, which includes agencies like the Bureau of Public Procurement (BPP) and the Population Commission.

    In this Open Forum, civil society, represented here by Mrs. Nnena Nwakamma from the World Wide Web Foundation, underscored the need for innovation in data regulation and urged the creation of value from data beyond mere population statistics. Governance, in her view, must extend beyond silos, encouraging a dialogue between stakeholders to foster trust, even as regulatory instruments are put in place (Civil Society).

    The private sector, represented by Dr. Jimson Olufuye from Konteporary Konsulting Limited, applauded the government for its regulatory efforts, recognizing the inherent value in data. Data's availability for analysis and the subsequent boost in GDP is a testament to the potential harnessed when data flows freely and securely. To unlock the true potential of this resource, he suggested the need for frameworks for cross-border data flows and for engagement in various programs and agreements such as the CCI (Private Sector).

     

    Dr. Chidi Ugwu of the Nigerian Communications Commission outlined the different data sources and categories emerging from Nigeria, highlighting the value of metadata and its importance for regulators. The rapid movement of metadata necessitates monitoring and jurisdictional considerations. The robust regulatory instrument, the Nigeria Data Protection Regulation (NDPR), was localized, but attention is needed for data that travels beyond Nigeria's jurisdiction (Regulator).

     

    In the government's view, as presented by Hon. Adedeji Stanley Olajide, Chairman House Committee on ICT and Cybersecurity, laws that ensure data usability, security, and flexibility are paramount. The chain of data custody must be protected, and laws must be explicit and rigid to guide the principles of data. Engr. Kunle Olorundare, Online Moderator, Acting President of the Internet Society, Nigeria Chapter, underscored the importance of an open and secure internet, individual data encryption, the right to be forgotten, and the need for data protection (Government, Online Moderator).

     

    This Open Forum not only looked at Nigeria but also engaged with insights from Ghana, emphasizing the significance of regional cooperation and the importance of legislative enforcement for data protection. It's clear that in this digital age, the spotlight is on the fundamental rights associated with data, with various questions and concerns arising from different quarters (Sam George, Member of Parliament, Parliament of Ghana).

    In summary, this session explored the critical dimensions of Internet Data Governance and Trust in Nigeria, addressing the shifting landscape of data and the call for cooperation and vigilance in safeguarding this invaluable resource.

     

    Key Outputs:

    Lead Presenter- Mr. Bernard Ewah – Lead Paper Presenter, National Information Technology Development Agency

    • The era of data commoditization underscores the importance of recognizing the value of data and its implications for data subjects.
    • Regulators must remain acutely aware of the ever-evolving dynamics of data in the digital landscape.
    • Striking a balance between data extraction, value creation, and the protection of data subjects is of paramount significance.
    • The integration of both structured and unstructured data emphasizes the complexity of managing data in this digital age.
    • The development of robust infrastructure is a fundamental requirement for effectively managing and utilizing data resources.
    • There exist abundant opportunities for all stakeholders, especially the private sector and other interest groups, to leverage data for economic growth.
    • Regulations should be designed to facilitate private sector investment in the development of new digital infrastructure.
    • Effective data governance necessitates the formulation of clear policies for data use and management.
    • Strengthening the capacity of users, including organizations like the Bureau of Public Procurement (BPP) and the Population Commission, is a key factor in enhancing data utilization and governance.

     

    Civil Society -Mrs Nnena Nwakamma - World Wide Web Foundation

    Question: As civil society, briefly share your thoughts on the current state of Data Governance in Nigeria. What more needs to be done to enhance transparency and accountability in data collection, processing, and sharing practices by both private and public entities operating in Nigeria?

    • Civil society emphasizes the need for a regulatory framework that fosters innovation in Nigeria, underscoring the dynamic nature of data and the necessity to adapt to changing circumstances.
    • The true value of data lies in its application and the creation of value, shifting the focus from mere population statistics to practical utilization.
    • Data governance is a multifaceted endeavor, necessitating a holistic approach that integrates other societal elements.
    • How can regulations be innovatively designed to drive value creation from data beyond demographic statistics?
    • Emphasize the need for holistic governance, ensuring that data management aligns with other societal needs.
    • As responsible citizens, it's essential to consider which data is being generated and made accessible, highlighting the individual's role in data governance.
    • Continued dialogue and collaboration among various stakeholders are central to building trust in data governance, even in the presence of regulatory instruments.

     

    Private Sector -Dr. Jimson Olufuye – Konteporary Konsulting Limited, Business Community

    Question: Given Nigeria’s population size and the vibrant innovation ecosystem, how is the Private Sector responding to new opportunities from new data sources?

    • The private sector recognizes the immense value that can be derived from data, lauding the government for taking steps to put regulatory instruments in place, which are poised to unlock substantial potential for Nigeria.
    • Data availability for analysis is central to the private sector's response to data opportunities. This, in turn, has contributed to the significant boost in Nigeria's GDP.
    • Acknowledging the importance of cross-border data, the private sector underscores the necessity for a comprehensive framework to facilitate data exchange.
    • Engagement in various programs and the signing of agreements like the CCI are considered essential to harness the full potential of data opportunities.

     

    Regulator- Dr. Chidi Ugwu- Nigerian Communication Commission

    Question: What are the likely Data Sources and Categories of Data coming from Nigeria, and how is Nigeria giving value to this data, especially as an emerging digital economy?

    • In the realm of telecommunications, the annual contribution to Nigeria's GDP stands at a significant 14 percent. This statistic highlights the profound impact of telecommunications on the nation's economic landscape, driven in part by data-related activities.
    • The core focus when addressing data is understanding its essence. Metadata, which represents structured data, has garnered substantial attention from regulators due to its relevance in data governance and security.
    • An essential aspect of data management is examining how data is generated and profiled, especially in the context of email usage. Understanding the intricacies of data usage is vital for effective governance.
    • Metadata, often regarded as "data about data," carries immense significance and requires vigilant monitoring. The pace at which metadata can traverse networks is compared to the speed of light, underscoring its potential for impact and risk.
    • While the Nigeria Data Protection Regulation (NDPR) has been localized to address data governance within Nigeria's borders, questions arise regarding data that transcends the country's jurisdiction. It is crucial to comprehend the extent to which data travels beyond national boundaries.
    • An essential aspect of data governance is the duty of care held by data controllers and processors. They bear the responsibility of safeguarding data and ensuring compliance with regulations.
    • Regulatory bodies such as NCC have introduced robust instruments and safeguards to protect individuals and their data. These instruments play a vital role in ensuring data security and governance.

    Government - Hon. Adedeji Stanley Olajide, Chairman House Committee on ICT and Cybersecurity

    Question: How is the Nigerian government managing the outcomes of data governance regulations (institutionalising the implementation, monitoring, disclosure, prosecution, outcomes, and continuous evaluation as regards data governance)?

    Promotion of Data Laws:

    • It is imperative to promote laws that render data not only usable but also secure and flexible. These laws should emphasize the need for data consistency, ensuring that data maintains its integrity and reliability.

    Protecting the Chain of Data Custody:

    • The Government now focuses on safeguarding the chain of data custody, which is fundamental in data governance. It is essential to secure and manage data through its entire lifecycle, from creation to disposal.

    Clarity and Stringency of Laws:

    • Data governance laws should be characterized by clarity and strict rules, leaving no room for ambiguity. Stringent regulations are necessary to uphold data security and compliance.

    Data as a Moving Target:

    • In the dynamic landscape of data governance, data behaves as a "moving target." Understanding and addressing this dynamism is crucial for effective data management and legislation.

    Revamping Laws for Guiding Data Principles:

    • Laws pertaining to data governance need to be continuously revamped to ensure their alignment with the evolving principles of data management and protection.

     

    Civil Society: Engr. Kunle Olorundare- Online Moderator, Acting President (Internet Society, Nigeria Chapter)

    • The Internet Society strongly advocates for the open nature of the internet, recognizing it as the primary source of data generation in the digital age. This commitment aligns with the Internet's role in enabling data-driven processes.
    • Ensuring data is kept encrypted and secure reflects a commitment to safeguarding individuals' personal information.
    • The Internet Society places importance on the right to be forgotten, acknowledging individuals' prerogative to have their personal data removed from public visibility. This right is integral to data privacy and protection.
    • The Internet Society underscores the necessity for a secure internet environment. This emphasis aims at preventing unauthorized individuals or entities from accessing an individual's data.

    Africa Parliamentary Network: Sam George, Member of Parliament, Parliament of Ghana

    What can AFRINIC do to ensure Data Governance?

    • Thorough examination of the African Union's data policy is crucial. Understanding and aligning with this policy framework can play a vital role in shaping data governance practices in the region.
    • It's essential to be cautious not to confine data governance efforts to mere checkbox compliance. The experience of Ghana in passing the Data Protection Law in 2012 highlights the importance of moving beyond formalities to ensure effective implementation.
    • To bolster data protection efforts, portfolios within government ministries should allocate resources dedicated to the implementation of data protection laws.
    • Governments have increasingly recognized data as a fundamental right. This acknowledgment underscores the importance of safeguarding data and the privacy of individuals in the digital age.

     

    Other Key Outputs

    • The internet has become an essential part of life in Nigeria, offering opportunities and challenges in data governance and trust.
    • Nigeria's significant population size and ICT potential make it a prime candidate for substantial data-related investments.
    • Comprehensive data governance policies and frameworks are needed to address data misuse, breaches, and the loss of personal information.
    • The enactment of the Nigeria Data Protection Regulation (NDPR) in 2019 marked a step toward data governance and trust but faces challenges in enforcement and awareness.
    • Multi-stakeholder collaboration is essential to develop effective data governance policies, involving government, civil society, and the private sector.
    • Increasing awareness and capacity-building programs are crucial for stakeholders to understand data governance principles and responsible data use.
    • Transparency and accountability in data governance practices, along with stakeholder engagement in decision-making processes, are vital for building trust.
    • Data sources include Social Media Data, Mobile Phone Data, Scanner or Transaction Data, Automatic Systems Data, and Geo-spatial Data.
    • The International Telecommunication Union can help in upskilling women in cybersecurity.

     

    IGF 2023 Open Forum #169 Futuring Peace in Northeast Asia in the Digital Era

    Updated:
    Key Takeaways:

    There is a need for comprehensive digital literacy programs in Northeast Asia that involve collaboration between the private sector, non-governmental organizations (NGOs), and government entities, in order to equip both young people and older generations with digital knowledge, access, and tools, and to bridge skill gaps and provide cross-border capacity building.

    ,

    Historical issues in Northeast Asia have led to limited cross-border interactions and shared narratives, which need to be addressed through inclusive algorithms and multistakeholder dialogues. Arbitration and mediation skills, how to engage effectively online, and recognizing mis- and disinformation should be included as tools to solve conflicts in both virtual and physical worlds.

    Calls to Action

    1. Increased youth engagement in internet governance: younger generations are “digital natives” and perspectives of age diversity need to be considered when developing policy regulations and codes of conduct.

    2. Put a positive spin on "fragmentation" and call it diversification in order to constructively seek different angles of cooperation and to carve out space for regional diversity.
    Session Report

    Comprehensive Digital Literacy Programs: One of the key takeaways from the session was the need for comprehensive digital literacy programs in Northeast Asia. While there were existing programs offered, for instance, by private entities, they often lacked inter-regional and public-private coordination. It was emphasized that addressing digital literacy should involve collaboration between the private sector, non-governmental organizations (NGOs), and government entities. The goal was to equip both young people and older generations, who are both more vulnerable online, with digital knowledge, access, and tools, in order to bridge skill gaps and provide cross-border capacity building. Content should include an understanding of tech concepts, how to engage effectively online, recognizing disinformation and misinformation, and fostering an open and inclusive mindset. It was suggested that arbitration or mediation skills be included in these programs as tools to solve conflicts in both the virtual and physical worlds.

    The Metaverse and Internet Governance: The discussion highlighted the emergence of the metaverse sphere and the importance of shaping its development at its infancy. Participants noted concerns about the lack of educational content, the need for balance between state powers and bottom-up approaches, accessibility and the need for privacy and data protection. They stressed the need for regional platforms to counterbalance the influence of Western tech giants and for a code of conduct to be developed. It was also recognized that historical issues in Northeast Asia have led to limited cross-border interactions and shared narratives, which needed to be addressed through inclusive algorithms and multistakeholder dialogues.

    Increased Youth Engagement in Internet Governance: The session underscored the importance of actively involving youth in policy discussions at early stages of policy-making processes. It was noted that the internet should not inherit the challenges of the physical world. As younger generations are “digital natives”, perspectives of age diversity needed to be considered when developing policy regulations and codes of conduct.

    Carving out Space for Diversity: Noting that countries are at different stages of development, youth called for increased public-private collaboration and intergenerational engagement. Noting increased fragmentation in the region, speakers called instead for giving it a positive spin and calling it diversification, in order to constructively seek different angles of cooperation. To bridge differences and facilitate cooperation, youth called for digital literacy programs that carve out space for regional diversity and respect the nuances of Northeast Asian societies and cultures.

    IGF 2023 Open Forum #160 The Postal Network: A Vehicle of Digital Inclusion

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    (1) The postal network is immense and extends to many of the remotest communities. Its reach can be maximized to help boost digital inclusion and drive progress on the Sustainable Development Goals.

    ,

    (2) There are clear benefits that the post office brings as a vehicle for sustainable digital inclusion, especially for citizens and businesses in underserved communities

    Calls to Action

    (1) Governments should include posts in their digital transformation and e-commerce strategies and should reflect on the WSIS commitment they made 20 years ago to connect all post offices.

    ,

    (2) Collaboration between global connectivity initiatives such as CONNECT.POST and GIGA should be strengthened to maximize synergies for achieving digital inclusion utilising schools and post offices.

    (3) Push for greater awareness among policymakers of the potential of the postal network to contribute to governments' development goals and the SDGs.

    Session Report

    The first ever UPU IGF Open Forum delivered a lively and engaging discussion on the role of the postal network in digital inclusion and sustainable socio-economic development, particularly in underserved communities where citizens and small businesses are still disconnected from the digital economy.

    The speakers, from various regions and backgrounds, highlighted the importance of the postal service in providing technical solutions, advancing cybersecurity, and supporting small and medium-sized enterprises to participate online. The discussions focused on the challenges and opportunities for postal networks to adapt to the digital age and reposition themselves in order to provide universal services and reliable logistics delivery services.

    Speakers called attention to the successful collaboration between governments and postal operators in Zimbabwe and Ghana, where the post office is recognized as an implementation partner for government policy.

    Also highlighted was the importance of educating postal employees around digital capabilities and utilizing the postal network to promote digital literacy in the communities it serves. The importance of digital literacy and skills particularly in rural areas to ensure that communities can take advantage of the opportunities brought by digitalization was clearly emphasised.

    The CONNECT.POST initiative led by the UPU aims to ensure that every post office has sufficient access to the internet by 2030. The session emphasized the need for governments, the private sector, and development partners to work together to achieve this goal.

    The importance of integrating the postal network into governments' digital plans was highlighted, following which questions and feedback from the audience were facilitated.


    IGF 2023 DC-SIDS Addressing the Challenges of a Unified SIDS Digital Platform

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Global communities should seek to adopt and apply UNESCO's ROAM-X Internet Universality Indicators framework as a tool for evaluating and enhancing Internet development in Small Island Developing States (SIDS).

    ,

    Using readiness assessments and similar toolkits can help strategically address digital challenges and promote inclusive growth in SIDS regions.

    Calls to Action

    Local SIDS communities and global fora must coordinate in order to actively voice SIDS-specific issues, including the establishment of Internet Exchange Points (IXPs), strengthening cybersecurity, and fostering local digital fora.

    ,

    Supporting organisations interested in SIDS issues can work together, co-ordinated by the DC-SIDS, to identify solutions and help implement digital tools to catalyze cooperation between and within SIDS regions ensuring best fit and sustainable approaches.

    Session Report

    Introduction & Setting the Scene

    During this Roundtable, the lead discussants addressed several critical aspects related to Small Island Developing States (SIDS) and the pressing need for a unified SIDS digital platform. 

    Tracy Hackshaw, one of the co-chairs of the Dynamic Coalition on Small Island Developing States (DC-SIDS), initiated the session by underlining the importance of collaboration and communication among SIDS in tackling, at a high level, the digital challenges they face.

    Rodney Taylor, Secretary General of the Caribbean Telecommunications Union (CTU), and Maureen Hilyard, the other DC-SIDS co-chair, delved into the challenges encountered by SIDS when engaging with international organizations. Insufficient resources and personnel limit their participation in various processes. However, they celebrated the success of the first SIDS Internet Governance Forum in 2022 and advocated for greater attention to digital governance in these regions.

    Andrew Molivurae provided insights into the Pacific Internet Governance Forum (IGF), highlighting its fifth year and the diverse topics covered, such as emerging technologies, connectivity, and cybersecurity. Collaboration with organizations like AuDA, InternetNZ, and ISOC was highlighted.

    Maureen Hilyard introduced the concept of "platform cooperativism" as a strategic approach, focusing on human rights and creating a democratic digital economy.

    Additionally, the use of Internet universality indicators was presented, promoting principles such as human rights and openness in the digital realm. The operationalization of these principles was provided by UNESCO's Tatevik Grigoryan by introducing the ROAM-X framework. By embracing ROAM-X, SIDS can address digital challenges and promote inclusive growth in these regions.

    The discussion then shifted towards the importance of multi-stakeholder participation and capacity building, emphasizing that this approach is relevant for all countries, not just developing ones.

    The necessity for continuous collaboration in a working environment, rather than just periodic events, was stressed. The success of long-running events like the Caribbean Internet Governance Forum (IGF) was showcased as a model of effective policy framework development and updates.

    The lead discussants concluded their segment by emphasizing the need for a unified SIDS digital platform to address the challenges faced by SIDS effectively. They called for active engagement and visibility in the Internet Governance Forum (IGF) space to amplify the voices of SIDS.

    Discussant Interactions/Roundtable Discussion

    The second part of the workshop opened the Roundtable to interactions with additional discussants in an open discussion format. The discussions revolved around the possibility of launching a unified SIDS digital platform and the importance of breaking down silos to maximize resources and enhance coordination.

    Discussants stressed the need to start with a clear understanding of the nature of interaction desired and who should be involved. Mapping the SIDS landscape and focusing on content creation were deemed essential steps.

    They then discussed the tools, materials, and resources necessary to develop the unified platform, following which they emphasized the importance of involving young individuals willing to contribute their time and effort to move the platform forward.

    There was a call for government involvement, with specific attention to UNESCO's National and Regional Offices and the importance of a broader solution to address SIDS' issues.

    The Global Digital Compact (GDC) was highlighted as a driving force for the initiative, with an emphasis on the need for accessible information and platforms for interaction across different sectors.

    The Roundtable concluded with an exploration of the upcoming United Nations SIDS4 Summit, to be held in Antigua and Barbuda in late May 2024, as an opportunity to advocate for the unified platform and the importance of intersessional engagement to advance SIDS' issues.

    Conclusions and Key Takeaways

    The key takeaways from the workshop included the importance of adopting the ROAM-X framework to evaluate and enhance internet development in SIDS. By embracing ROAM-X assessments, SIDS can strategically address digital challenges and promote inclusive growth.

    Moreover, local communities and global fora must coordinate efforts to actively voice SIDS' issues. This includes the establishment of Internet Exchange Points (IXPs), strengthening cybersecurity, and fostering local digital fora.


    IGF 2023 Launch / Award Event #176 Africa Community Internet Program Donation Platform Launch

    Updated:
    Digital Divides & Inclusion
    Session Report

    KEY TAKEAWAYS

    • There should be a rebirth of community centers that enable connectivity in rural and urban poor communities
    • Through WiFi, 3G, 4G, and 5G networks, as well as satellite networks, an internet pack is being designed so that anyone, anywhere in the world, can connect
    • Enhancing teacher training in rural communities using the internet backpack
    • It is important to empower people with the skills needed to engage.
    • Regulators should improve digital policies in rural communities

    IGF 2023 WS #220 Cybersecurity of Civilian Nuclear Infrastructure

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The variety of risks posed by cyber operations against civilian nuclear infrastructure - ranging from harm to the life and health of individuals and environmental damage to significant psychological impact - should be balanced with the opportunities that the nuclear energy sector offers for clean and accessible energy, particularly thanks to small modular reactors and their use to power remote areas and AI.

    ,

    States are chiefly responsible for formulating and adopting relevant technical regulations, norms, and legal obligations to protect civilian nuclear infrastructure against cyber operations. These may be negotiated and agreed through the International Atomic Energy Agency (IAEA), the UN Open-Ended Working Group, or other forums. But since cybersecurity cuts across sectors, actors and disciplines, states need to work with other stakeholders.

    Calls to Action

    There is a need to pay greater attention to cybersecurity in the context of civilian nuclear infrastructure to avoid a variety of risks posed by cyber operations in this context. This includes better understanding the threat landscape, dispelling myths about the interface between cyber and nuclear safety and security, and enhancing dialogue between actors operating in both sectors - including technical, policy and legal actors.

    ,

    In light of the human, environmental, and national and international security risks posed by cyber operations against civilian nuclear infrastructure, states need to enhance the cybersecurity of the sector by different means, including technical, legal, and policy approaches. They should cooperate with the IAEA and the private sector and bring the issue to the agenda of existing multilateral discussions.

    Session Report

    This session was co-chaired by Dr Talita Dias and Rowan Wilkinson from the International Law Programme, Chatham House. It focused on the convergence of cyber and nuclear risks from a technical, policy, security, and legal perspective. To shed light on those different perspectives, the session benefited from the inputs of Marion Messmer (International Security Programme, Chatham House), Tariq Rauf (former Head of Nuclear Verification and Security Policy Coordination at the International Atomic Energy Agency (IAEA)), Dr Giacomo Persi Paoli (Head of the Security and Technology Programme at the United Nations Institute for Disarmament Research – UNIDIR), Michael Karimian (Director for Digital Diplomacy at Microsoft, Asia Pacific), Tomohiro Mikanagi (Legal Advisor to the Japanese Ministry of Foreign Affairs), and Dr Priya Urs (Junior Research Fellow in Law at St John’s College, University of Oxford).

    As previous examples of cyberattacks against nuclear infrastructure in Iran, India, North and South Korea, Norway, Germany, the US, Ukraine, and the International Atomic Energy Agency (IAEA) illustrate, the actual and potential risks of such attacks include both physical and non-physical harms such as: a) extraction of sensitive information about nuclear capabilities; b) malfunctioning of equipment, including nuclear enrichment centrifuges; c) disruption of energy supplies; d) increased radiation levels; e) potentially disastrous consequences for the environment, human life and health; f) psychological harm to individuals, such as trauma and fear arising from the threat of those consequences; and g) reputational harm to the nuclear energy sector, as well as States, international organisations and companies involved in the provision of nuclear energy or cybersecurity. According to the results of our interactive survey during the session (using Mentimeter), physical consequences, in particular increased radiation levels, are the number one concern among both in-person and online participants.

    Malicious cyber operations threatening civilian nuclear infrastructure may take a variety of forms. They primarily include: a) disruption of software (e.g., through malware or other forms of malicious code); b) disruption of hardware, including malfunction of equipment (e.g., through physical penetration via USB sticks); c) data gathering or surveillance operations (e.g., through phishing and other social engineering tactics), and d) information operations (e.g., mis- and disinformation about nuclear risks). Such operations can either intentionally target civilian nuclear infrastructure or cause collateral damage to it. The session also highlighted that these risks have now been amplified by: a) the push for green energy; b) the increased use of small modular reactors and microreactors; c) the use of nuclear energy, including those reactors by private companies, to power AI systems; d) the use of AI to automate and diversify the types of cyber operations against different targets, including critical infrastructure and potentially nuclear infrastructure.

    There was agreement that these risks remain significant and concerning, as even remote or uncertain risks can have catastrophic consequences for humanity. While it was once thought that civilian nuclear infrastructure was safe from cyber threats because of the use of specific controlling systems, the need to update these systems, including migrating to off-the-shelf software, has meant that cybersecurity is now a sector-wide concern. New developments in the civilian nuclear sector, such as small modular and micro nuclear reactors, pose both challenges and opportunities. On the one hand, they are safer by design and have enabled nuclear energy to reach remote parts of the world. At the same time, the increasing number and variety of those reactors, coupled with the fact that many use off-the-shelf software, may also increase the attack surface and raise cybersecurity vulnerabilities, particularly given long IT supply chains and inconsistent national standards for their development and operation. The war in Ukraine has also highlighted the vulnerability of civilian nuclear infrastructure to cyber and physical attacks in parallel. A particular risk of these attacks is that they lead to nuclear reactors being switched off or disconnected from the power grid, which could interfere with their cooling system and lead to a meltdown situation. These risks could be mitigated by, inter alia, having sufficient backup generators on site.

    The IAEA has issued more than 30 technical guidelines and recommendations to try and mitigate the risks arising from cyber operations against the civilian nuclear sector. Their main message is that cybersecurity is vital for nuclear safety, i.e. the physical integrity of nuclear power plants, radioactive materials, and nuclear personnel, as well as nuclear security, i.e., protection from criminal or other intentional unauthorized acts involving or directed at nuclear and other radioactive material, associated facilities and associated activities, including confidential information and nuclear control systems. However, it was stressed that, despite the IAEA’s efforts, at the end of the day, nuclear security is a national responsibility. Thus, the IAEA Nuclear Security Series complements international legal instruments on nuclear security and serves as a global reference to help parties meet their obligations. While the security guidance is not legally binding on Member States, it is widely applied. It has become an indispensable reference point and a common denominator for the vast majority of Member States that have adopted this guidance for use in national regulations to enhance nuclear security in nuclear power generation, research reactors and fuel cycle facilities as well as in nuclear applications in medicine, industry, agriculture, and research. Key measures recommended in the Series include audits, risk assessments, training, awareness, and capacity-building, as well as international cooperation. It was also noted that the Convention on the Physical Protection of Nuclear Material (CPPNM) requires States Parties to protect nuclear material used for civilian purposes from cyber operations but lacks universality (notably, Iran is not a party to it).

    Within the United Nations (UN), States have been discussing cybersecurity for 25 years. Notably, UN Member States have endorsed 11 Norms of Responsible State Behaviour in cyberspace, most of which seek to protect critical infrastructure from malicious cyber operations. However, the protection of the nuclear sector from cyber operations has not been specifically raised or addressed as of yet. It was argued that the main UN forum for cybersecurity discussion – the Open-Ended Working Group on the security of and in the use of information and communications technologies – is probably not well-placed to discuss how those general norms can be implemented in specific sectors, such as nuclear energy. The idea was put forward to establish a dedicated forum, within or outside the UN, to discuss the operationalisation of the Norms, including in the nuclear sector.

    The key role that the private sector plays in the protection of the civilian nuclear infrastructure, including power plant operators and software companies, was also discussed. Steps that can be taken by private actors to increase such protection include embedding cybersecurity by design in product development, staying ahead of malicious cyber actors, such as by better understanding the cyber threat landscape and sharing threat intelligence, providing appropriate education and training on the cybersecurity of their products, as well as engaging with States and civil society to strengthen existing technical, policy and legal protection. One prime example of a multistakeholder initiative pushing for the protection of critical infrastructure is the Cybersecurity Tech Accord. This could serve as inspiration for future policy efforts to enhance the cybersecurity of the civilian nuclear sector.

    The session concluded with a discussion of the international legal framework applicable to the issue. It was noted that while international law lacks specific rules on the cybersecurity of the civilian nuclear sector, existing rules apply. Notable among these are the rules of sovereignty, non-intervention, and due diligence, which apply by default in cyberspace, including in the context of cyber operations against the civilian nuclear sector. Sovereignty prohibits States from carrying out cyber operations that cause physical or functional effects in the territory of another State, or that undermine inherently governmental functions, including in the context of civilian nuclear infrastructure. Non-intervention prohibits States from carrying out or supporting cyber operations that coercively interfere in the internal or external affairs of another State. Insofar as nuclear energy policy falls within this ambit, cyber operations that threaten the sector are prohibited intervention. Finally, it was noted that States must exercise due diligence in protecting domestic and foreign civilian nuclear infrastructure from cyber operations as far as possible. Specific duties to protect the civilian nuclear sector from harmful operations, including by digital means, arise from the Nuclear Terrorism Convention and the CPPNM.

    IGF 2023 WS #496 Scramble for Internet: you snooze, you lose

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

    With the completion of the 4th Industrial Revolution, the international community has stepped into a new technological age, which could threaten the existence of a unified Internet if stakeholders do not take appropriate measures.

    ,

    A certain level of fragmentation is acceptable and has been going on since the creation of the Internet. National jurisdictions, regulations and cultural differences create barriers between users, but the technical level of the Internet remains uniform.

    Calls to Action

    Decision makers have to listen to the views of all stakeholders involved in Internet governance processes. Thus, it is very important to pay special attention to the interests of the private sector.

    ,

    Despite the unequivocal need to respect national jurisdictions, the international community should still try to achieve global solutions at UN platforms such as the IGF and within the framework of initiatives such as the Global Digital Compact.

    Session Report

    Speakers of the session included the Director of the Department of Commercial Resource Management at MTS, Olga Makarova; the President of the OpenLink Group, Milos Jovanovic; the head of ISOC Bolivia, Roberto Zambrana; the Chairman of the Board of Trustees at the Dot Africa Foundation, Barrack Otieno; and the Director of the Center for Global IT-Cooperation, Vadim Glushchenko. The moderator of the discussion was the Chairman of the Council of the Center for Global IT-Cooperation, Roman Chukov.

    “Any attempt to confiscate IP addresses from one or more states could have dire consequences for the Internet. We will face deep structural fragmentation. We will witness a real “Splinternet”: without trust and unique identifiers, the Internet will lose its global nature,” Olga Makarova warned at the beginning of her speech.

    She also proposed, for the first time, using mathematics rather than words when assessing the risks of fragmentation in each key area. Mathematical models for assessing fragmentation risks would take as parameters the technical, commercial and political factors that may influence fragmentation. These technical, commercial and political factors would in turn be assessed according to their distribution, influence, focus and nature. This would make it possible to form a more precise and unambiguous assessment of fragmentation threats and to reduce the uncertainty of concepts such as “multi-stakeholder”, which today everyone interprets in their own way. In her opinion, without the use of mathematical models, deliberate monitoring of fragmentation threats and efforts to combat them are not possible.
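
    To make the proposal more concrete, the following is a purely illustrative Python sketch of how such a scoring model might be structured; the speaker presented no formula or code, so the attribute scales, weights and numbers here are assumptions intended only to show the general shape of a weighted risk assessment.

        # Hypothetical sketch only: the speaker did not publish a formula, so the
        # attribute definitions, weights and numbers below are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class Factor:
            distribution: float  # how widely the factor is present (0-1)
            influence: float     # how strongly it affects interoperability (0-1)
            focus: float         # how targeted it is at core Internet functions (0-1)
            nature: float        # how structural or lasting the factor is (0-1)

            def score(self) -> float:
                # Simple average of the four attributes; a real model could weight them.
                return (self.distribution + self.influence + self.focus + self.nature) / 4

        def fragmentation_risk(factors, weights):
            # Weighted aggregate risk in [0, 1]; weights are assumed to sum to 1.
            return sum(weights[name] * factor.score() for name, factor in factors.items())

        # Made-up inputs for a single key area (e.g. the addressing system):
        factors = {
            "technical": Factor(0.2, 0.6, 0.5, 0.3),
            "commercial": Factor(0.4, 0.5, 0.3, 0.4),
            "political": Factor(0.7, 0.8, 0.6, 0.5),
        }
        weights = {"technical": 0.3, "commercial": 0.3, "political": 0.4}
        print(round(fragmentation_risk(factors, weights), 2))  # prints 0.5

    In a sketch of this kind, each key area receives a single score, which is the sort of uniquely defined assessment the speaker argued for; any real model would require agreed definitions and calibrated weights.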

    Barrack Otieno noted that it is important for the Global South that Internet design principles be taken into account when addressing fragmentation. Among other things, those countries face another serious problem - Internet shutdowns and network malfunctions, whether because equipment fails or because of low levels of training in the field of information security. “We see that areas experiencing multiple internet outages do not have network management mechanisms in place. I'm talking about national forums or opportunities to organize discussions like this that bring stakeholders together to discuss, on an equal footing, issues affecting Internet development in specific jurisdictions. I would add that it is important for all stakeholders to pay close attention to their roles and responsibilities in any country,” he concluded.

    Roberto Zambrana added that another type of fragmentation that particularly affects the Global South is related to business models for the provision of Internet services: access to the Internet is limited by the high cost of communication. “In fact, the main problem is, of course, the lack of action. Whether it comes from government, the private sector, or even civil society, the problem is that this inaction is preventing countless people from feeling part of a global society, connected to the world through the Internet,” he said.

    In his speech, Milos Jovanovic pointed out that infrastructure is one of the components of fragmentation. Since every state wants to protect its citizens, it not only requires international IT companies to comply with the laws of the country where they operate, but also tries to ensure that the infrastructure is protected from hacker attacks. And for this it is necessary to produce equipment and create software independently, in other words, to have technological sovereignty so as not to be dependent on external players.

    Elaborating on the topic of technological sovereignty, Vadim Glushchenko drew attention to the fact that preventing fragmentation of the Internet is an important part of the initiative of the UN Secretary-General to create a Global Digital Compact. In his opinion, the upcoming intergovernmental negotiation process should lay down general principles that are designed to define fair liability criteria for global digital platforms, and give states the opportunity to independently regulate national segments of the Internet.

    Summarizing, Roman Chukov concluded that in matters of fragmentation, the global community needs to turn to the opinion of the Global South. “We are trying to avoid a situation when fragmentation is the result of a lack of technology and critical infrastructure in developing countries. Therefore it is necessary to promote international cooperation so that all countries have the necessary resources to maintain a stable Internet connection”.

    IGF 2023 Lightning Talk #151 How deep is your fake: Online Fraud Techniques

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The amount of fake information on the Internet has been growing rapidly since 2021, increasing exponentially. This is certainly due to the growing number of international conflicts.

    Calls to Action

    There is an increase in the use of deepfake technology. Although the number of deepfakes used for political purposes is not yet significant, over time this type of disinformation will pose a serious threat to national security. There is therefore a need not only for regulatory measures but also for action in the area of fact-checking.

    Session Report

    Timofey V, head of the development department for strategic directions at ANO Dialog, gave a masterclass on how to identify and combat different types of fakes on the Web. In particular, he emphasised the growing number of deepfakes on the Internet and their danger to the national security of countries.

    Most notably, he shared with the audience the results of an independent analysis conducted at ANO Dialog: statistics on fakes since 2021, which showed that from 2021 to 2022 the number of fake news topics more than doubled - from 1,725 to 3,995 misinformation topics.

    Timofey V highlighted that in this year alone 3,034 fake topics have already been discovered (data excluding copies and reposts on social networks) and that it is necessary for civil society to coordinate its efforts in fact-checking activities.

    IGF 2023 WS #308 Public-Private Data Partnerships in the Global South

    Updated:
    Data Governance & Trust
    Key Takeaways:
    1. Public-private data partnerships have tangible benefits, particularly in moments of discontinuity or crisis. But building relationships requires time and trust building, relying on informal relationships and intermediaries. Standard operating procedures may help, and data interoperability is also needed.

    2. Public-private data partnerships may require cross-border data sharing. The regulation of data protection and privacy is important, and political will is key.
    Calls to Action
    1. Public organizations should prioritize initiatives based on multiple local needs, including but not limited to the country's developmental level, skills, and culture.

    2. Private sector: standard operating procedures should be developed to facilitate monitoring and communication in initiatives, including clear contact points, timelines for periodic updates, resource planning, and fostering an interactive ecosystem.
    Session Report


    This session focused on public-private data partnerships in the Global South, highlighting the practical challenges and possibilities of collaboration between the public sector, private sector, and civil society to achieve Sustainable Development Goals (SDGs).

    The session began with an introduction, emphasizing the importance of data for monitoring and achieving SDGs. Philipp Schönrock (Director - CEPEI) discussed the need for a supportive environment to foster data partnerships and cited examples of successful initiatives that required building trust and iteratively refining the value proposition.

    Isuru Samaratunga (Research Manager - LIRNEasia) summarized key findings from research on public-private data partnerships in the Global South. The study involved mapping initiatives and conducting in-depth case studies. It revealed that not all SDGs were equally prioritized, with climate action, sustainable cities, and good health and well-being receiving the most focus in the Global South. The study also emphasized the importance of standard operating procedures and legal frameworks, and the role of brokerage entities in facilitating partnerships.

    Mike Flannagan (Corporate Vice President, Global Customer Success - Microsoft) mentioned that Microsoft exemplifies the approach of balancing revenue generation with philanthropic activities. The company aligns its work with the SDGs, offering substantial discounts and donations to nonprofits and introducing the Microsoft Cloud for Nonprofits. The collaboration between the public and private sectors hinges on building trust, establishing clear objectives, and finding common ground. It is essential to create incentives and frameworks that mutually benefit both parties, harmonizing their goals and efforts.

    Darlington Ahiale Akogo (Founder and CEO of minoHealth AI Labs and KaraAgro AI) mentioned that public-sector entities have extensive reach and assets, such as government extension officers in every district, which private-sector startups lack. The main opportunity lies in leveraging public-sector assets for data and solution creation. Challenges arise due to differences in communication and procedures between the public and private sectors, creating a language barrier. Clear incentives are needed to overcome these challenges. He mentioned successful partnerships in agriculture that collected data across multiple African countries for disease and pest data sets, and a partnership in healthcare for interpreting medical images. These partnerships enabled access to large datasets, which would not have been possible without the collaboration. Data protection laws are an important consideration in these partnerships, particularly in healthcare. Further, cross-cultural understanding, access to diverse data resources, and a focus on ethical considerations are pivotal in the context of data sharing and AI research, particularly in a global context. Successful public-private data initiatives require substantial investments in building trust and adaptability. The development of proof of concepts, as well as continuous iterations, plays a vital role in shaping the outcomes of these initiatives, he noted.

    Dr. Mona Demaidi (Entrepreneur, women’s rights advocate and Lecturer at An-Najah National University) emphasized the importance of international collaboration in AI research due to the need for diverse data resources, pooling computational and talent resources, promoting cross-cultural understanding, and addressing ethical concerns. These collaborations help AI research to have a global perspective and address universal challenges. Challenges in international collaboration include the lack of structured pipelines for data sharing, varying legal frameworks, and the need for transparency in data use and deployment. Data privacy and security are also important considerations.

    Rodrigo Iriani (Senior Program Manager - The Trust for the Americas) emphasized the importance of building trust and establishing proof of concepts in public-private data initiatives. He mentioned challenges in the Latin American and Caribbean region, where the data ecosystem has limited participation from the public and private sectors. Furthermore, there is a need for capacity building in digital skills and data literacy, promoting co-creation processes, and developing local solutions to local problems.

    The online and in-person audience asked questions related to challenges in forming international partnerships and data interoperability. The discussion then turned to international standards for data sharing. A participant raised concerns about the lack of uniformity in data regulations, licensing, and accessibility in different countries. The questions centered on how to establish international standards for data sharing and data governance. The speakers replied by highlighting the benefits of accessing diverse data resources, pooling resources, promoting cross-cultural understanding, addressing ethical concerns, and ensuring data privacy and security.

    As the session neared its end, the audience engaged in a poll that revealed that a lack of incentives (71%) was the primary reason preventing private sector data sharing, followed by the low capacity of governments (24%) and policies that prevent data sharing (5%).

    Helani Galpaya (session moderator, CEO of LIRNEasia) summarized key takeaways: public-private data partnerships have tangible benefits, especially during crises. However, building these partnerships requires a significant time investment and the establishment of trust. Implementing standard operating procedures can enhance these collaborations. These partnerships may also require cross-border data sharing, underlining the importance of strong regulations for data protection and privacy, all of which depend heavily on the presence of political will for their success.

    Live engagement through LIRNEasia's Twitter occurred during the session (#sustainabledevelopment, #IGF2023, #datapartnerships).


    IGF 2023 Networking Session #65 The road not taken: what is the future of metaverse?

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    The main topics raised during the discussion were the ethical aspects of transferring a person’s personality to the digital world, regulation of IT companies, as well as the technical possibilities for the development of metaverses.


    Particular emphasis was also placed on one of the most controversial topics, the so-called “digital immortality”. The very structure of personality is under threat, since a person may eventually lose the sense of whether they have a physical body and of their own self-awareness.

    Calls to Action

    Metaverse-enabling technologies can help the economies of some countries, especially developing ones. For this reason, it is important to take into account the views of countries in the Global South on how the development of new technologies such as metaverses or VR should be regulated, since these technologies will have a direct impact on the people living in those states.


    The speakers concluded that metaverses have both positive and negative sides, and there is still no unity in the expert community regarding attitudes towards this technology. However, the relevance of regulating metaverses will undoubtedly only grow.

    Session Report

    Speakers of the session included the founder of the NFT marketplace Apollo42, Daniil Mazurin, and the Deputy Chairman of the Russian Church's Synodal Department for Church Relations with Society and the Media, Vakhtang Kipshidze.

    The main topics raised during the discussion were the ethical aspects of transferring a person’s personality to the digital world, regulation of IT companies, as well as the technical possibilities for the development of metaverses.

    “The Metaverse is a world created by a man who claims to be flawless. However, we, people of faith, believe that the real world still has its shortcomings, and accordingly, these imperfections will sooner or later also become part of the virtual space. Therefore, it is necessary to understand exactly what values we will transfer to metaverses from the real world,” Vakhtang Kipshidze spoke about the Church’s approaches to the phenomenon of virtual worlds.

    He placed particular emphasis on one of the most controversial topics, the so-called “digital immortality”. The very structure of personality, in his opinion, is under threat, since a person may eventually lose the sense of whether they have a physical body and of their own self-awareness. “Sometimes people become obsessed with metaverses, and this is a direct path to the violation of individual freedom, so obsessions must be fought together,” he concluded.

    Daniil Mazurin suggested that metaverses will in the future help the economies of some countries, especially developing ones. For this reason, it is important to take into account the views of countries in the Global South on how the development of new technologies such as metaverses or VR should be regulated, since these technologies will have a direct impact on the people living in those states.

    “The other problem with metaverses is the hardware. At the moment, in order for an ordinary user to get into the metaverse, it is necessary not only to have a powerful computer that can run the programmes, but also additional tools like VR glasses etc. For now, this is a very expensive and complex process, but in the near future this will, of course, change,” he stated.

    The discussion received a lively response from the audience. A representative from the Pakistani tech community drew attention to the problem of regulating metaverses. The legal side of the issue is still in the so-called “gray zone”: the personal data of users of metaverse platforms is not protected, which means their use can lead to the theft of a “digital avatar”. A representative of the European Union objected, saying that regulation actually already exists; it is simply not reflected in one specialized document, but forms part of the general regulation of digital platforms, from protecting user data to the operation of recommender systems.

    The speakers concluded that metaverses have both positive and negative sides, and there is still no unity in the expert community regarding the attitude towards this technology. However, the relevance of regulating metaverses will undoubtedly only grow.

    IGF 2023 Open Forum #81 Cybersecurity regulation in the age of AI

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    1. Focus on producers rather than on end users. 2. Sectoral regulation, data protection and a risk-based approach.

    Calls to Action

    1. Cooperation and a multistakeholder approach, including harmonized AI certification schemes. 2. Flexible standards in order to promote the use of new technologies.

    Session Report

    Open forum

    October 11th, 10:15 (Kyoto)

    Session Report

    With the rise of AI-powered cybersecurity tools and techniques, there is a growing concern that malicious actors could use these tools to carry out more potent cyber-attacks.  As AI increasingly integrates into our daily lives there is a need to ensure that AI technologies are developed and used safely.  

    The overlap between AI and cybersecurity raises a few challenges, some of which could be addressed through effective regulation. Thus, to ensure that AI systems are cyber robust, it is important to establish clear standards and balanced regulations. The question faced by cybersecurity regulators across the globe is what such standards and regulations should include.

    Key Issues Raised

    1. Is the current cybersecurity toolkit sufficient to deal with threats to AI systems or to the data used for it?
    2. How can cybersecurity regulation help promote an ecosystem in which AI systems are cyber resilient and maintain their functionality in the face of cyber-attacks?
    3. What should governments be doing in the regulatory space to improve cybersecurity of AI systems? Is AI too dynamic for regulation?
    4. The risks of over regulation.

    Presentation summary

    Mr. Zaruk (Israel) identified three points of connection: the resilience of AI models, using AI for defense, and defending against AI-based attacks. On the resilience of AI models, the INCD focuses on common libraries and models but needs tailored approaches for AI algorithms, in the same way it does for other IT domains. The INCD has established a national lab with Ben Gurion University, with online and offline platforms for self-assessment of ML models, coordinated with academia, government and the tech giants. The second domain is using AI for defense: most tools and products already use some AI. The INCD understands the power of AI and what it can offer, so it promotes innovation in that field; its role as regulator is not to interfere, but rather to assist the market. AI helps scale critical tasks, assisting and mediating between human and machine.

    The last domain, maybe the most complex, is to defend against AI-based attackers.

    Dr. Al Blooshi (Dubai) said the aim is to enable critical infrastructure to use new technologies. AI model security and the security of AI consumers are totally different concerns: at the end of the day, an AI model is like any software used in the past, but the difference lies in how it is deployed and used. Regarding the security of AI consumers (i.e., end users), the focus is on end users rather than producers; here too, one needs to look at how the technology is used, data privacy, and in what context.

    Policy standards around AI models are progressing: the OECD principles, the NIST AI security standard, and recent EU policies. Basic principles and best practices still need to be developed. Secure by design and supply chain security remain relevant, but there should be one additional, AI-specific layer on top, and then a sector-specific layer (transport, health, banking) developed together with the regulators of those sectors. He strongly believes in a risk-based approach: too much control will limit innovation and use, while too little control will not enable across-the-board security. Dubai is developing an AI sandbox for government and already has clear guidelines on cloud privacy, which include AI. There is no need to reinvent the wheel.
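
    As a purely illustrative sketch of that layered idea (the layer names and example requirements below are assumptions for illustration, not items taken from the OECD, NIST or EU frameworks mentioned), baseline security practices, an AI-specific layer and a sector-specific layer can be composed into the requirement set that applies to a given system:

```python
# Illustrative sketch of layered, risk-based requirement composition.
# Layer names and example requirements are assumptions, not from any standard.
BASE_LAYER = {"secure_by_design", "supply_chain_security", "vulnerability_handling"}
AI_LAYER = {"training_data_provenance", "model_robustness_testing", "adversarial_input_monitoring"}
SECTOR_LAYERS = {
    "health":    {"patient_data_minimisation", "clinical_validation"},
    "banking":   {"transaction_fraud_controls"},
    "transport": {"functional_safety_review"},
}

def applicable_requirements(uses_ai: bool, sector: str) -> set:
    """Compose the requirement set for one system from the three layers."""
    reqs = set(BASE_LAYER)
    if uses_ai:
        reqs |= AI_LAYER
    reqs |= SECTOR_LAYERS.get(sector, set())
    return reqs

# Example: an AI-based diagnostic tool in the health sector.
for requirement in sorted(applicable_requirements(uses_ai=True, sector="health")):
    print(requirement)
```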

    There are competing international models; we need a harmonized AI certification scheme, and have put out something in cooperation with the WEF.

    Mr. Hiroshi Honjo (Japan) pointed to the OECD and NIST frameworks that help define the risks. Privacy is a key concern: LLMs get their data from somewhere, and the questions are where the data comes from and who owns it. As with the cross-border issue of data flows and cloud data, which laws and regulations apply to that data? There is also the risk of data being compromised, which calls for risk management.

    Harmonization: for a private company, a lack of harmonization carries high costs.

    Ms. Gallia Daor (OECD) noted that in 2019 the OECD was the first intergovernmental organisation to adopt AI principles. They describe what trustworthy AI is, with five principles and five recommendations, including the principle of robustness, security and safety throughout the AI lifecycle, together with a systematic risk-management approach.

    Since then, the OECD has provided tools to help countries implement them: an AI policy observatory with metrics, trends and more; work on gathering expertise, with over 400 experts from different countries and disciplines; and a catalogue of national AI tools.

    The OECD's work on digital security operates at a foundational level (principles for risk management), a strategic level for countries, a market level (addressing misaligned incentives), and a technical level (vulnerability treatment and good practices for disclosure and protection). At the intersection of the two fields, the focus needs to be both on the digital security of AI systems (e.g. data poisoning) and on how AI systems can be used to attack (e.g. generative AI used for large-scale attacks).

    Fragmentation is a problem; international organisations can help in that regard by mapping different standards and frameworks, finding commonalities, convening different stakeholders, and advancing metrics and measurements.

    Daniel Loevenich (Germany) said Germany approaches AI from the EU perspective. The EU standards organisations are doing a good job on the AI Act standardization request, and Germany looks forward to implementing procedures and infrastructures based on its conformity assessment.

    Technical systems are addressed differently: embedded AI is handled with engineering analysis, while in a distributed IT system there are special AI components or modules (e.g. cloud-based services) that need to be looked at as part of supply chain security. This is done by mapping application- and sector-based risks, which may be regulated by standards, down to technical requirements for AI modules. Many stakeholders are competent and responsible for addressing these risks.

    The overall issue is to build a uniform AI evaluation and conformity assessment framework. This is a European approach, and it is the key issue in the AI standardization roadmap. So, what do we do next?

    Based on existing cyber conformity assessment infrastructure, we try to address these special AI risks as an extension to existing frameworks.

    We want to promote the use of technologies; we do not want to prescribe, and prefer to recommend.

    Standards are good because they give companies flexibility. He offered three schools of thought:

    1. Technical (and sector-agnostic); 2. Sector-specific; 3. Values-based.

    IGF 2023 Lightning Talk #125 mCitizen as a digital assistant of every citizen

    Updated:
    Data Governance & Trust
    Key Takeaways:

    1. mObywatel 2.0 is an evolution of the well-known mObywatel application: an evolution from the previous concept of a digital wallet for documents towards services, with a significant change in functionality and purpose.


    2. Transition to Agile (Scrum) Methodology: The organization recognized the need for a change in their project management approach and shifted from the traditional waterfall method to the agile Scrum methodology. This transition was driven by the need to respond quickly to changing business requirements, involve stakeholders, and engage citizens in the development process.

    Calls to Action

    1. Prioritize the evolution and adaptation of existing digital services to meet citizens' changing needs and expectations. At the same time, ensure compliance with European regulatory standards to simplify data exchange. By implementing new functionalities and goals, you can significantly increase the value provided to users.


    2. Build a Skilled and Diverse Team: Invest in assembling a team with the right mix of skills and expertise required for your projects. A well-rounded team can contribute to the success of your projects.

    Session Report

    Report

    The initial part of the presentation covered the core concepts of the new app version and the fresh identity document.

    The objective is for the mObywatel application to become the citizen's primary point of contact with the government, effectively functioning as a digital assistant. The new application version is a response to citizens' needs. Every service and feature underwent usability testing, and we continued to make changes and enhancements until reaching version 2.0. The most significant change involves introducing a completely new identity document known as mDowód.

    New Digital ID card

    mDowód is an entirely new electronic identity document that differs from a standard ID card in terms of its series and number, date of issue, and expiration date. Thanks to the new law, it can be used to confirm identity in almost every setting, for instance at the office, in court, at the clinic, at the post office, or with a notary.

    Document Verification

    The first method uses device-to-device verification, relying on QR codes and cryptography to enhance security. The second involves visual verification on mobile phones, integrating dynamic elements like flags and dates to prevent screenshot fraud. The third is designed for system integration and is primarily focused on meeting the requirements of financial institutions, improving the accessibility of remote verification processes.
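
    To make the device-to-device method more concrete, below is a minimal, hypothetical sketch (an illustration only, not the actual mObywatel or mDowód protocol; the field names and the five-minute validity window are assumptions). The issuing authority signs a document payload together with a short expiry, the holder's phone carries the signed payload in a QR code, and the verifier's device checks the signature and the expiry offline against the issuer's public key.

```python
# Hypothetical sketch of QR-based, device-to-device document verification
# (illustrative only; not the actual mObywatel/mDowod protocol).
import base64
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign a document payload with a short validity window.
issuer_key = Ed25519PrivateKey.generate()      # in practice, a long-lived issuer key
issuer_pub = issuer_key.public_key()           # distributed with the verifier app

payload = json.dumps({
    "doc_type": "mDowod",                      # illustrative field names
    "series_number": "ABC123456",
    "expires_at": int(time.time()) + 300,      # QR content valid for five minutes
}, sort_keys=True).encode()

signature = issuer_key.sign(payload)
qr_content = base64.b64encode(payload) + b"." + base64.b64encode(signature)

# Verifier side: decode the QR content, then check signature and expiry offline.
def verify(qr_data: bytes, public_key) -> bool:
    doc_b64, _, sig_b64 = qr_data.partition(b".")
    doc = base64.b64decode(doc_b64)
    try:
        public_key.verify(base64.b64decode(sig_b64), doc)  # raises if tampered with
    except InvalidSignature:
        return False
    return json.loads(doc)["expires_at"] > time.time()

print(verify(qr_content, issuer_pub))          # True for a fresh, untampered payload
```

    In such a design, the dynamic visual elements and the institution-facing integration described above would sit on top of a signed-payload exchange of this kind.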

    The second section of the presentation elaborated on the operational methods and the shift towards agile methodologies.

    Why is Agile used in the public sector?

    A flexible approach was required to quickly adapt to changing business needs. In this Agile framework, stakeholders were involved, and regular feedback was obtained in short cycles. This approach helped in keeping a close eye on progress and making the project responsive to stakeholder needs. Emphasis was placed on transparency, allowing citizens to observe the work in real-time. This demonstrated a commitment to openness, responsiveness, and delivering valuable services to the public.

    Change in the release management process

    A change in the release management process was implemented for greater efficiency and responsiveness. The release process for both the mobile apps and backend services was automated, allowing 30 to 40 versions of the mobile apps to be built and tested every day. A new application version with new features and improvements was released to citizens every two weeks.

    Involving citizens

    The input of citizens is highly valued and is an integral part of product development. Comprehensive research is conducted on products and features, including usability testing, where products are actively used to identify issues and enhance the user experience. In-depth interviews are conducted to gain a deeper understanding of user opinions and experiences. Observations of user behavior are used to identify patterns and issues. Focus groups are organized to engage in moderated discussions with users to gather a wide range of opinions and ideas about products and services. All of these methods ensure that citizen feedback is prioritized, and continuous improvements are made to their offerings.

    IGF 2023 Open Forum #166 The African Union Approach on Data Governance

    Updated:
    Data Governance & Trust
    Key Takeaways:

    The AU’s Data Policy Framework paves the way for a common continental approach for realising the strategic value of data for all Africans, while simultaneously shaping continental debates about more equitable data governance practices that support sustainable development in accordance with the SDGs. As such, African countries can use the common agenda provided by the Framework to more actively participate in and shape global discussions about data governance.


    Given the potential significance of cross-border data flows and digital economies for Africa’s Continental Free Trade Area (AfCFTA), the implementation of the AU’s Data Policy Framework at national levels is crucial for ensuring that African countries can reap the benefits of digitisation and datafication, while mitigating the risks that accompany these processes. As such, there is a need for the AfCFTA Secretariat to take the Framework into account in the ongoing negotiations of the Digital Trade Protocol.

    Calls to Action

    Enable the development of more relevant capacity-building exercises so that policymakers, regulators, civil society, the private sector and other stakeholders can participate meaningfully in global, regional, and national discussions and deliberations that will shape the future global data governance landscape.


    Engage and encourage relevant stakeholders including policy makers at continental, regional and national levels, to support African countries in the implementation and domestication of the AU’s Data Policy Framework, as appropriate to local needs and contexts.

    Session Report

    Session Report

    Open Forum 166: The African Approach on Data Governance

    Date:                   11th October 2023

    Time:                  10:15 – 11:15 am

    Moderator:         Alison Gillwald (Research ICT Africa)

    Reported by:      Paul Kithinji (GIZ)

    Name of Panellists

    • Alison Gillwald, Executive Director of Research ICT Africa (RIA)
    • Souhila Amazouz, Senior ICT Policy Officer, African Union (AU) Commission
    • Alexander Ezenagu, Director, Continental Free Trade Agreement, AfCFTA Policy and Development Centre
    • Trudi Hartzenberg, Executive Director, Trade Law Centre (TRALAC)
    • Liping Zhang, Chief, Science and Technology and Innovation Policy Section, UNCTAD
    • Paul Baker, CEO, International Economics
    • Martin Wimmer, Director-General, Development Policy Issues, German Federal Ministry for Economic Cooperation and Development

    The purpose of this open forum was to discuss the African Union’s Data Policy Framework (DPF), including its implementation. The purpose of the Framework, as introduced by the session moderator, Alison Gillwald, is to enable all AU Member States and their citizens to realise the benefits of data as a strategic asset. The DPF should, panellists pointed out, be read in conjunction with developments pertaining to the African Continental Free Trade Agreement (AfCFTA) and its protocols relevant to digital trade that are currently under negotiation.

    Following a brief introduction by the onsite moderator, Souhila Amazouz, Senior ICT Policy Officer at the AU Commission, provided relevant background to and contextualised the development of the DPF. She explained that the Framework aims to set the priorities, vision and principles with regard to data in order for Africans to harness its transformative potential. To do so, the DPF provides for certain principles including (but not limited to) trust, fairness, accountability, and cooperation among all African member states. The Framework further calls for investments in relevant digital infrastructure to promote connectivity, the establishment of legal and regulatory frameworks, and the development of institutional arrangements. Now in its implementation phase, the DPF is supported by an Implementation Plan which has been validated by Member States and includes a Capacity Self-Assessment Tool to help countries gauge their various levels of readiness as well as identify the support they need for successful domestication of the Framework. She explained that the AU Commission has also developed Guidelines for the Integration of Data Governance in the AfCFTA Digital Trade Protocol this year as part of its efforts to promote the harmonisation of data governance provisions at a continental level. These guidelines are now at the disposal of Member States and negotiators engaged in the development of the AfCFTA Digital Trade Protocol and its annexures.

    Next, Trudi Hartzenberg, the Executive Director of the Trade Law Centre (TRALAC), provided an update on the status of the ongoing negotiations of the AfCFTA Digital Trade Protocol. She explained that digital trade became a priority of the AfCFTA negotiators in 2021. While their discussions were initially focused on e-commerce, they have since expanded to cover digital trade aspects more generally. She explained that the draft Digital Trade Protocol currently being developed by the Committee on Digital Trade is, as a result, exceptionally comprehensive and addresses a multitude of aspects, including data governance. It also considers the uneven levels of development of policy, laws, and relevant institutions across the continent. This is reflected by chapters on market access and the treatment of digital products, the facilitation of digital trade, the broader data governance agenda and business and consumer trust, amongst other topics. In terms of next steps, she explained that the draft Protocol is expected to be reviewed by senior trade officials, who will present provisions to the Council of Trade Ministers at the end of October 2023. It is expected that the negotiations shall culminate in adoption by the AU Assembly later this year. She concluded by pointing out that the adoption of the Digital Trade Protocol is closely related to and should be read in conjunction with the other policy frameworks, including the DPF, being developed or implemented with the aim of establishing a digital single market in Africa.

    Paul Baker, the founder and CEO of International Economics and the Chairman of the African Trade Foundation, subsequently elaborated on his role of assisting the AU Commission in developing Guidelines for the Integration of Data Governance in AfCFTA Digital Trade Protocol. He reiterated the importance of data governance in the realisation of the AfCFTA, and explained that under the AU Commission leadership, a set of useful Guidelines have been developed to ensure that data governance features appropriately in the negotiations of the Digital Trade protocol. The Guidelines seek to offer model clauses for use by negotiators, taking into consideration global trends, best practices and similar provisions in trade agreements at continental, regional and country levels. The Guidelines also address cross-border data transfers, the protection of personal data, open data, interoperability across jurisdictions, inclusivity and other special considerations for countries that are underdeveloped.

    Next, Alexander Ezenagu, Director of the AfCFTA Policy and Development Centre, proposed a closer look at levels of development as an indicator of data governance readiness in Africa. He argued that the penetration and adoption of technologies such as mobile telephony and Internet access help to indicate the availability of consumer data. He posited that for digital trade to flourish, there must be considerable access to the Internet, mobile telephony, and other similar technologies, coupled with a conducive regulatory environment that enables technology adoption. Additionally, promoting awareness of data governance principles is critical to promoting the DPF’s adoption.

    Approaching the discussion from a global, intergovernmental perspective, Liping Zhang, Chief of UNCTAD’s Science and Technology and Innovation Policy Section, briefly elaborated on the organisation’s work on data governance and commended the AU Commission for its work on helping to harmonise data governance approaches in Africa. She noted that the agenda set by the DPF will also enable African Member States to participate in ongoing discussions in different global policy fora as far as data governance is concerned. She also described UNCTAD’s work on attempting to pave the way towards a more coordinated approach between processes aimed at addressing data governance within diverse systems and noted that a significant challenge is a lack of capacity building that can deny developing countries opportunities to participate equally in data governance discussions that have global repercussions.

    In addressing global challenges facing the implementation of the DPF, Martin Wimmer, Director-General for Development Policy Issues at the German Federal Ministry for Economic Cooperation and Development (BMZ), reiterated the German government’s commitment to support the AU Commission in implementing the DPF in a number of African countries, by both technical and financial means.

    In conclusion, it was established that the Framework paves the way for a common continental approach for realising the strategic value of data for all Africans, while simultaneously shaping continental debates about more equitable data governance. Additionally, given the potential significance of cross-border data flows and digital economies for the AfCFTA, the implementation of the DPF at national levels is crucial for ensuring that African countries can reap the potential benefits of digitalisation and datafication. However, for this to happen, there must be an enabling environment with relevant capacity-building exercises that equip policymakers, regulators, civil society, the private sector and other stakeholders to participate meaningfully in global, regional, and national discussions and deliberations. Further to this, there should be increased engagement with and encouragement of relevant stakeholders, including policy makers at continental, regional and national levels, in their efforts to support African countries in the implementation and domestication of the DPF, as appropriate to local needs and contexts.

    IGF 2023 Day 0 Event #193 How to build trust in user-centric digital public services

    Updated:
    Data Governance & Trust
    Key Takeaways:

    1. Trust is essential for the uptake of digital public services. Transparency, user-centricity, data security and citizen engagement throughout the development and deployment of digital public services play key roles in building trust. 2. There is a trade-off between user-friendliness and data privacy, also for AI implementation. AI has the potential to improve services, e.g. through automated translation.

    Calls to Action

    1. Digital public services enable citizens to communicate with their governments directly and efficiently. Governments need to recognize this opportunity but also invest in capacity building, upskilling and citizen engagement. 2. One of the main challenges that need to be addressed is the siloed or fragmented model of service delivery; instead, there should be a whole-of-government approach to ensure reliability and user-centricity.

    Session Report

    IGF 2023 Day 0 Session: How to build trust in user-centric digital public services

    IGF Sub-theme: Data Governance & Trust

    Date and time: Sunday, 8 October 2023 | 14:45 to 15:45 JST

    Workshop Room 9, Kyoto International Conference Centre

    Session Report

    The roundtable discussion titled “How to build trust in user-centric digital public services” focused on the role that trust plays in the delivery of digital public services (DPS). Government trustworthiness and robust data governance were identified as prerequisites for building trust among citizens. The speakers in this session represented a diverse range of countries that have made remarkable progress in digital government in recent years and shared their lessons learned.

    First, panellists stated what they believe to be the biggest challenge to building trust in digital government.

    Gautham Ravichander, Head of Strategy at eGovernment Foundation in India stressed the need for reliable digital public services. DPS and transparency should be a priority for politicians worldwide. A seamless "Phygital" approach, integrating online and on-the-ground services, will ensure a consistent experience for all. According to Dr. Rudolf Gridl, Director-General for Central Services of the Federal Ministry for Digital and Transport (BMDV), services must strike a balance between being user-friendly, reliable and customizable while also ensuring data privacy. He believes that user-friendliness is crucial for users to adopt and use the services effectively. Valeriya Ionan, Deputy Minister for Eurointegration at the Ukrainian Ministry for Digital Transformation, defined trust as confidence in the appropriateness of the service. She reinforced that institutional trust is crucial and therefore security is an essential aspect of this discussion. For Luanna Roncaratti, Deputy Secretary of Digital Government, Ministry for Management and Innovation in Public Services in Brazil, one of the biggest challenges in creating trust is a siloed and fragmented model of providing public services, a notion that originates from a traditional public model of governance organisation rather than what citizens want.

    The session was moderated on site by Christopher Newman and online by Sascha Nies from Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH.

    Following the initial statements, Gautham Ravichander, whose organisation supports governments in building platforms for better service delivery, shared what he and his organisation have learned from working on digitalization with various levels of government. Mr Ravichander stated that it is not only the software that has to work; there also needs to be a focus on capacity building. It is crucial to make and fulfil promises: public services must be dependable, and timelines need to be communicated clearly to citizens. He further emphasized that education and training are essential. In terms of security and privacy, data collection needs to be minimised, and files should be queried by API without the intervention of a human administrator.
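
    As a hedged illustration of that last point (the endpoint, service identifiers and field scopes below are invented for the example, not any government's actual API), a minimal data-minimising record API might look like this: each calling service is scoped to specific fields, so records are retrieved programmatically and no administrator ever handles the file.

```python
# Minimal, hypothetical sketch of API-based record access with data minimisation.
# Endpoint, service identifiers and field scopes are assumptions for illustration.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

RECORDS = {"C-1001": {"name": "A. Citizen", "address": "1 Example Street", "tax_status": "filed"}}
ALLOWED_FIELDS = {
    "tax-service": {"tax_status"},          # each service only sees what it needs
    "postal-service": {"name", "address"},
}

@app.route("/records/<record_id>")
def get_record(record_id: str):
    caller = request.headers.get("X-Service-Id")   # stand-in for real authentication
    if caller not in ALLOWED_FIELDS:
        abort(403)
    record = RECORDS.get(record_id)
    if record is None:
        abort(404)
    # Return only the fields the calling service is scoped for: no human in the
    # loop, and no more data than the query requires.
    return jsonify({k: v for k, v in record.items() if k in ALLOWED_FIELDS[caller]})

if __name__ == "__main__":
    app.run()
```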

    Dr. Gridl stated that Germany values data protection, with citizens highly concerned about it. However, citizens tend to be willing to share data with private companies but not with the government. Building trust through secure data handling and specific, transparent purposes is essential. Recent debates in Germany over a digital public transport ticket also highlight the importance of digital inclusion and data protection. Data protection is vital, but it must be balanced with user-friendliness to gain citizens' trust.

    Following this, Valeriya Ionan was asked about Ukraine’s “state in a smartphone” app, Diia, which allows citizens to carry digital documents like their driver’s license or their passport on their phones. She stated that Ukraine is a frontrunner in the development and use of digital passports and envisions new, convenient services. Her department introduced Diia as an integrated application with multiple digital features before the war and quickly added services post-war, such as integrating TV and radio, tracking internally displaced people, and offering financial aid for damage due to Russian missiles. This innovation required technical expertise and especially trust-building. Ukraine boasts the world's fastest business registration and other quick services, emphasising the need for basic digital skills through education. Regular communication, citizen involvement through beta testing, and user-friendliness are pivotal for trust, not just the product itself.

    Next, Luana Roncaratti discussed Brazil's focus on citizen-centricity, which is a key principle of its Digital Government Strategy. The country strives for digital inclusion and a user-centric approach, using surveys and a holistic approach to identify pain points in usage. They use simple language and clear design, offer a user feedback API, and have VILIBRA, a sign language translation service. The focus remains on providing proactive services to ensure inclusion for all.

    Finally, the panel was asked about the possibilities and risks of Artificial Intelligence (AI) in the provision of DPS. Luana Roncaratti argued that four areas of action are needed to generate trust in DPS: transparency, the possibility to request a review when citizens feel they are being discriminated against by AI, investments in data protection, and the assessment of risks. Valeriya Ionan stated the need to publish recommendations on balancing regulation and innovation. Dr. Gridl argued that trust is achieved by implementing AI step by step. Gautham Ravichander highlighted the opportunities AI offers to change DPS for the better, e.g. for translation in a country like India with many languages, or for medical services.

    Then the audience online and in the room were invited to ask questions. This round highlighted further aspects that should be considered when discussing trust in DPS, such as the accessibility of DPS for groups such as immigrants, Ukraine’s data-in-transit approach as opposed to storing citizen data permanently, cybersecurity, and the role of foreign cloud computing servers.

    In summary, the event emphasised the essential role of trust in establishing and maintaining DPS. Key principles for trust-building mentioned in the discussion included transparency and user-centricity. The lessons learned spanned practical data protection measures, communicating and educating citizens, also on risk factors including AI, and effective cybersecurity considerations in the context of cloud computing.

    IGF 2023 Open Forum #168 Creating digital public infrastructure that empowers people

    Updated:
    Key Takeaways:

    There needs to be a holistic approach to governing digital public infrastructure that includes the public sector, the private sector and civil society. DPIs are not only about building ecosystems but about creating the conditions for stakeholders to work together efficiently.


    Governments face similar challenges when it comes to DPI. Countries should learn from each other on how to govern, use and implement DPI solutions, and focus on knowledge sharing.

    Calls to Action

    We need to develop and use a common understanding of DPI, that also includes the demand side – what do citizens actually want?


    Some DPIs are common goods, and their governance is the responsibility of the international community and international approaches should be aligned accordingly.

    Session Report

    IGF 2023 Open Forum: Creating Digital Public Infrastructure (DPI) that empowers people

    IGF Topic: Harmonising Global Digital Infrastructure

    IGF Sub-theme: Global Digital Governance & Cooperation

    Date and Time Wednesday, 11 October 2023 | 17:45 to 18:45 JST

    Workshop Room 7, Kyoto International Conference Centre

    Session Report

    The Open Forum titled “Creating Digital Public Infrastructure (DPI) that empowers people” posed the question of how different actors can cooperate to realize DPI in different countries and contexts. The panelists shared insights on how their countries implement DPI and how peer learning can improve interoperability. The session aimed to exchange lessons from different countries on DPI implementation and international cooperation in the field.

    The session was opened by Dr. Irina Soeffky, Deputy Director-General for National, European and International Digital Policy of the Federal Ministry for Digital and Transport, Germany (BMDV). She acknowledged the role of India’s G20 presidency in bringing the topic of DPIs to the table and shared that some of the major initiatives in Germany are the creation of an ecosystem for mobility data through public-private partnerships and the GovStack initiative, which promotes building blocks for the digital transformation of government.

    The session was moderated on site by Aishwarya Salvi and online by Torge Wolters of Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH. Each speaker responded to the following questions:

    1. Considering the diverse approaches of countries to DPI, we recognise that DPI is an evolving concept that may not be limited to sets of digital systems, actors or processes and implementation solutions could be tailored to specific country contexts. What role does each actor play in the DPI ecosystem of your country and how does the government strike a balance between differing needs and interests of different stakeholders?
    2. From your country’s experience, what are the lessons that other countries could learn to build DPI that empowers people? And what is the role of international cooperation in fostering the interoperability of DPIs and promoting peer learning?

    Dr. Pramod Varma, CTO of EkStep Foundation in India, stated that in e-government projects, there's a shift towards emphasizing the demand side, with a focus on DPI as essential building blocks. Rather than creating new infrastructure, the approach is minimalist, leveraging existing systems. It is important to understand the contexts to create long-lasting solutions. Examples like Unified Payment Interface (UPI) and Google Pay demonstrate the effectiveness of this approach. He stressed the need for solutions that are tailored and specific. Overall, he considers the demand side ecosystem more critical than the supply side in evolving digital governance.

    Mark Irura, technical advisor at GIZ Kenya, addressed the role of the private sector and civil society as well, citing the problems arising from a lack of communication between the demand and supply sides. In his view, a long-term strategy is important for implementing digital solutions, as they can quickly become legacy systems. To improve governance, he suggested raising awareness of data handling among citizens, strengthening procurement skills for digital public goods, and investing in robust infrastructure to ensure efficient resource allocation and prevent the rapid obsolescence of tools.

    Adriana Groh, co-founder of the Sovereign Tech Fund, stated that base software infrastructure also needs to be protected. A large portion of the base software used to develop other software is in critical shape and maintained by only a small number of people. As a critical basis for DPI, this software needs to be governed as a common good through an international effort.

    Valeriya Ionan, Deputy Minister for Eurointegration at the Ukrainian Ministry for Digital Transformation, highlighted that governments face common challenges, such as securing data protection, interoperability, and enhancing digital literacy. Solutions already exist and work well; Estonia's X-Road, the country’s data infrastructure, is a notable example. To enhance global digital solutions, there should be more alignment and knowledge sharing. Sharing these experiences bilaterally or in international fora would be valuable. Further, access to digital transformation education is essential to create new opportunities. Cooperation and networking are key to leveraging existing information and achieving more together.

    Finally, the audience online and in the room were invited to ask questions. This round highlighted further aspects that should be considered for DPI governance, such as ensuring citizens' trust, skills and understanding of what DPI is and the necessity to learn from other countries' mistakes and success stories.

    In summary, this session highlighted that there needs to be a common understanding of what Digital Public Infrastructure means. Further, holistic approaches are needed that respect the demand side, in order to see what communities actually need, together with institutional changes to ensure effective implementation.

    IGF 2023 Lightning Talk #141 The new European toolbox for cybersecurity regulation

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The European cybersecurity regulation landscape is diverse but seeks to consolidate and to close regulatory blind spots.


    International trade law also interacts with cybersecurity regulation.

    Calls to Action

    Assess the human factor of IT security regulation.


    Don't forget the end users.

    Session Report

    The lightning talk addressed recent regulatory efforts by the European Union concerning cybersecurity, covering the NIS-2 Directive, the Cybersecurity Act, the Cyber Resilience Act and the AI Act.
    It elaborated on the core methods and tools through which these regulations aim to foster IT security, placing particular emphasis on the central role of risk management.
    The talk also further elaborated on two selected challenges and issues with cybersecurity regulation.
    First, it conducted a case study of the IT security requirements and regulations of Digital Identities. The case study demonstrated the complex interdependencies of different regulations, stakeholders (users, private sector, government), as well as the technical infrastructure. This example was used to emphasize the difficulties of effectively ensuring or fostering Cybersecurity in a complex environment.
    The second issue demonstrated was the subjectivity of risk management. It was argued that the outcome of risk management, which under IT security law determines the actual technical protection measures to be implemented, relies heavily on the perspective of the risk-assessing entity. Current international efforts like the UN Cybercrime Convention were briefly discussed.
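
    One small, hypothetical worked example of that subjectivity (the 1-5 scales, the threshold and the two assessments are assumptions, not values from any standard): with the common likelihood-times-impact scoring, two assessors rating the same scenario only one step apart on each axis can land on opposite sides of the same treatment threshold, and therefore on different required technical measures.

```python
# Hypothetical illustration of how assessor perspective shifts risk-management outcomes.
# Scales (1-5) and the treatment threshold are assumptions, not taken from any standard.
def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact            # common qualitative scoring scheme

TREATMENT_THRESHOLD = 12                  # above this, extra technical measures are required

assessments = {
    "operator":  {"likelihood": 2, "impact": 4},   # sees the interface as rarely exposed
    "regulator": {"likelihood": 3, "impact": 5},   # assumes wider exposure and societal impact
}

for assessor, a in assessments.items():
    score = risk_score(a["likelihood"], a["impact"])
    print(f"{assessor}: score {score} -> extra measures required: {score > TREATMENT_THRESHOLD}")
# operator: score 8 -> extra measures required: False
# regulator: score 15 -> extra measures required: True
```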

    During the Discussion, several aspects and different perspectives were brought up by the audience:

    1. One participant raised the issue that the EU's market-focused regulatory efforts in particular do not consider interactions with international trade law. Because the CRA contains several product requirements, these might conflict with existing trade agreements. Future regulatory efforts should consider these interactions more.
    2. Another participant raised the issue that existing regulation mainly focuses on the security of devices but not on the people using them. Within risk management, this can lead to blind spots regarding risks for certain stakeholders. One example given was the safety of IoT devices in the context of domestic violence: if a person no longer has access to a victim's home but is still the account owner of the smart home system, they could use it to, for example, turn on the heating to demonstrate power. Because they are the rightful owner of the account, this would not, by definition, count as a security risk.
    3. Another participant pointed out the market effect of the CRA. As security is a quality of a good that is difficult for the average customer to assess, common requirements prevent market failure through information asymmetries.
    4. One more aspect that was briefly discussed was the security of critical components. A participant confirmed that the European legislator does consider these.

    IGF 2023 WS #422 Exploring Blockchain's Potential for Responsible Digital ID

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Blockchain is often used to incorporate trust and integrity into digital solutions; however, it is important to move away from techno-solutionism. We ought to steer away from building blockchain solutions that rely solely on the technology's intrinsic value. The use of technology should be guided by a clear understanding of why, where and how it will help solve societal issues.

    Calls to Action

    Create toolkits and ex-ante assessment frameworks to build responsible blockchain-based digital identity systems. It is imperative to have evaluation criteria built on the bedrock of human-rights-respecting principles to assess the need for and impact of building blockchain-based solutions. Leverage existing platforms and fora to convene multistakeholder discussions across borders and disciplines to build inclusive technology solutions.

    Session Report

     

    Joanne D’Cunha (Centre for Communication Governance), the moderator, explained that the workshop sought to understand the implications of incorporating technologies like blockchain into digital identity systems. The session explored whether blockchain could help address existing concerns and how it should be used to advance responsible digital identity systems. The first part of the workshop covered existing uses and challenges with digital identity and discussed the purpose and possible benefits of adopting blockchain in digital identity systems. 

    Harry Rolf (Tech Policy Design Centre) began by highlighting the importance of ensuring trust in the institutions administering digital identity systems. He explained that the concerns with emerging technology like blockchain are not solely related to the use of the technology itself; broader concerns associated with a lack of trust in the administrator of the technology should also be given proper consideration. He also stated that the approach to considering how blockchain could be used for digital identity should be both cautious and curious.

    Following this conversation, Nathan Paschoalini (Data Privacy Brasil) discussed the challenges with digital identity systems in Brazil, highlighting risks such as privacy violations, data breaches and exclusion. He explained that although the Brazilian government is experimenting with using blockchain for its digital identity systems to address some of the existing challenges, the technology in itself has limited ability to fix issues and prevent harms that stem from deep structural inequalities in societies.

    Diving deeper into the concerns with the use of digital identity systems, Mustafa Mahmoud (Namati, Citizenship Program) highlighted the realities of the implementation of digital identity systems in Kenya. He pointed to the issue of digital identity systems creating multiple identities that do not interact with each other. While the use of digital identity in itself is not an issue, the manner of implementation and the lack of access to such systems could create systemic exclusion.

    Kaliya Young (Self-Sovereign Identity Expert) provided a different perspective, highlighting that digital identity systems are not limited to providing legal identity. She explained that across the various forms of digital identity, it is important to ensure that users have agency over their digital identities. It is important to empower users with such autonomy, and one way to do this could be to let them share their digital identities through a decentralised system where there is no sole authority.

    Swati Punia (Centre for Communication Governance) supplemented Kaliya’s points and highlighted that incorporating blockchain into digital identity systems could help address certain gaps. She explained that while trust and integrity could be promoted through the use of technology like blockchain, it is often how, where and when the technology is deployed that determines whether it actually delivers trust and integrity.

    In the second part of the workshop, we examined more closely the use of blockchain with digital identity and how we can move towards having responsible digital identity systems. In expressing his thoughts about the societal implications of combining the technologies, Mustafa supported Swati’s point on examining the purpose of use of any technology.  He explained that  blockchain based digital identities could be used to aid in decentralisation of information across multiple institutional departments and to support authentication of only necessary identity information. However, he emphasised the need to consider how the technology will be inclusive, accessible and the potential use cases, as incorporation of technology such as blockchain alone would not be an adequate solution to existing societal challenges.

    Swati added that the development and deployment of blockchain technology in digital identity systems ought to be guided by clear principles and values such as security, user-control, privacy, equitable access, inclusion and sustainability. She explained that these factors could guide the creation of toolkits or ex-ante assessment frameworks to build responsible blockchain based digital identity systems. She also highlighted the need for an evaluation criteria built on the bedrock of human rights respecting principles.

    In discussing how some of the principles mentioned above could be embedded into blockchain based digital identity systems, Kaliya expressed the need for collaboration amongst all kinds of stakeholders. She highlighted the importance of leveraging existing platforms / fora to convene multi-stakeholder discussions across borders and disciplines to build inclusive technological solutions. Such discussions between key stakeholders from diverse social contexts is key for designing inclusive and sustainable solutions.  

    Nathan further added that there is a need for checks and balances in the use of digital identity systems. He highlighted that, to establish safeguards, it is crucial that digital identity systems are human-centred and rights-based in order to ensure inclusion, data protection and privacy. He further explained that the use of open standards and free and open source software should be encouraged to develop transparent and trustworthy blockchain-based digital identity solutions.

    Finally, Harry highlighted the importance of particularly ensuring tech developers are included in discussions of embedding principles, values and standards into the technology. He explained that it is crucial for all forms of stakeholders to be involved in the process of determining technological standards and that the creation of inclusive tech standards will help ensure responsible development and deployment of technology. 

    The session included interactive live polling with questions that aimed to capture audience understanding of the use of digital identity, to which most participants indicated general awareness. A few questions assessed audience perception of the benefits of blockchain-enabled digital identity systems and their potential to achieve positive outcomes and embed human-centric values; these received mixed responses from the audience. Further, in the Q&A session, we received questions on balancing the use of emerging technologies with digital literacy concerns, how to assess the need for blockchain in specific use cases, and more. One specific question was on how blockchain could be used to make parallel digital identity systems work. The panellists answered that blockchain could be useful for legal digital identity, especially when there are parallel government departments with interconnected data: through a decentralised ledger, the multiple databases of information could interact seamlessly.
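
    A purely illustrative sketch of the idea in that answer (an assumed toy design, not any country's system and not a production blockchain): each department anchors only a hash of its attestation on a shared append-only ledger, and another department can check that an attestation it receives matches what was anchored, without relying on a single central database.

```python
# Toy illustration of ledger-anchored attestations shared across departments
# (assumed design for illustration; not a production blockchain or any national system).
import hashlib
import json

ledger = []                                   # stand-in for a shared append-only ledger

def anchor(attestation: dict) -> str:
    """A department anchors the hash of its attestation on the shared ledger."""
    digest = hashlib.sha256(json.dumps(attestation, sort_keys=True).encode()).hexdigest()
    ledger.append(digest)
    return digest

def verify(attestation: dict) -> bool:
    """Another department checks a received attestation against the ledger."""
    digest = hashlib.sha256(json.dumps(attestation, sort_keys=True).encode()).hexdigest()
    return digest in ledger

# Department A anchors a civil-registry attestation about a citizen identifier.
record = {"citizen_id": "ID-2023-001", "attribute": "civil_status", "value": "registered"}
anchor(record)

# Department B receives the attestation out of band and verifies it against the ledger.
print(verify(record))                                  # True
print(verify({**record, "value": "tampered"}))         # False: any change breaks the hash
```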

    IGF 2023 WS #292 Revitalizing Universal Service Funds to Promote Inclusion

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    The session highlighted that disbursement levels of universal service funds are still very low. The good news is that the cost of building networks keeps getting lower. In Europe the narrative is mostly about the fair share debate between telcos and content providers. Some countries have a universal service obligation rather than a universal service fund.

    Calls to Action

    Need for transparency on how universal service funds are being spent - there are ongoing issues around transparency, impact and sustainability of some of the projects that the USFs undertake. There is a need to create a common language, evidence and bring together best practices.

    Session Report

    This workshop focused on the need to rethink universal service funds and how to use them for expanding connectivity.

    The moderators were Jane Coffin and Carlos Baca (online moderator), and the rapporteur was Senka Hadzic; all three were onsite. The workshop itself was hybrid.

    Onsite participants in this session were: 

    • Konstantinos Komaitis, non-resident fellow at the Lisbon Council & at the DFRL
    • Josephine Miliza, regional policy coordinator for Africa at APC
    • Sol Luca de Tena, connectivity solutions specialist at Giga - UNICEF

    Online participants were:

    • Nathalia Foditsch, director of international programs at Connect Humanity
    • Ben Matranga, managing partner at Connectivity Capital
    • Teddy Woodhouse, international policy manager at Ofcom

    It was a very diverse panel with speakers representing various stakeholder groups: civil society,  private sector, government, technical community and intergovernmental agencies.

    Some important aspects raised were challenges regarding the transparency and dormancy of universal service funds, the need for blended finance, and the need for regulators to be adaptive.

    The first speaker, Nathalia Foditsch, highlighted the very low disbursement rates of existing universal service funds. For example, Brazil’s universal service fund had been dormant for 20 years and only recently started being used for broadband expansion. However, multistakeholder participation is needed when deciding who is eligible to apply for funds.

    Konstantinos Komaitis reflected on the discussions around infrastructure happening in Europe, which centre on the fair share debate: telcos are requesting compensation from content providers (large traffic generators) for using their infrastructure. This model may affect the way the Internet is designed to operate.

    Ben Matranga noted that connecting the unconnected is not a technological problem, but rather a matter of coordination and access to capital. Connectivity Capital uses blended finance with private sector partners to expand access in remote parts of the world. The cost of building networks nowadays is extraordinarily lower than a decade ago.

    Teddy Woodhouse from Ofcom explained that the UK regulator has a universal service obligation, rather than a universal service fund. There is a nuance between the public policy role and the regulator’s role in fixing USFs. He also pointed out the importance of market structure: competition vs. monopoly.

    Josephine Miliza from APC brought up the process that took place in Kenya. Kenya is an example where there has been good collaboration between the regulator and civil society organisations. In the new strategy the regulator aims to establish 150 community networks across the country. Many African countries are looking at the Kenyan model: Malawi and Zimbabwe are seriously considering it. 

    Sol Luca de Tena explained how UNICEF Brazil and Giga joined forces to advocate for the reform of the Brazilian USF, which had accumulated over $24 billion over 20 years. Giga also establishes processes to support procurement and management of high volumes of contracts, which are also necessary for efficient use of USFs.

    There were several comments and questions raised on site, which were addressed by the panelists.

    The importance of collecting evidence and best practices was a common thread throughout this session, as was the need for transparency about how universal service funds are being spent. Jurisdictions often copy each other's policies and regulations, which can result in greater inclusion (as with Malawi and Zimbabwe looking at the Kenyan example of expanding the pool of beneficiaries to include community networks), but can also lead to exclusion (if countries like India and South Korea start following the European example in the fair share debate).

    IGF 2023 Lightning Talk #38 Place and role of women in cyberspace

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:
    Women and girls need to be supported from the very early stages of their lives to pursue education and jobs in the IT sector. This kind of support should be given both by the state and by companies. Professionalism, knowledge and experience have no gender. Society will profit if women join cyberspace, in every aspect of it, with equal rights and on an equal level.
    Session Report

    Report:

    Lightning Talk #38: Place and role of women in cyberspace

    During the lightning talk the subject was presented according to the background paper. The key issues from the presentation were the barriers that limit women's use of the internet:

    • Pay gap

    Given the global gender pay gap, women tend to have lower incomes compared to men, making it harder for them to allocate a significant portion of their earnings to purchasing smartphones or paying for internet access.

    • Accessibility

    The cost of an internet connection is a significant barrier for women in accessing the internet. In many low- and middle-income nations, the cost of internet services remains relatively high compared to average incomes. As a result, women may struggle to afford internet services, hindering their ability to go online and participate in the digital world.

    • Lack of devices:

    Women globally possess fewer devices, such as smartphones, compared to men. One contributing factor is the higher cost of these devices, which can make them less affordable for women in low- and middle-income countries. Research indicates that the cheapest new smartphones are still relatively costly for many individuals, with an average price of $104 per unit. The disparity in device ownership has implications for internet usage patterns. For example, a smaller percentage of female mobile phone owners in these regions have access to smartphones compared to basic calling phones. This technological barrier restricts the range of activities women can engage in while online, limiting their ability to fully participate in cyberspace.

    • Privacy and security:

    Women tend to be more concerned about online security and privacy compared to men. In many countries, women express greater apprehension regarding the protection of their personal data. These concerns may be rooted in cultural factors, gender-based violence, or experiences of online harassment and stalking. As a result, women may be more cautious about sharing personal information and engaging in certain online activities, which can impede their ability to freely navigate cyberspace and fully utilize its resources.

    • Education and skills:

    Disparities in educational attainment contribute to keeping women offline. Unfortunately, a gender education gap persists globally, with adult men having higher rates of educational attainment than adult women. This disparity extends into the digital age, as women may have less exposure to digital skills training within traditional educational settings. The lack of educational opportunities and digital skills training hampers women's ability to navigate and utilize online platforms effectively, creating a digital divide.

    In the second part of the presentation, some Polish solutions and projects that counteract these gaps were presented.

    • The campaigns Girls for Polytechnics! and Girls to Strict! form a pioneering and, at the same time, the largest project promoting technical, engineering and science (STEM) studies among young women in Poland and Central and Eastern Europe. The most important goal of these programs is to break stereotypes in thinking and to encourage secondary school students to take up technical and science studies.
    • NEW YOU in IT - Professionalism has no gender - the first edition of this four-month program, conducted by the Central Information Technology Center (associated with the Ministry of Digital Affairs), was aimed at providing free education to participants and motivating women who want to join the IT industry.
    • The #CyberStrong media campaign, which tells the stories of women who have achieved success in the IT industry and in managing the state's cybersphere.
    • Poland has also introduced new regulations into the Labor Code, guaranteeing parents and caretakers additional holidays and leave, access to flexible forms of employment, including working from home, as well as extra protection from dismissal.
    • Close to 300 Ukrainian women who arrived in Poland as a result of the war have expressed interest in participating in HERoes in IT, a free career transition program that educates them in the field of manual testing. The record number of applications confirms that Ukrainian women are eager to develop their digital skills, even to the point of considering a complete career change.

    After the presentation, comments were raised by participants; they were very positive. Moreover, some participants shared information about similar programs in their countries.

    IGF 2023 Open Forum #96 How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    The need to address the capacity of civil society organisations to meaningfully engage in international multistakeholder forums, especially in standard setting, in order to promote human-centric digital policy and technologies.


    Address the underrepresentation of voices from the Global South in the global internet governance policy processes and preserve IGF as a space for civil society engagement.

    Calls to Action

    Provide long-term support to civil society to engage in international multistakeholder fora.


    Create space for civil society organisations to create impactful networks, align their actions and strategies, and build on each other's expertise to meaningfully impact policy processes.

    Session Report

    IGF 2023 Open Forum #96 

    How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums

     

    1. Event Description: 

     https://intgovforum.org/en/content/igf-2023-open-forum-96-how-to-enhance-participation-and-cooperation-of-csos-inwith

     Background paper

     2. When and where:

     In-person: 14:45-16:15 JST (05:45-07:15 UTC / 07:45-09:15 CET), Workshop room 7, Room K

     3. Speakers: 

     Mr. Peter Marien, Team Leader Digital Governance, Unit F5 – Science, Technology, Innovation and Digitalisation, European Commission, Directorate-General for International Partnerships (DG INTPA)

    Mrs. Tereza Horejsova, GFCE Outreach Manager, IGF MAG member

    Mr. Viktor Kapiyo, Member of the Board of Trustees, Kenya ICT Action Network (KICTANet)

    Mrs. Marlena Wisniak, Senior Advisor, Digital Rights, European Center for Not-for-Profit Law Stichting (ECNL)

    Moderated by: Mrs. Pavlina Ittelson, Executive Director, Diplo US

    Online moderation: Mrs. Shita Laksmi, DiploFoundation

     

    4. Participants:

     In-person: 20, M/F ratio 40/60 (estimate)

    Online: 6, M/F ratio 50/50

    5. Content: 

     The session aimed to explore ways to improve and enhance the engagement of civil society organisations (CSOs) in multistakeholder forums, identify the challenges they face, and discuss strategies for bringing the perspective of Global South CSOs into international multistakeholder forums, including ITU, IETF, and ICANN standardisation processes.

    The Civil Society Alliances for Digital Empowerment (CADE) project led by DiploFoundation in partnership with nine global organisations was introduced. Funded by the European Commission, DG INTPA, the CADE project aims to:

    • Increase the capacity of CSOs to engage in global multistakeholder internet governance processes.
    • Strengthen cooperation between CSOs from the Global North and Global South.
    • Advocate for enhanced CSO participation in international internet governance forums.

    Speakers' insights:

     The representative of the European Commission, Peter Marien, highlighted the EU's interest in digital governance with a human-centred development approach. He emphasised the need for a multilateral and multistakeholder approach in global digital governance, noting the current limited participation of CSOs in these processes and citing challenges such as lack of capacity and know-how. He stressed the importance of aligning discussions with the UN Charter of Human Rights. Current collaborations between the EU and ITU, IGF, UNESCO and OHCHR were also mentioned.

     The IGF MAG member, Tereza Horejsova, emphasised the essential role of CSOs in representing individual interests in internet governance. She also advocated for a culture of multistakeholderism to improve policy-making and underlined the need to preserve CSOs' independence in the internet governance process. She concluded by highlighting the unique opportunities for CSOs in the IGF.

     The European Center for Not-for-Profit Law Stichting (ECNL) representative, Marlena Wisniak, discussed the importance of meaningful multistakeholder engagement, calling for proper resourcing and capacity-building for CSOs, including marginalised communities. She addressed power imbalances between stakeholders and the need for mechanisms for safe participation, emphasising that CSO participation increases transparency and accountability.

     The representative of the Kenya ICT Action Network (KICTANet), Viktor Kapiyo, called attention to the multistakeholder approach needed to ensure meaningful outcomes for local communities, pointing out the bureaucratic and financial hurdles Global South organisations face when participating in global processes. He highlighted the need for improved technical capacity and resources to bring unique local perspectives into global processes.

     Discussions: 

     The panellists discussed the importance of capacity-building, resources, and know-how for CSOs to engage effectively in international governance bodies. They shared the view that collaboration between CSOs from the Global North and Global South could be an enabler for effective participation from the latter. Local partnerships were also highlighted as a means to bring unnoticed issues to light. The significance of coalition and collaborative approaches was underscored, along with the need for better coordination among CSOs themselves.

     Challenges identified included the fragmentation of forums discussing internet governance, a lack of capacity among CSOs and governments, particularly in understanding the human rights impact of technological advancements, and the dominance of strictly technical spaces by Global North organisations.

    Recommendations:

    • Build trusted relationships with legislators, demonstrate expertise, and anticipate counter-arguments when presenting CSO views.
    • Enhance coordination among donors to prevent overwhelming CSOs with requirements and improve access to information about formal rules and practices for participation.
    • Foster better coordination among CSOs themselves.
    • Encourage funders to appreciate the work dynamics of Global South organisations.

    The CADE project was introduced as a promising initiative to address these challenges and foster meaningful CSO engagement in global internet governance.

     6. High-level summary in one sentence: 

     Civil society organisations, especially those from the Global South, face barriers to entry into global multistakeholder internet governance policy processes. They need increased capacity building, greater transparency of policy processes, and spaces that allow for coordination and network building in order to engage impactfully in global multistakeholder internet governance.

    IGF 2023 WS #69 Manga Culture & Internet Governance-The Fight Against Piracy

    Updated:
    Data Governance & Trust
    Key Takeaways:

    It was encouraging to know that so many people, not only members of the publishing community, are aware of manga piracy. It is necessary for stakeholders in many fields to cooperate.

    Calls to Action

    If a language is not supported at the time of authorized distribution, manga fans who only understand that language will inevitably have to rely on pirated copies. Manga publishers should actively promote multilingual support for authorized editions as part of their efforts to fight against piracy.

    Session Report

    "Manga Culture & Internet Governance - Fight Against Piracy"

    The following five speakers presented their arguments and proposals on the topic from their respective standpoints.

     

    • Moto Hagio, Manga artist
    • Nicole Rousmaniere, Research Director, Sainsbury Institute for the Study of Japanese Arts and Cultures, UEA
    • Andy Nakatani, Senior Director of Online Manga, VIZ Media
    • Jun Murai, Distinguished Professor, Keio University
    • Kensaku Fukui, Attorney at Law

     

    To start the discussion, moderator Kensaku Fukui introduced how Manga is received around the world, then described the current situation of Manga piracy, how the piracy sites operate, the results achieved in the years-long fight against Manga piracy, and the new problems being faced every day.

     

    Nicole Rousmaniere reviewed the Manga exhibition at the British Museum which she curated.

    She reported that the exhibition showed her that Manga culture has been embraced by all generations of all races, that the majority of visitors took away an emotional, rather than intellectual, experience, and that Manga has the power to transcend borders and boundaries.

     

    She stated, “Manga is one of the most precious treasures of Japan and becoming something really really special worldwide. For the future of Manga, we need to protect it, we need to protect the artist, we need to protect their ability to work with the publishers, and piracy is something that endangers the thriving industry”.

     

    Andy Nakatani noted that English-translated pirated Manga chapters are sometimes released even before the official release of the original Japanese version. To combat this, publishers created various official Manga platforms that release English-translated chapters at the same time as the official Japanese versions. However, he stated, the impact of piracy on the industry and the artists is still obvious: not only potential revenue loss but also a devaluation of the perception of what Manga really is and of all that Manga artists put into their work.

     

    Manga artist Moto Hagio emphasized how painful and sad it is for Manga artists not to be compensated when their works are read through pirated copies. She also shared her motivation for becoming a Manga artist: "I became a Manga artist because I knew that in the world of Manga, there is a world where people work together, comfort each other, and trust each other, and I wanted to inherit such a world", and that is still the world of Manga she seeks. On piracy, Hagio said that she learned there is justice in this world by reading Manga, and that the choice between reading the legitimate or pirated version of Manga may be a matter of how we live.

     

    Jun Murai recalled discussions with the IETF and WIPO on how to handle intellectual property in the Internet space at the dawn of the Internet, and stressed that it is still important to collaborate among industries and Internet organizations. Especially in the fight against piracy, which is simply a crime, he said, it is necessary to collaborate with people from many fields. In Japan, for example, the publishing industry is working together with multiple industries, including the internet and telecommunication industries, legal experts, law enforcement agencies, Internet organizations and governments around the world to combat piracy. Collaboration is the key, and the multi-stakeholder spirit of the IGF is very important to the fight against piracy.

     

    The following comments were made by the audience.

    • It is good to take a firm stand against those who steal money, but please don't be too hard on Manga fans.
    • I hope that Manga will be made available in many languages as cheaply and as quickly as possible, with the involvement of fans.
    • Moto Hagio's opinion was excellent, and I understand that we must support Manga, but please remember that there are people in the world who can only read Manga in a timely manner through pirated copies (due to language barriers).
    • One of the problems may be that intellectual property is not properly understood. To protect rights holders, it may be important to educate people.
    • In countries where there is no legitimate platform, piracy is a reality, so one measure against piracy is to develop legitimate platforms.
    • How many young people are involved in the fight against piracy? If they knew that they might not be able to read Manga in the future because of piracy, the younger generation might rise up and fight by sharing their skills.

     

    Comments from the audience suggested the importance of accessibility and awareness. The speakers' discussion emphasized the power of Manga, which piracy is taking away, and the importance of collaboration. The organizers of this session, Japanese publishers, took these opinions seriously and understood that they need to continue their efforts to deliver legitimate versions to people around the world in a timely manner and at a reasonable price, and that they should work together with more stakeholders than ever before in the fight against piracy.

    IGF 2023 WS #149 VoD Regulation: Fair Contribution & Local Content

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    While the significance of regulations such as Fair Contribution and Local Content Contribution is understandable, excessive regulation may lead to the withdrawal of VoDs from a country. To strike a balance, regulators should exercise restraint in implementation, clarify the importance of the regulations, and base decisions on evidence. All stakeholders must improve data transparency and respect accountability to the public when seeking fair contributions.


    Possible future regulations for VoDs include rules on the provision of information during disasters and restrictions on cross-media ownership as they expand their content from entertainment to news and beyond. The possibility of this developing into what could be called "Platform Neutrality" was presented.

    Calls to Action

    To regulatory authorities:
    Regulations on VoDs may cause the market to shrink, so when implementing such regulations, their purpose and effectiveness should be thoroughly discussed with all stakeholders on the basis of open and precise data.

    Session Report

    This workshop discussed the regulatory landscape and its impact on VoDs (Video on Demand operators), focusing on “Fair Contribution” and “Local Content Contribution”.

    Fair Contribution: 

    "Fair contribution" denotes the monetary or collaborative support that should be provided by OTT firms, particularly big techs that dominate VoD market, towards the enhancement of broadband networks that underpin internet video services, or an EU policy debate relating to this issue. (Prof. Jitsuzumi)

    Local Content Contribution: 

    "Local content contribution" refers to various initiatives undertaken by video streaming platforms to promote local (domestic) content. Since around 2020, the European and British Commonwealth countries have been imposing local content requirements, including local content quotas, prominence obligations, and financial contribution obligations on video streaming platforms. This is done with the intent of safeguarding their traditional audiovisual media industry and culture from global platforms, particularly the U.S. VoD giants. (Dr. Yonetani)

    Dr. Jitsuzumi gave the history of the discussion on Fair Contribution and the latest situation in each country. Dr. Yonetani presented on Local Content Contribution, including examples from various countries. Mr. Mizukoshi offered speculation on the current situation in Japan, where there is little discussion of these regulations and the reasons for this. In response, Ms. Cho introduced the fact that these two debates are active in Korea and explained the background and status of the Netflix vs. SK Broadband lawsuit that triggered the global Fair Contribution debate.

    Two points of discussion were exchanged with the participants.
     

    Q1) Are there any potential drawbacks or unintended consequences of introducing Fair Contribution and Local Content Contribution?

    The discussion concluded as follows:
    While the significance of the regulations is understandable, excessive regulation may lead to the withdrawal of VoDs from a country. To strike a balance, regulators should exercise restraint in implementation, clarify the importance of the regulations, and base decisions on evidence. All stakeholders must improve data transparency and respect accountability to the public when seeking fair contributions.

    Q2) What new regulations, if any, will be imposed on major VoD operators like Netflix?

    The discussion concluded as follows:
    Possible future regulations for VoDs include rules on the provision of information during disasters and restrictions on cross-media ownership as they expand their content offerings from entertainment to news and more. The concept of what we might term "Platform Neutrality" was introduced.

    Mr. Mizukoshi agreed that, with the exception of the Universal Service Fund rule, related regulations should be kept to a minimum. Thus, the overall opinion can be summarized as follows:
    Regulations on VoDs may cause market shrinkage. Therefore, they should be implemented with their purpose and effectiveness thoroughly discussed by all stakeholders based on open and precise data.

    IGF 2023 WS #95 Robot symbiosis cafe

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    A balance is needed between the economic efficiency expected of robots and the job satisfaction that people require.

    Session Report

    1 Robot symbiotic cafe
    1.1 The “Robot Symbiotic Cafe” began in June 2022 as a result of the roundtable discussion “Keihanna Residents” hosted by Kyoto Prefecture. A demonstration experiment was conducted in February 2023.
    1.2 The "Robot Symbiotic Cafe" is a project that allows people with severe disabilities who are unable to go out to remotely control robots via the Internet, in order to create future employment opportunities.
    2 Member introduction
    2.1 NPO Kyoto For Life (hereinafter referred to as Kyoto For Life): Based on the Act on Comprehensive Support for Persons with Disabilities, Kyoto For Life mainly guarantees a minimum wage to people with intellectual, physical, and mental disabilities. It runs a project that provides a place for people with disabilities to work with confidence and pride, allows them to find joy through work, and supports the economic independence that helps people with disabilities "work better and live better."
    2.2 Kyoto Prefecture Department of Commerce, Labor and Tourism, Manufacturing Promotion Division (Kyoto Prefecture): We are working on promoting various cutting-edge technologies, and robots are one of them. The Keihanna Robot Technology Center was opened in 2019 to support the development of next-generation technology, to encourage small and medium-sized enterprises and start-up companies in Kyoto Prefecture to enter the robot industry, and to improve the convenience and comfort of society through robots.
    2.3 Keigan Co., Ltd. (hereinafter referred to as Keigan): Our company is a robot startup founded in Kyoto Prefecture in 2016. Our mission is "Quick and Easy Robot for Everyone." When we first started our business, we developed the "KeiganMotor" as a motor that makes it surprisingly easy to build robots. This motor has been well received by customers in universities and in research and development fields. Since then, in response to feedback from customers who wanted to use the motors on factory production lines, we have provided robots that use them, such as conveyor rollers and AGVs. In 2022, we released the autonomous mobile robot "KeiganALI", which is widely used in factories, warehouses, restaurants and elsewhere. It is characterized by a high degree of customizability, allowing it not only to transport things but also to play a role in communication, depending on the user's requests.
    3 Demonstration experiment
    3.1 Robot operation
    3.1.1 Kyoto for Life: Introducing robots, the latest technology, into the field of support for people with disabilities is an unprecedented attempt, but being able to remotely control robots through simple operations is a great employment opportunity. We would like organizations that support people with disabilities to learn more about the possibilities this may open up.
    The people with disabilities who participated in the demonstration experiment (hereinafter referred to as the pilots) decided to take part after consulting with an organization that supports people confined to their homes. Because the pilot is confined to his home by a severe disability, he needs a robot that can be operated remotely from home. Also, since the pilot's disability makes complicated operations impossible, the robot's controls had to be made as simple as possible.
    3.1.2 Keigan: Because of the pilot's disability, we decided to use foot pedals instead of a keyboard or mouse. In addition, we combined the tables for serving meals into one to make the robot easier to operate with the foot pedal.
    3.2 Customization
    3.2.1 During the customization work, improvements were made based on the pilot's requests while discussing operating methods directly at his home. The pilots were very happy and undertook training to operate the robot, but they also found it tiring to operate it for long periods of time, so we further improved the user interface.
    3.2.2 This robot is based on the serving robot that we sell to restaurants, and we were able to reduce costs by using a commercially available foot pedal that can be easily purchased.
    3.2.3 The people with disabilities (pilots) targeted by this project each have different disabilities, and as the pilots become accustomed to operating robots, the tasks they can take on increase, so their demands on the robots change rapidly. The important thing is to clarify what kind of robot we want to create, so we discuss what kind of system is best while communicating with the pilot. The challenge was to define requirements and manage the cost and time required for development while the robot continued to evolve.
    4 Impressions/reflections, etc.
    4.1 Kyoto Prefecture: “Expression of individuality.” Creating a robot that can do every task requires development time and cost, but by skillfully combining areas where humans operate the robot and areas where the robot works automatically, robots can do things that humans cannot do, and humans can complement tasks that robots are not good at. By combining the human domain and the robot domain, I would like to work with everyone to create robots that are warm and expressive of human individuality.
    4.2 Kyoto for Life: The pilots were very pleased with this experience, saying, "I'm happy to be able to operate a robot" and "I want to serve more customers." In addition, a support organization for hikikomori (socially withdrawn) people was pleased with the results of the demonstration experiment, saying, "I was surprised that they had the will to do it themselves, and I'm very happy." Through our demonstration experiments, we hope that in the future people with disabilities will not only be able to operate robots, but also interact with people through robots, participate in society, work for themselves, receive a salary, and live independent lives, and that society will accept this as a matter of course.
    4.3 Keigan: Until now, our company has been in the position of making robots, and our engineers have focused on building robots using cutting-edge technology. In this project, however, we had the valuable experience of designing a robot to increase the sense of purpose in life and work for people with disabilities.
    5 Future developments
    5.1 Kyoto Prefecture: Our department is considering increasing the number of collaboration partners. We will consider collaboration with welfare departments within the prefectural government, collaboration between the administrative bodies of each city in Kyoto Prefecture, and collaboration with actual welfare sites. At the same time, we would like to encourage businesses in the prefecture to become aware of our efforts and encourage them to participate, while conducting public relations activities.
    5.2 Kyoto for Life: Through the use of robots, we hope to create new job opportunities for people with severe disabilities who have difficulty leaving their homes. First, we would like to consider the possibility of customizing the system for people with severe mental and physical disabilities. Next, for the future, we would like to have students from support schools cooperate with us in training pilots, developing technology, and creating systems. We would also like to collaborate with families of people with disabilities, life support businesses, etc. so that we can support people with disabilities who work with robots with confidence and pride. Finally, in order to increase the number of partners, we would like to spread this initiative by reaching out to other welfare facilities for people with disabilities. I believe that by having people involved in employment support see how a disabled person actually works as a pilot, their mindset will change.
    5.3 Keigan: So far, we have created the robot that we talked about today based on the technological assets that we have. In the future, we would like to utilize our technological capabilities and mobility to provide jobs for people with disabilities by providing robots customized to the individual pilots' requests. I want to create robots that not only improve productivity, but also give everyone a sense of purpose in life and work.

    IGF 2023 Networking Session #111 Meet&Greet for those funding Internet development

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    For the Internet to continue to grow and develop in support of the SDGs, those funding Internet development need to coordinate and articulate their efforts so that the funding made available not only supports project activities and initiatives with concrete deliverables but also contributes to the long-term stability and development of organizations involved in Internet governance discussions.


    Organizations that fund and support the IGF are concerned about the lack of funding for its stability and expressed their support for the continuation and strengthening of the IGF.

    Calls to Action

    Organizations funding Internet development raised the importance of engaging with the organizations doing research and implementation to better understand their funding needs and the challenges they face, to explore opportunities to design finance mechanisms through consultative approaches that respond to real needs.


    Organizations that fund Internet development encouraged funding recipients to structure clearer narratives about their vision for the Internet and the funding required to achieve it, and to communicate these effectively.

    Session Report

    This session broke into groups to discuss the challenges for funders of Internet development initiatives.

    Group A:
     

    A key challenge for one funder was finding activities which are aligned with the donor's vision and interests. Although grant terms are generally one year, donors usually look for impact that can be measured. This focus on (immediate) impact creates burdens for grantees, and impact can be difficult to measure, especially in the short term.

    A funding recipient said that they understand that donors want to measure impact, but then, in addition to the tools used for implementing activities, recipient organizations need to use other or new tools to measure impact. A one-year grant requires quick results, and that is not possible most of the time. In that grantee's case, they have dedicated staff for reporting and evaluation, but that is not a common situation for most recipients.

    Group B:

    A key discussion point was around how both donors and recipients can build skills to manage the grants received and to do it well. How can this be formalised/standardised?

    Some organisations have the potential to generate their own funds, but are not leveraging this enough. There needs to be more focus on this from the recipients (and skills development). This means there has to be consideration of multiple income streams, and whether grants alone can sustain the organization.

    Funders may need to consider how they can provide the tools to develop business models and marketing skills for their funding recipients.

    In some cases, they can help the recipient organisation to get to the next level. It is easier in smaller countries where everyone knows everyone. An example is for funders to have access to a pool of resources that can offer low-bono work to orgs that may need legal advice, marketing, website development, or give small extra funds to improve key skills at that point in time.

    Funders can look at how to provide access to creative designers to help communicate techy concepts translated into clear ideas and messages to the rest of the world.

    Fellowships train people and ask them to be ambassadors. This is empowering. But as a funder, it is not a self-sustaining initiative, as the funder will always have to find funds each period to pay for travel and attendance at conferences. Fellows grow their skills, but the funder then needs to consider how to leverage those skills, for example through alumni programs. There are certain initiatives and models that will always require funds. But is there another funding model?

     

    The question was asked: What about multi-year funding?

    One funder said they used to do single-year funding, but now they make larger grants over two years. They partner upfront to help guide the work with the organization. Rather than running a competitive process, they strategically selected partner recipients, whom they then encourage and support to seek multiple sources of funds.

    Group C:

    There was some discussion around the stability of recipient organizations – short term funding with short term goals doesn’t allow them much scope to invest in their ongoing sustainability.

    The question was asked how much of that was related to funding, since sustainability is also affected by external factors and unstable political environments; in this case, however, it was a question of financial stability.

    There was discussion around the balance between investing for measurable impact, versus investing long term in the core viability of organizations and keeping them sustainable. These two goals are often in tension.

    The session is available at:

    https://www.youtube.com/live/Jx1KjQepkOg?si=LldFPmphJE6rFDs3&t=19652

     

    IGF 2023 WS #477 Framework to Develop Gender-responsive Cybersecurity Policy

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    1. Cybersecurity, unlike other policy issues, is complex and requires a multifaceted approach involving different stakeholders and strategies. As a result, there are specific intersectional challenges.


    2. Gender in cybersecurity is not only a women's or tech issue; it is part of a power structure. It is urgent to create secure online environments for everyone while embracing intersectionality. This is why gender must be brought into every step of design, in order to impact the greatest number of people. Systemic approaches to cybersecurity are needed.

    Calls to Action

    1. Reevaluate the concept of security, overcoming its masculine framing.


    2. Educate engineers and their constituencies on a narrative and language that is conducive to considering the gender dimensions of cybersecurity, and raise awareness about the political complexities of building consensus about it.

    Session Report

    IGF 2023 "Framework to Develop Gender-responsive Cybersecurity Policy" (WS #477) | October 11, 16:30-17:30 JST.
    Moderated By: Veronica Ferrari, Global Policy Advocacy Coordinator (APC)
    Speakers: Kemly Camacho, Co-founder and General Coordinator, Sulá Batsú Cooperative; Jessamine Pacis, Program Officer, Foundation for Media Alternatives; Grace Githaiga, Co-Convenor of the Kenya ICT Action Network (KICTANet); David Fairchild, First Secretary, Permanent Mission of Canada.
    Online Moderator: Pavitra Ramanujam, Asia Digital Rights Lead (APC)
    Rapporteur: Karla Velasco, Policy Advocacy Coordinator of APC's Women's Rights Programme.

     

    There is increasing recognition in international, regional and national debates that different social groups are in different positions when dealing with cybersecurity threats. However, few countries have fully integrated gender considerations into their national cybersecurity policies. At the global level, although there is consensus on the need to bridge the digital gender gap and promote diversity in cybersecurity, clear guidance on mainstreaming gender into cyber norms is still lacking. This session discussed how to integrate gender perspectives in cybersecurity policy at national, regional and international levels. Speakers from diverse stakeholder groups covered issues ranging from what we mean by a gender approach to cybersecurity and how to deal with the challenges of integrating this perspective, to examples of how cybersecurity policy directly affects women and gender-diverse people in different regions of the world, and the status of the integration of gender in national and international cyber policy debates, among other issues.

    The moderator started the session by unpacking what a gender approach to cybersecurity is and providing an overview of the APC framework for developing gender-responsive cybersecurity policy. It was restated that gender in cybersecurity is not only a women's issue; it is intrinsically linked to power dynamics. A gender approach to cybersecurity entails recognizing the diverse risks and varied impacts, encompassing intersectional factors like race, ethnicity, religion and class, and the perceptions and practices of different groups and individuals. It embodies a comprehensive, systemic approach that integrates gender considerations at every stage of design and implementation to maximize its impact on a broader spectrum of people.

    The first speaker, Kemly Camacho, addressed the main issues that a gender perspective on cybersecurity should consider in Central America and the status of the integration of a gender perspective in cybersecurity policy in the region. Camacho provided an overview of Sulá Batsú's experience engaging in national cybersecurity strategy discussions in Costa Rica, where the organization participated in the strategy's creation and in monitoring and evaluating its action plan. Camacho shed light on the importance of looking at the government's budget allocation for cybersecurity to gauge the possibility of “doing things and of forming big coalitions for advocacy that should be based in social movements”. Camacho also stressed the importance of raising awareness of the gender implications of cybersecurity at the very beginning of the policy discussion.

    The session continued with Grace Githaiga who, based on her experience and work at national, regional and international levels and her direct work on cyber capacity-building for groups that experience disadvantage, explored the main intersectional challenges that policymakers should consider when shaping holistic cybersecurity policy. Githaiga also addressed how policymakers can effectively tackle these intersectional challenges, considering not only gender but broader inequality-related concerns. The speaker highlighted how cybersecurity is a complex, multifaceted policy issue that involves different domains and stakeholders. Githaiga spotlighted the lack of participation of the people affected by these policies and stressed that the groups most impacted and in positions of marginalization should be meaningfully involved in cybersecurity discussions.

    Next, Jessamine Pacis addressed how laws that are theoretically drafted to protect people are increasingly used to censor and criminalise women and LGBTQIA+ people around the world, and shared FMA's research and advocacy work on this in the Philippine context. Pacis outlined the problems that the Philippine cybercrime law presents from a gendered perspective and, in particular, the strategies the organization put in place to engage in cyber policy discussions and bring in feminist perspectives. A key strategy for advocacy involved collaborating with various networks, including children's rights groups, to ensure coordinated efforts. Since this law presented challenges on multiple fronts, Pacis emphasized, in agreement with the previous speaker, that a multifaceted approach is necessary. Lastly, Pacis mentioned their collaboration with a senator who is a strong advocate for gender issues, underlining the importance of forming coalitions and identifying champions as a key strategy.

    Finally, David Fairchild provided a comprehensive overview of how gender considerations appear in multilateral processes on cybersecurity such as the UN OEWG. The speaker stressed some of the crucial factors that a gender perspective on international cybersecurity should consider moving forward. Fairchild highlighted the importance of what happens between negotiations, noting that this is "a long term work" and that we need to keep pushing. Fairchild underscored the need for human rights and gender considerations in technical and engineering policy spaces, and the significance of reinforcing standards and international frameworks within these technical spaces to enhance awareness of and compliance with human rights and gender principles.

    The session continued with a Q&A, where audience members inquired and shared, among other issues, about the role of the private and the technical sectors in integrating a gender perspective to cybersecurity. One main take away of the discussion was the need to educate engineers and their constituencies on a narrative that is conducive to considering the gender dimensions of cybersecurity, and to raise awareness about the political complexities of building consensus about it.

    Speakers and the audience provided valuable insights on the main obstacles they have encountered when advocating for these considerations at national, regional or international levels, on how a framework like the one APC developed could offer solutions or support the various stakeholders in integrating a gender perspective into cybersecurity policy and norms, and on what future work on this agenda should focus on. Feedback on the APC framework during the session positively recognized its focus on reshaping discussions about security and its efforts to reform the policies and perspectives of various stakeholders, including government and private sector actors.

    The speakers and the moderator closed the session with a strong emphasis on the need for systemic approaches to cybersecurity. They stressed that due to its complexity, cybersecurity requires a multifaceted strategy involving diverse stakeholders and tactics. They underscored the importance of building coalitions and identifying champions to address these challenges effectively.

    Looking ahead, speakers highlighted the need to reassess our conventional understanding of security, since current security frameworks often have a masculine bias. It was emphasized that adopting a gender approach to cybersecurity goes beyond increasing women's participation in the IT sector or even incorporating gender provisions into policies.

    To conclude, speakers acknowledged the progress in incorporating gender perspectives into cybersecurity policy making but emphasized that there is still much work to be done. They highlighted the importance of further research, data collection, educational initiatives, and raising awareness as key elements for future work in this agenda.

     

    IGF 2023 Launch / Award Event #27 Education, Inclusion, Literacy: Musts for Positive AI Future

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Higher education must play a key role in educating humanity to prepare for the AI revolution. Digital literacy must become a core part of the higher education experience.


    The six principles set forth in the statement released at this session and developed through a global collaboration can provide a framework as institutions around the world develop policies and protocols that fit with their mission and values.

    Calls to Action

    Following this IGF session, the statement on higher education and AI should be distributed widely to reach as many institutions as possible.


    Higher education administrators and faculty members should be encouraged to carefully study the six principles of the statement and consider their relevance and use as policies and practices are developed at their own institutions. Higher education should be considered a leading force in the wise development of AI technologies.

    Session Report

    With artificial intelligence broadening its impact on all aspects of life, Elon University leaders have coordinated development of a statement of principles to guide higher education institutions as they prepare humanity for the revolution brought about by this rapidly evolving and groundbreaking technology.

    The statement was co-authored by Elon President Connie Ledoux Book, Elon scholar-in-residence Lee Rainie and Professor Divina Frau-Meigs of Sorbonne Nouvelle University in Paris and has generated feedback and support from higher education organizations, leaders and scholars from around the world. (Comprehensive details at: https://www.elon.edu/u/ai-higher-education/)

    The authors were joined by fellow scholars on Oct. 9, in Kyoto, Japan, at the 18th annual United Nations Internet Governance Forum. They led a discussion about the multitude of ways higher education institutions can develop artificial intelligence literacy and commit to serving society’s best interests as these technologies continue to expand.

    Book explained that the six principles offered in the statement embody a call for the higher education community to be an integral partner in development and governance of AI.

    “The statement provides a framework for leaders at colleges and universities around the world as they develop strategies to meet the challenges of today and tomorrow,” Book said during the session, which generated in-person and online attendees from around the globe. “At Elon University, faculty are adapting the statement as they create policies on AI and design new approaches to teaching and learning.”

    The six foundational principles that are outlined in the statement are:

    1. People, not technology, must be at the center of our work
    2. We should promote digital inclusion within and beyond our institutions
    3. Digital and information literacy is an essential part of a core education
    4. AI tools should enhance teaching and learning
    5. Learning about technologies is an experiential, lifelong process
    6. AI research and development must be done responsibly

    The statement issues a call for the higher education community, and not just those within traditional technology fields, to be deeply involved in the development of governance mechanisms for AI, mechanisms that should be crafted by multiple stakeholders.

    “Educators in all fields are well suited to provide intellectual and ethical guidance, conduct much-needed research, serve as trustworthy watchdogs and be advocates for learners, teachers and society,” the statement reads.

    Rainie joined Elon this year as scholar-in-residence after serving for 24 years as director of internet and technology research at the Pew Research Center. He has also been a research partner for Elon’s Imagining the Internet Center for more than 20 years. At the session in the Kyoto International Convention Center, Rainie explained that the six principles bring time-tested truths to artificial intelligence and are essential for maintaining human rights, human autonomy and human dignity.

    “Clearly, we are at a singular moment now as AI spreads through our lives,” Rainie said. “In the past, tools and machines were created to enhance or surpass the physical capacities of humans. The advent of AI brings technologies that enhance or surpass our cognitive capacities.”

    Rainie has begun his work at Elon with a research survey of global experts and the general public in the United States to explore the views of both groups about how the benefits and harms of AI may unfold in the years to come. That work, which will be released in early 2024, builds upon the decades of work of the Imagining the Internet Center to catalogue the insights of hundreds of experts about how the digital revolution impacts humanity.

    Rainie noted that past surveys have generated a wide range of answers to questions about the digital revolution, but there is a universal purpose that can be seen driving those answers. “They want us to think together to devise solutions that yield the greatest possible achievements with the least possible pain,” Rainie said.

    With the introduction of new generative AI tools such as ChatGPT in late 2022 and the increasing integration of AI technologies into a broader range of platforms, discussion around their long-term impact has exploded. Given the increasing complexity of AI systems and the newness of the technology, those discussions have often fostered panic among many in the population.

    “As researchers, we have to resist the panic, the current panic about AI systems and the fact they might produce superintelligence that is more intelligent than us,” said Frau-Meigs, who serves as the UNESCO chair for Savoir-Devenir in sustainable digital development. “We need to lift fear and anxiety. … we as universities have to come up with solutions for learners worldwide.”

    Frau-Meigs said it is important to promote media and information literacy first to create a familiarity with concepts and issues that allows larger segments of the public to move on to AI literacy. “We want to leave a space for understanding and for adoption,” she said.

    Joining the statement authors for Monday’s session were other scholars from a range of disciplines who examined how higher education can proactively engage in AI governance and development.

    Alejandro Pisanty, a member of the Internet Hall of Fame and professor of internet governance and the information society at the National Autonomous University of Mexico, said the ongoing development of AI is having an impact on higher education by drawing talent away from university research centers. “First-world countries are seeing what we have suffered from in developing countries for decades, which is brain drain,” said Pisanty.

    That makes it important for higher education to take a proactive role in the development and governance of AI in the years ahead. “The highest cost we would incur is the cost of doing nothing,” he said.

    Francisca Oladipo, vice chancellor and professor of computer science at Thomas Adewumi University in Nigeria, similarly warned against inaction and against the temptation to think that the issues surrounding AI are relevant only to fields related to technology and computer science.

    “In Nigeria, we have viewed AI as something for computer people, but that’s no longer the case,” Oladipo said. “We need to be more inclusive to embrace everyone because the application of AI is across every field.”

    Eve Gaumond, a law researcher at the University of Montreal Public Law Research Center, is focused on artificial intelligence in higher education, freedom of expression in a digital context and access to justice. She said it is crucial that those developing AI systems in higher education have a deep understanding of the technologies and ask good questions. “Oftentimes ed tech looks like modern snake oil,” Gaumond said. “And modern snake oil can have real negative impacts. The datafication of students’ lives can discourage them from engaging in meaningful, formative experiences, and it’s especially worrisome when we know that the data starts being collected as early as primary level and continue following them through high school and university.”

     

    Also participating in the session:

    • Siva Prasad Rambhatla
      Retired professor and leader of the Centre for Digital Learning, Training and Resources, University of Hyderabad, India
    • Wei Wang 
      Member of the IGF Dynamic Coalition on Data and Artificial Intelligence Governance; teaching fellow, Fundação Getulio Vargas (FGV) think tank in Brazil; University of Hong Kong School of Law doctoral student
    • Renata de Oliveira Miranda Gomes
      IGF 2023 Youth delegate representing Brazil; recently earned a master’s degree in communication at the University of Brasilia

    IGF 2023 Lightning Talk #93 Tech Policy Atlas: Your One Stop Shop for Internet Policies

    Updated:
    Digital Divides & Inclusion
    Session Report

     The 2023 Internet Governance Forum in Kyoto highlighted the global challenge of regulating digital technologies, with countries exploring various approaches. The Australian National University Tech Policy Design Centre’s vision is to help develop best practice tech policy that enriches society.

    To support this vision, we've established the Tech Atlas, a comprehensive public database encompassing global tech policies, strategies, legislation, and regulations. Our goal is to make it the primary resource for independent researchers and governments with limited resources. The Atlas serves as a tool for understanding diverse approaches to tech policy and regulation, identifying best practices, and exploring opportunities for harmonisation. 

    As countries introduce a range of tech regulations, the Tech Atlas simplifies the process of tracking and accessing them. Key discussions at the IGF, including artificial intelligence, cybercrime, and online safety, are categorised in our database for easy navigation within the Atlas. 

    We understand access to well-documented policies is crucial for researchers, enabling the identification of trends, evidence-based studies, and the formulation of actionable advice for governments and industries. The Atlas aims to be a go-to platform for tech policy information. 

    No one knows a jurisdiction as well as its citizens, and that is why we want to ask for help from others around the globe. We rely on users to spread the word that the database exists, and we have made it as easy as possible for people to make corrections and contributions. These entries are verified by our researchers and then uploaded to the Atlas.

    Visit the Atlas, explore the database, and contribute to advancing global understanding of tech policies. The expertise of our users is invaluable in shaping a better technological future for humanity.

    Key Outcomes 

    • The Atlas is a tool for evidence-based research into global tech policy. It will facilitate better research into tech policy from industry, government, and civil society. 

    Call to Action 

    • If you can, please contribute any new policies or regulations to the Atlas. We rely on user contributions to keep it updated. 

    IGF 2023 Day 0 Event #42 “Trusted Personal Data Management Service(TPDMS)” Program

    Updated:
    Data Governance & Trust
    Key Takeaways:

    TPDMS, also known as the “Personal Data (Trust) Bank”, is a mechanism that reduces information asymmetry.

    ,

    TPDMS qualification examples: Participation of Individuals (Controllability) and a Data Ethics Board

    Calls to Action

    Human-Centric Approach and Information Bank

    ,

    Enhanced Data Access and Trusted Data Intermediary

    Session Report

    Session Presentation

    Nat Sakimura from the Information Technology Federation of Japan (IT Renmei) presented the “Trusted Personal Data Management Service (TPDMS) Certification Program”.

    In the G20 Osaka Leaders’ Declaration (2019), G20 countries committed to facilitating the free flow of data and to strengthening consumer and business trust. To realize the safe and free flow of data, the Ministry of Internal Affairs and Communications of Japan has established the Information Bank system.
    The Information Bank is seen as a third way, different from “CRM”, in which a single company manages customer information, and “VRM”, in which individuals manage their own personal information. Under this scheme, an Information Bank certified according to guidelines set by the government holds personal information in trust and supports data distribution and data utilization.
    The guidelines established by the government have the following characteristics:
    1) Certification Criteria
    2) Model Terms and Conditions
    3) Governance
    With respect to governance in particular, it is important that both the certification body and each Information Bank have a Data Ethics Board. This makes it possible to protect individual privacy and to build trust in data distribution and data utilization.

    The IT Renmei certifies Information Banks according to these guidelines. The certification, called TPDMS (Trusted Personal Data Management Service), has the following characteristics:
    T: Third Way for Personal Data Ecosystem
    P: Participation of Individuals (Controllability)
    D: Data Free Flow with Trust
    M: Multi-stakeholder Governance
    S: Soft Law (Co-regulation) by Public-private Initiative
    An Information Bank certified under TPDMS is a Personal Data (Trust) Bank that receives data in trust from individuals and supports its distribution and utilization. In return, in the unlikely event that an individual is disadvantaged, for example by an information leak, the bank acts as the primary point of contact and is responsible for compensation. The aim is to dispel individuals’ concerns and enable smooth data utilization.
    The Personal Data (Trust) Bank’s Data Ethics Board ensures this by reviewing data collection methods, the purposes of data utilization, the provision of data to third parties, and so on.
    In addition, the certification is aligned with ISO security management and privacy enhancement standards, so that it is comparable to global standards.

    Discussion

    Kazue Sako (Vice Chairperson, MyData Japan) agreed with and supported IT Renmei’s work. She recognizes the same issues and, through the activities of MyData Japan, is working to realize the safe, secure and consumer-led distribution of personal information.
    She asked why such good initiatives have not become more widespread.
    IT Renmei believes there are several reasons for this, but the most important one is that data portability has not yet been legislated in Japan and is not fully recognized as a consumer right.

    Christian Reimsbach-Kounatze (OECD Secretariat) said that Japan is leading the way in this field.
    In other countries, data brokers distribute data against people's wishes and use it for commercial purposes.
    He said that there is a big difference between such brokers and Japan's Personal Data (Trust) Banks, which support data distribution on the consumer's side.
    He then raised the following questions:
    ・Will Personal Data (Trust) Banks be useful even in Europe, where data portability is already allowed?
    ・Is it possible for small and medium-sized enterprises to realize this? 
    ・Furthermore, if we are to leave discretion to individuals, how much discretion can we leave to them, and will they be able to understand and exercise it? 

    Summary
    ・TPDMS is a mechanism that helps reduce information asymmetry among data economy participants by implementing transparency, accountability, and controllability by individuals. 
    ・The TPDMS certification scheme, formed through public-private partnership, helps build trust by removing the need for each participant to perform their own verification.

    IGF 2023 Lightning Talk #117 Promote next-gen internet governance via youth-led research

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Youth-led research serves as a potent catalyst for empowering and involving children and youth in internet governance discussions. It presents a proactive approach to youth engagement and empowerment.

    ,

    Developing a sustainable model is essential to facilitate the broad adoption of the youth-led research approach in internet governance, ensuring the longevity and effectiveness of these initiatives.

    Calls to Action

    Support youth participation in internet governance through diverse and innovative approaches, fostering a more inclusive and sustainable governance framework.

    ,

    Create platforms for youth voices at local, national, regional, and international levels, enabling a dynamic exchange of ideas and perspectives among young people and different stakeholders.

    Session Report

    The lightning talk was co-organized by the eHelp Association, the Cybersecurity Youth Committee, and the Child Research Officers (CRO) aged 11-18. 

    The objectives of the session were to:

    i) provide a platform for young people to express their concerns about internet issues directly

    ii) share the key insights gained from adopting the youth-led research approach to engage the younger generation in the Internet governance domain, and more importantly, to share key takeaways from both the program coordinator and participants’ perspectives

    iii) allow young people to present their future plans for supporting peers’ involvement in the Internet Governance discussion, and 

    iv) create opportunities to co-manage and co-design this sharing experience with young people in an International internet policy discussion platform

    The youth-led research approach consists of three key components: i) Children and youth researcher training and research preparation, ii) Presentation at the Annual Forum, including Youth Summit in China, and iii) Exchange of views at international conferences and visits, such as Asia Pacific Regional Internet Governance Forum.

    During the session, CROs shared their insights and addressed questions including: What are the internet issues that concern children and youth researchers the most? How was the research experience, and how has it influenced them? What are the next steps in the Internet Governance journey?

    From the sharing, key elements of the youth-led research approach were emphasized, including: 

    i) Inquiry-driven: The approach is centred around the interests of young researchers and is designed to ignite curiosity. 

    ii) Authentic and relevant: The research topics are chosen by young people and align with their daily life experiences. This relevance creates a sense of urgency when addressing real-life challenges. 

    iii) Platform for expression: It provides a valuable opportunity for young researchers to express their thoughts, perspectives, and concerns. 

    iv) Meaningful participation and view exchange: The approach encourages active participation and facilitates the exchange of diverse viewpoints with different stakeholders from diverse backgrounds. 

    v) Sustainable engagement: It aims to go beyond one-off initiatives, fostering a culture of ongoing involvement and commitment. 

    vi) Starting with small steps: It encourages young researchers to take initial steps and create a ripple effect, inspiring their peers to get involved as well.

    The approach demonstrated a significant impact on different stakeholders. For children and youth, it fosters the development of important skills such as perspective-taking, logical thinking, and evidence-based reasoning. It also helps them develop a sense of ownership and active participation in shaping a better internet. By engaging in research, they contribute to the co-construction of an internet environment that meets their needs and aspirations. For adults, including government officials, corporate representatives, youth workers, educators, and parents, the approach offers valuable insights and helps them gain a better understanding of the actual needs and concerns of children and youth. By valuing the voices of youth and children, it enables adult stakeholders to make informed decisions and take appropriate actions. For example, policymakers can develop comprehensive policies that address the challenges faced by young people in the digital realm. Similarly, program coordinators can design initiatives that effectively promote the well-being and empowerment of children and youth in the online world.

    The session ended with a Q&A from the floor. One of the guests proposed utilizing online crowdfunding methods to support the follow-up initiatives by youth researchers. This would not only raise public awareness but also provide the necessary funding for operational projects. Another suggestion was to explore opportunities for collaboration with regional Internet Governance platforms. By partnering with these platforms, it would be possible to enhance the research efforts and reach a broader audience, thereby maximizing the impact of the youth-led research approach.

    IGF 2023 DC-IUI Advancing rights-based digital governance through ROAM-X

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    The session of the Dynamic Coalition on Internet Universality Indicators (IUI) placed a central focus on the upcoming revision of the framework. Panelists highlighted the necessity of adapting the framework to the evolving digital landscape, recognizing the need for relevance and continuous improvement.

    ,

    Challenges in the digital governance landscape, such as disinformation, AI, and cybersecurity, were framed within the context of the impending revision, highlighting the framework's need for adaptability to address emerging issues.

    Calls to Action

    UNESCO calls upon the international community to unite within the DC on IUI to extend the global reach of ROAM-X assessments, thereby constructing a universal internet rooted in the foundational principles of ROAM. UNESCO invites active participation in the ongoing revision of the IUI Framework, fostering a truly inclusive and diverse multistakeholder engagement.

    ,

    UNESCO prompted all global stakeholders to embrace the multistakeholder approach to oversee evidence-based research with a diversity of perspectives, identify areas for improvement, and contribute to shaping an Internet that serves and empowers all.

    IGF 2023 WS #165 Beyond universality: the meaningful connectivity imperative

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Universal and meaningful digital connectivity - the possibility for everyone to enjoy a safe, satisfying, enriching, productive and affordable online experience - is key for enabling digital transformation and achieving the SDGs. Achieving universal and meaningful digital connectivity requires policy makers to embrace the concept and include it in national digital policies and policy plans.

    ,

    Good quality data on all aspects of universal and meaningful connectivity are essential to inform and monitor digital policies, highlight to policy makers where the digital divides in a country are, and how severe they are.

    Calls to Action

    Policy makers need to adopt universal and meaningful connectivity, set targets, and include it in national digital strategies.

    ,

    Policy makers should use good quality data on universal and meaningful connectivity in their digital policies and, where such data are lacking, request them from the relevant national statistical agencies and fund data collection.

    Session Report

    The objective of the workshop “Beyond universality: the meaningful connectivity imperative” was to inform the audience how universal and meaningful connectivity is defined; how it can help reach underserved communities; which targets and baseline indicators are needed to assess where a country stands; and why it is important to include the concept in national policy plans.

    The session aimed to answer two policy questions:

    A. How can governments and stakeholders ensure universal and meaningful digital connectivity for all citizens, particularly those in underserved and marginalized communities?

    B. How can policymakers establish robust measurement frameworks and indicators to accurately assess the progress, impact, and effectiveness of initiatives aimed at achieving universal and meaningful digital connectivity?

    The panel provided the audience with perspectives from very diverse countries, namely Lithuania, Bangladesh and Brazil, as well as information on how the European Commission partners with other parts of the world.

    The workshop started with a recorded message from Dr. Cosmas Luckyson Zavazava, the Director of the Telecommunication Development Bureau of the ITU, which was followed by a short introduction to the project on promoting and measuring universal and meaningful connectivity.

    The first panellist was Agne Vaiciukeviciute, Vice-Minister of Transport and Communications, Lithuania. Lithuania is a country that reached most of the targets of universal and meaningful connectivity in a relatively short period. The Vice-Minister explained how policy played a pivotal role in getting there. She spoke about the role of public libraries in skills development, broadband deployment across the whole country, and the fact that Lithuania has the lowest prices in Europe. Various NGOs have initiatives to help citizens use the Internet. Collaboration between civil society, government and the private sector is very important.

    Mr. Alexandre Barbosa, Head of the Center of Studies for Information and Communications Technologies (CETIC.br), Brazil, gave a presentation on how solid data can inform policy makers on where the country stands with respect to UMC, where the digital divides in a country are, and which groups are vulnerable. He also mentioned that technology targets are moving targets: “What is good today may not be enough tomorrow”.

    The next speaker was Mr Peter Mariën, Directorate-General for International Partnerships, European Commission. He introduced Global Gateway, a programme through which the EU is strengthening connections between Europe and the world and helping partner countries address the digital divide and further integrate into the global digital ecosystem. He said: “We need to make things happen in the field. But we need to do the basic homework: what is it that we need to do?”

    The last speaker was Mr. Anir Chowdhury, Policy Advisor, a2i Programme, Bangladesh, who joined the session online. Bangladesh is a large Asian country that still has a considerable journey towards UMC, but one where connectivity is considered important. There are many initiatives underway in the country, and 98% of the population has access to a 4G network; however, only half of the population uses the Internet. The regulator has put a cap on the price of Internet access, but handsets that are too expensive form an important barrier. Another barrier is skills, as highlighted by Mr Chowdhury: “Connectivity is important, but digital skills and service design are equally important.”

    IGF 2023 Open Forum #20 Benefits and challenges of the immersive realities

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Exploring the Metaverse's expansive scope, distinctive features, and intricate complexities. Acknowledging the need to mitigate potential risks and address human rights challenges. Emphasizing the importance of awareness, education, and collaborative efforts to shape its development while upholding democratic principles. The CoE is dedicated to safeguarding rights in the digital age, working with other concerned stakeholders to this end.

    Calls to Action

    Metaverse challenges demand multidisciplinary collaboration and a multistakeholder approach for tailored solutions. Explore existing instruments, legislate as and if needed, and develop new (socio)technical and other standards to support implementation. Uphold and promote human rights, the rule of law and democracy in the digital age, without stifling innovation.

    Session Report

    The Council of Europe organized an Open Forum during the Internet Governance Forum (IGF) 2023, focusing on the transformative potential of the immersive realities such as the Metaverse and its implications for human rights, the rule of law, and democracy. The session aimed to shed light on the complex issues surrounding the Metaverse and engage industry representatives, human rights organizations, and partner organizations in a dialogue about its opportunities and challenges. The session tackled the ongoing work on the joint comprehensive report prepared by the Council of Europe in collaboration with the Institute of Electrical and Electronics Engineers (IEEE).

    The following key messages emerged during the panel discussions:

    Understanding the Metaverse and its components: The term "Metaverse" is often used interchangeably with immersive realities and virtual worlds. Virtual worlds and immersive realities are components of the broader Metaverse. While virtual worlds and immersive experiences already exist in consumer applications, such as VR gaming and social platforms, the Metaverse offers vast application potential, including government use and the exercise of human rights and fundamental freedoms, such as political participation through e-voting and virtual governments.

    Defining Features of the Metaverse: The Metaverse is characterized by several key features, including a sense of presence, immersiveness, persistence, convergence of physical and virtual worlds, and seamless interconnectedness. These features rely on underlying technologies such as VR headsets, AI, blockchain, 5G, and IoT. These technologies enable the technical implementation of the Metaverse's features but should not be confused with the Metaverse itself. The Council of Europe recognizes the need to address both existing harms and potential risks associated with the Metaverse, along with its benefits.

    Complexity and uncertainty: The Metaverse's evolution is marked by complexity, involving enabling technologies like AI, VR, and blockchain. Lessons learned from these technologies can inform our understanding of the Metaverse's implications. Raising awareness, educating stakeholders, and fostering dialogue between policymakers and technologists are essential to identifying needs, gaps, and feasible measures.

    Challenges to Human Rights, Rule of Law and Democracy: The Metaverse presents unique challenges related to identity, privacy, safety, security, protection of vulnerable populations, access, inclusion, freedom of expression, censorship, labor, environment, and the rule of law. These challenges include issues like data privacy, digital territoriality, and enforcement. The intrusion of immersive technologies into personal spaces raises concerns about mental privacy and autonomy.

    Some of the concerns already experienced are expected to be exacerbated, and new considerations will arise. We will have to rethink how we define, perceive and safeguard identity, reality, choice, consent, agency and data, and how we distinguish public from private space, given the intrinsically pervasive nature and increased breadth of sensing that immersive technologies require to drive key functionalities. The unprecedented nature and volume of data collection, and the intrusion into our homes, networks and minds, create greater risks to bystander privacy, freedom of thought and the related mental privacy and mental autonomy, with new frontiers in manipulation alongside existing risks. Implementation, compliance and enforcement are also expected to become more complex due to the complex nature of immersive realities.

    Behavioral and space management: The Metaverse represents a shift from content and agent management to behavioral and space management. This shift necessitates a balance between maintaining existing considerations and adapting to the unique dynamics of the Metaverse.

    While we do not know how, or to what extent, the Metaverse will evolve, we do know that it is complex, resting on a series of enabling technologies such as AI, VR and blockchain. Lessons learned from these technologies, together with further research, can help us better understand the implications across different areas. Awareness raising and education will be key to creating common understanding and related skills, while dialogue between policy makers and technologists can help identify needs, gaps and the feasibility of considered measures, along with an assessment of the roles and responsibilities of different profiles.

    Regulation, standards, and legislation: While the Council of Europe has developed a robust framework of instruments and recommendations in the areas of human rights, the rule of law, and democracy, the Metaverse presents unique challenges that may require further exploration and tailored solutions. Collaborative efforts involving the Council of Europe, member states, civil society, and other stakeholders will be essential to address these challenges effectively and uphold fundamental values in the digital age.

    Technical standards provide flexibility and adaptability, while legislation establishes red lines and reinforces rights protection. Policymakers must strike a balance and prioritize areas where legislation is necessary.

    Stakeholder involvement: Collaboration among different stakeholders is essential to establish common language and principles for the Metaverse. Ensuring that the Metaverse remains human-centric and aligned with values, human rights, the rule of law, and democracy requires ongoing partnership and involvement.

    In conclusion, the Open Forum emphasized the need to recognize the potential and challenges of the Metaverse. It highlighted the importance of continued dialogue and collaboration among stakeholders to shape the Metaverse in a way that upholds human rights, the rule of law, and democratic principles while harnessing its potential for societal benefits.

     

    IGF 2023 Lightning Talk #106 Addressing the changing cybersecurity landscape

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The world is witnessing unprecedented cyber threats to businesses, governments, and individuals. Google develops built-in, automatic protections, many of them using AI applications, that detect and block threats before they ever reach users. Google is also a trusted local partner for governments and enterprises, offering cybersecurity certificate programs and helping underserved organizations to learn cybersecurity skills and protect themselves

    Calls to Action

    Learn how Google protects people from cyberattacks  https://safety.google/intl/ja/cybersecurity-advancements/

    ,

    Prepare for a career as a cybersecurity analyst with a professional certificate from Google  https://www.coursera.org/professional-certificates/google-cybersecurity…

    Session Report
    Around 45 people attended the session, in which Shane Huntley shared Google’s approach and key initiatives in cybersecurity protection: Google is committed to keeping people safe online by integrating cutting-edge AI technology. Secure by Default products such as Google Play, Gmail, and Chrome protect billions of users from cyberattacks. In October, Google launched Google Career Certificates and APAC Cybersecurity Funding to bolster the number of cybersecurity professionals. The Threat Analysis Group continuously monitors attackers. Google will continue its efforts as a trusted partner to enhance online safety. In the Q&A session, he answered questions about improving diversity, partnerships, and other topics.
    IGF 2023 DCCOS Risk, opportunity and child safety in the age of AI

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    • Children have the right to safe, inclusive age-appropriate digital spaces; these must be in line with the evolving capacities of each child.

    ,

    • To create such a digital environment for children, we need a broad focus and to take into account the perspectives of children, parents, educators, the private sector, research, policymakers and civil society as much as possible.

    Calls to Action

    • We call upon all IGF stakeholders and members of the DC Child Rights to engage their communities, including children and youth, in ensuring a child rights-based approach in their work.

    ,

    • We call on the IGF community to join the DC Child Rights to continue and enhance the dialogue initiated at IGF Kyoto 2023.

    Session Report

    Session Report 2023: Risk, opportunity and child safety in the age of AI

    Dynamic Coalition on Children's Rights in the Digital Environment

    Key Issues raised

    1. Obstacles to Child Rights in Practice: Ensuring child rights – as outlined in General Comment No. 25 (UNCRC) – requires a constant balancing exercise based on the best interests of the child; regulation often prioritizes certain rights over others, such as protection over empowerment, and socio-cultural differences exist; research, including with children, is essential to assess and mitigate risk. 
    2. Lack of knowledge about the impact of technology on society: There is an urgent need for improved understanding by children and adults in all settings of the risks and applications of AI and other advanced technologies in their lives.
    3. Regulation, corporate responsibility and diverse consultation: Regulation and codes of practice are urgently needed to ensure action against illegal content, conduct and contact such as child sexual abuse material; this needs to come alongside evidence-informed commitments and transparent processes.

    Key Intervention themes

    1. The view and evolving needs of children: Deutsches Kinderhilfswerk highlighted the results of a meta-research focused on children aged 9-13. Reflecting a key principle of the GC 25, younger children need a strong social support system, while older children are more comfortable using tools such as blocking and reporting but also need information on how to report people, how to block people and how to protect themselves from uncomfortable or risky interactions.
    2. Public Awareness: ChildFund Japan presented a recent public opinion survey of people aged 15–79 in Japan. The results highlighted the internal conflict between human rights, especially child rights, and freedom of expression. Asked about computer/AI-generated CSAM, some respondents considered it acceptable as a method to prevent ‘real’ crime, demonstrating a lack of understanding of the multifaceted harm of CSAM. 20% of respondents said they did not know about the risks of AI, or did not even understand AI, which points to a need for much better education and awareness. 
    3. Participation and Trust: The APAC Youth IGF perspective called for more youth voices in internet governance spaces. Even young adults cannot advocate fully for younger generations that face more complex challenges, risks and harms than previous generations. It is natural for children to turn to parents and caregivers, but many adults do not grasp the complex nuances of the risks. Fostering trust and ensuring fundamental digital knowledge are essential steps in creating a reliable and ethically responsible digital environment for the younger generations.
    4. Investment in prevention alongside regulation: The CJCP highlighted that relying on children’s rights even as they are contained in the CRC and in General Comment 25 is complex today due to interpretation. Often, emphasis is placed on specific rights rather than equitable embracing of all child rights. Regulation is essential but alone will not resolve the challenges presented by emerging technologies, immersive spaces, and AI.  States must put proportionate investment into prevention – education and awareness-raising.
    5. Risk Assessment that embraces diversity to inform design: Microsoft highlighted the necessary balance between regulation and outcomes-based codes that can offer more flexibility for different services to navigate inevitable trade-offs and achieve the safety and privacy outcomes that are desired. Risk assessment and needs analysis also means improving ways to understand impact on issues through an age-appropriate and gender lens among others. This requires greater consultation with children themselves to inform debates such as those around age assurance.

    Participatory Discussions

    • Children’s Rights in Policy and Practice: Children have the right to participate in a digital environment and to be protected when using digital services. The collection of data by services, the sharing of information without children's consent by parents or others and the risks of interacting with peers can affect children's well-being and influence their use of digital services. Children need (more) media literacy and parents, educators and other adults should be aware of children's rights and also acquire knowledge about the digital environment. Platforms play a central role, and we need to overcome the contradiction between providing age-appropriate, safe services for children without knowing the age of users of the service.
    • Regulation: Regulation takes a long time, which we do not have in the fast-changing landscape. All sectors need to be aligned on what child rights are, and tech companies must make transparent commitments based on risk assessment and mitigation that is differentiated by service and product based on safety-by-design principles and practice. Regulation is complex, and there are particular challenges in taking into account the individuality of each child and their developing capacities. It should not be forgotten that existing regulation and the way it is implemented can also lead to further disparities. Regardless of this, it seems appropriate to hold service providers accountable and give them guidance on providing safe services for young people that ensure the rights of the child.
    • Geographical differences also impact children’s rights more deeply. Participants from Brazil, Bangladesh and Afghanistan highlighted the challenges that these countries face with technology, access and capacity building. Despite international standards, these are not applied equally. Evidence shows that some children, especially from the Global South, have a much lower level of protection than those from the Global North. Young people themselves call for clear definitions and scope around online safety to frame the conversation equally. Furthermore, the evolving capacity of children is largely influenced by the different contexts in which they live. 
    • Research: More research is needed about children’s experience and resilience. This will enable solutions that meet children's best interests and guard against legislation and policy that are out of step with reality. Safe spaces where children can implement their own ideas without being affected by product guidelines or market-driven interests are now created by civil society organisations, communities and families. Therefore, we need research to inform policy and practice on safe social spaces, e.g., in the Metaverse.

     

    Participant suggestions regarding the way forward/ potential next steps

    • Use the IGF platform to help align, localize and make inclusive approaches: To create safe digital environments for children and young people, it is essential to understand how risks are regulated in different parts of the world. This also requires caution against applying existing legislative or policy approaches from one location to another without proper analysis of local context and culture.  It is also important not to underestimate different needs, for example around gender and disability; doing so may result in services that fail to prioritize the best interests of every child. And to bring children's rights and their interests more strongly into discussions about the design and regulation of the internet, relevant exchange and cooperation with other Dynamic Coalitions should be expanded. Parallel to this, the existing DC Child Rights should also intensify its work to facilitate child rights debate within and across the IGF community, also serving as a reference point for others.
    • Improve public understanding of and participation in debates around technology and society: Many people do not understand how existing and new technology such as Artificial Intelligence works. Improved education and awareness are needed for both children and adults. Governments, civil society and tech companies must do much more to bring children and young people into the center of these debates, in a meaningful and outcome-driven way. Their generation-specific experiences will ensure a sustainable approach to tech policy and internet governance at this pivotal moment.
    • Consider clearer guidance on what the balance of rights, as well as trust and transparency look like for different people in the digital world: There is an urgent need to build and improve trust - trust in algorithms, in institutions, in companies, and in each other whether youth peers or adults – to address the key challenges presented by today’s digital world.  And trust must be built on transparency of design, decision-making, processes, and action that can enable informed public debate.
    IGF 2023 Networking Session #44 Meeting Spot for CSIRT Practitioners: Share Your Experiences

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Cybersecurity practitioners with different perspectives should be interested in sharing what information they deal with in their own activities, and acknowledging what they can and cannot compromise will increase their awareness of participating in cybersecurity discussions.

    Calls to Action

    Participants agreed to promote the IGF more to cybersecurity practitioners not only to enhance collaboration with other stakeholders but also focus on diversity within the cybersecurity cluster to encourage mutual understanding.

    Session Report

    Prior to discussing the presented guiding questions, the moderators, Hiroki Mashiko, Bernhards Blumbergs, Adli Wahid, and Masae Toyama, as well as the participants, introduced themselves to get to know each other. Toyama explained the background of the session proposal. Her idea was to address the fact that there are few chances to meet CSIRT practitioners at the IGF and to create a place that attracts them and the wider technical community. She said that while CSIRTs play an important role in keeping cyberspace secure and stable, their voice at the IGF is not yet loud enough amongst the various stakeholders.

    She explained how the networking session ran. All participants were invited to stand up and walk freely around the room to talk to someone they had not spoken to yet. One feature of the session was that participants were encouraged to pick coloured sticky notes showing their own stakeholder group, which helped to break the ice and foster conversation.
    In each 10-minute round, seven minutes were allocated for a short discussion to answer a guiding question and three minutes for the exchange of comments from the virtual and physical venues. This was repeated three times with different questions.
     
    Mashiko actively addressed the onsite participants and encouraged them to comment. Blumbergs and Wahid interacted with the online participants by summarising the views of participants and sometimes adding their own views as well. The instructions were carefully explained to those who came in the middle of the session so that they could enter the discussions smoothly.

    After that, the programme proceeded to three rounds of discussion. Comments shared by participants included, but were not limited to:
    Question 1: When do you feel that your commitment to cybersecurity is creating and sustaining an open, free and secure Internet?
    - Meeting different stakeholders through IGF is my commitment.
    - It is a difficult time to achieve a secure, open and free internet all at once. Practitioners are forced to balance the three on a daily basis. It is necessary to interact with people in countries that are not open.

    Question 2: What international (geo)political issues prevent CSIRTs from an open, free, and secure digital cyberspace in engaging with cybersecurity? If we cooperate, how can we address this?
    - A government CERT feels that the outflow of personnel and data out of the country is a problem.
    - Practitioners in other sectors should be made aware of the environment inside and outside their organisations that discourages information sharing between practitioners.
    - A gaming company's CERT always considers the possibility of internal or user backlash if the vulnerability is made public and the game rules are changed.
    - A university CERT considers whether it makes sense to disclose vulnerabilities outside the stakeholders.

    Question 3: To promote cybersecurity, what is a key message you would like to convey at this IGF which is attended by a wide range of stakeholders?
    - Cybersecurity practitioners are varied: national bodies, academia, and the private sector, covering everything from public infrastructure to product security. All deal with different information in different styles.
    - Cybersecurity practitioners with different perspectives should be interested in sharing what information they deal with in their own activities, and acknowledging what they can and cannot compromise will increase their awareness of participating in cybersecurity discussions.
    - In multi-stakeholder fora, strengthening information sharing and capacity building is often the conclusion of the sessions, but how can it be properly assessed, or what other solutions are there?

    Moderators thanked the participants for raising the issues and reaffirmed the need for such a place for exchanging views. In the end, participants agreed to promote IGF more to cybersecurity practitioners not only to enhance collaboration with other stakeholders but also focus on diversity within the cybersecurity cluster to encourage mutual understanding.

    IGF 2023 WS #62 Data Protection for Next Generation: Putting Children First

    Updated:
    Data Governance & Trust
    Key Takeaways:

    Data governance policies must address the unique needs and vulnerabilities of children in the digital age. This includes promoting transparency, accountability, and the protection of children's digital rights. Children themselves must be given seats at the table and enabled to aid in designing these policies.

    ,

    The debate on age-verification and age-appropriate design must contextualise the socio-cultural backgrounds that each child may come from.

    Calls to Action

    Governments are urged to prioritize the creation of data governance policies that specifically address the unique needs and vulnerabilities of children in the digital age. They should ensure that the policies are designed in collaboration with experts in child development, privacy advocates, and children themselves, with a focus on age-appropriate design and user-friendly language.

    ,

    Tech companies must adopt and enforce age-appropriate design principles in their products and services. This means developing child-friendly terms and conditions, ensuring understandable privacy settings, and creating digital environments that respect children's evolving capacity. The private sector should proactively minimize data collection from children and ensure that the data collected is used only for legitimate purposes.

    Session Report

    The session focused on the need for the early incorporation of children's rights into legislation, the inclusion of children in decision-making processes, and the consideration of their rights from the beginning stages of legislation. The speakers also advocated for the active participation of children in shaping policies that affect their digital lives, arguing that involving children in policy-making processes allows their unique insights and needs to be better addressed. Noting that children are in fact "Internet natives", the speakers emphasized that they may have a better understanding of privacy protection due to growing up in the digital age, which, in turn, challenges the assumption that children are unaware of or unconcerned about their digital privacy.

    Age-Appropriate Design Codes and Data Minimization

    The need for regulatory and educational strategies to protect children's privacy rights was a recurring theme in the session. Age-appropriate design codes, proposed by Professor Sonia Livingstone, aim to ensure that digital products and services respect and protect children's privacy, considering their age and understanding. By focusing on age-appropriate design, these codes acknowledge the connection between privacy and other essential rights of children. The idea is to create digital environments that respect children's evolving capacities and needs, emphasizing that privacy is not a standalone right but intricately linked with their overall well-being. 

    The DotKids Initiative: Promoting Child Safety Online

    The .Kids top-level domain, as presented by Edmon Chung, exemplifies efforts to promote child safety online. This initiative is unique as it aligns with the principles outlined in the Convention on the Rights of the Child. It provides a platform for reporting abuse and restricted content, thereby creating a safer digital space for children. DotKids serves as a notable example of how innovative approaches, such as domain-specific regulations, can be employed to prioritize children's rights and safety in the digital realm.

    USAID's Role in Promoting Digital Innovation

    USAID's significant role in technological innovation and international development cannot be overlooked. With its global reach and vast network across 100 countries, USAID actively supports initiatives that enhance digital literacy, promote data literacy, improve cybersecurity, bridge the gender digital divide, and protect children from digital harm. Their digital strategy, launched in 2020, underscores their commitment to ensuring the development of secure and inclusive digital ecosystems. As part of their efforts, they work towards protecting children's data by raising awareness, aligning teaching methods with educational technology (EdTech), and implementing data governance interventions in the public education sector.

    Balancing the Opportunities and Risks in Digital Tools

    The speakers illuminated how digital tools can both empower and endanger children in the digital environment. While these tools facilitate essential tasks like birth registration, case management, and data analysis, they also expose children to various risks, including cyberbullying, harassment, gender-based violence, and exposure to inappropriate content. The negative consequences of these risks, such as restricted perspectives and impaired critical thinking skills, were noted.

    Educational Initiatives and Data Governance Frameworks

    It was emphasized that awareness, advocacy, and training for data privacy protection are essential to counter these risks. Children's digital education was highlighted as a critical aspect of promoting responsible and ethical digital citizenship. Collaborative efforts between USAID and the Mozilla Foundation to provide ethical computer science education in India and Kenya represent a significant step in this direction. Ethical education helps ensure that future generations of technologists consider the societal and ethical impacts of their work. Furthermore, the integration of children's rights into national data privacy laws was advocated as a vital measure to protect children's privacy and well-being.

    Empowering Youth Advocates for Data Governance

    The importance of empowering youth advocates in the field of data governance and digital rights was also recognized. By engaging and supporting young voices in shaping data governance and digital rights agendas, we can better address the evolving needs and challenges faced by children in the digital era.

    Suggestions

    The discussions acknowledged the tension between the decision-making authority of adults and children's understanding of their best interests. The remedy proposed is to amplify children's voices in the digital society and introduce discussions on digital and data rights in formal education institutions. Recognizing children as stakeholders in internet governance was highlighted, and the need for a children's Internet Governance Forum (IGF) was emphasized. Such a forum can help raise awareness, build capacity, and encourage positive changes in the digital realm, benefiting children worldwide.

     

    IGF 2023 Open Forum #65 Effective Governance for Open Digital Ecosystems

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Nations are at a crossroads, making directional decisions. To mitigate the risks of DPI, protective measures must be designed before digital systems take shape. As countries develop their DPI, the urgency to incorporate DPI safeguards intensifies. If left unchecked, the lack of proper safeguards can result in unprecedented vulnerabilities.

    ,

    Regardless of nomenclature, prioritizing empowerment and safety for individuals should always be at the heart of digital systems. Ensuring a user-centric approach in the digital realm not only fosters trust but also enhances the overall user experience. When systems prioritize individual rights and well-being, they create an environment where innovation and freedom can thrive without compromising security.

    Calls to Action

    There is a need for a global effort to create a universally agreed framework for DPI safeguards.

    ,

    Such an initiative must be meaningfully inclusive and be tailored to the needs of all stakeholders.

    IGF 2023 Day 0 Event #49 Advancing digital inclusion and human-rights:ROAM-X approach

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    IUI remains a unique tool for informing evidence-based, inclusive digital policies. Multistakeholder participation within the ROAM-X framework enhances ownership and collaboration, crucial for tackling complex Internet governance challenges. Given the evolving digital landscape, ongoing revisions of the IUI ROAM-X framework are essential to address emerging challenges and enhance the assessment process.

    Calls to Action

    Stakeholders from the government, academia, civil society, technical community and all the other stakeholders are called upon to actively engage in the IUI assessment process to ensure diverse perspectives are considered in formulating digital policies. Global stakeholders are invited to support the revision of the IUI framework, fostering multistakeholder participation as a driving force for positive change in the digital ecosystem.

    IGF 2023 WS #535 War crimes and gross human rights violations: e-evidence

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Reliance on open-source intelligence as a source of evidence of war crimes and gross human rights violations has quickly become a new feature of recent armed conflicts across the world, especially in the context of Russia’s war of aggression against Ukraine. The overall framework of criminal justice principles, evidentiary rules, and human rights safeguards should still apply and provide guidance on the use of such evidence against offenders.

    ,

    Global standards such as the Budapest Convention on Cybercrime and its 2nd Protocol lay down a framework of regulations and safeguards applicable to open-source intelligence as part of investigations into war crimes and related offences. Implementing these rules and applying them in practice, as well as ensuring international cooperation on e-evidence across borders, are equally important to convert such intelligence into actionable and admissible evidence.

    Calls to Action

    Open-source intelligence and information should be used and applied as part of the overall criminal justice framework, including applicable safeguards and guarantees, whenever such information is used as evidence in criminal proceedings on war crimes or other offences. The use of open-source material as criminal justice evidence is still a learning process for everyone involved, so strong capacity-building action is required to improve and strengthen these skills.

    IGF 2023 Day 0 Event #19 Hack the Digital Divides

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    There are many digital divides addressed by WSA and its global community of digital impact ventures and their founders, be it the gender divide, the access divide or the content divide. We heard concrete examples from young entrepreneurs who address these divides in their local and regional contexts. In order to support this kind of purpose-driven entrepreneur, we need business and funding models other than the Silicon Valley type of growth concept.

    ,

    Independent content producers face huge struggles in making their impact solutions discoverable. 54% of the global marketing budget goes to five US tech companies, and opaque algorithms make it hard to find relevant and useful content. WSA provides a diverse and global showcase of digital best practice solutions supporting the achievement of the UN SDGs: solutions by local entrepreneurs for local challenges, but contributing globally.

    Calls to Action

    Let's focus on the Tunis Action plan and the transformation into a knowledge society.

    ,

    Let's put more action into WSIS action line C9 – media, an area that has radically transformed since 2003 and where the WSIS multi-stakeholder community should put more focus in its review.

    Session Report

    HACK THE DIGITAL DIVIDE by WSA

    Digitalization affects all parts of our lives. It supported children in homeschooling during the pandemic, offers citizen engagement, provides information, and makes us more efficient.
    However, there are challenging digital divides in terms of access, content and gender.

    WSA is a global activity initiated in 2003 in the framework of the UN World Summit on the Information Society, aiming to narrow digital divides and support the transformation into a knowledge society through the promotion of best practice digital innovations and local content for the UN SDGs.
     

    Three WSA winners shared their solutions for narrowing digital divides.
    Matias Rojas, CEO and Founder of SociaLab in Chile, spoke about the centralization of access to funds and networks in urban areas. Talent is equally distributed in society, but resources are not. SociaLab bridges divides between rural areas and cities and supports entrepreneurs in being economically successful while combining this with ecological and social impact.

    Tiffany Tong, Founder and CEO of Aloi, shared how her company bridges divides in financial access through microfinancing tools.

    Aloi is a software platform for digitally monitoring loan expenditure and repayment through verified merchants and deposit points. Like an automatic audit, Aloi tracks financing flows using simple phones and without mobile internet.

    Gloria Mangi won the WSA Young Innovators award more than 10 years ago with the African Queens Project, which aims to bridge the knowledge gap for African women and women in tech by sharing inspirational stories and role models and providing a support community.

    The discussion with the audience present at the IGF was highly engaging, and people shared the different digital divides they face in their countries and communities.

    The panellists discussed the discrepancy between usage and coverage. This led to a discussion about how difficult it is for app developers and content producers to make their solutions visible while dealing with the algorithms of big tech monopolies, and about how to deal with these algorithms in order to access relevant information and high-quality content while avoiding fake news and hate speech.

    The panellists also discussed with the audience the need for new models that fit purpose-driven entrepreneurs. The Silicon Valley model does not work; we need different growth models, focused on impact growth instead of financial growth.
    Impact instead of Exit.
    Collaboration instead of Competition.
    Local innovation instead of Global Disruption.
    Social Franchising and Knowledge Sharing.

    WSA chairperson Prof. Bruck shared that WSIS has lost half of its focus.
    The side of narrowing the digital divides is addressed, but the WSIS process tends to forget about the transformation into a knowledge society. We have technology success, but knowledge failure.
    Hate speech and misinformation are things that we need to take seriously.
    WSIS started pre-SDGs, during the MDG (Millennium Development Goals) agenda; the benefit of the UN SDGs is that they include clear KPIs among their key indicators.
    The Tunis plan of action contains action line C9 – media.
    Media is a completely different landscape today than back in 2003, due to digital platform monopolization.
    54% of the global advertising budget goes to five American companies. Newspapers need to close down, business models fail, and we are experiencing a huge loss in terms of media diversity.

    IGF 2023 Day 0 Event #23 On how to procure/purchase secure by design ICT

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    If the internet and ICT are to become more secure and safer, procurement by large organisations can be a powerful tool that is currently underused. A narrative needs to be developed that will make individuals in decision-taking positions and those in procurement offices take decisions that include ICT security by design.

    Calls to Action

    IS3C announced a consultation on a document containing the most important and urgent internet standards that organisations should demand when procuring ICTs. https://docs.google.com/document/d/1ZC6PBHOREbObHUgopAkPQbIWC_EgLQ8nDyD…

    Session Report

    The Dynamic Coalition on Internet Standards, Security and Safety (IS3C) organised a Day 0 event on procuring ICT products, services and devices that are secure by design.

    In his introduction, IS3C's coordinator Wout de Natris explained how governments and industry can become more secure by design. The most powerful tool any large(r) organisation has over manufacturers and developers is its buying power. When they start to demand that open, security-related internet standards and ICT best practices are built in by design, industry will most likely adhere to this demand. As a trickle-down effect, ICT will become more secure for all users. However, IS3C's research shows that buying power is seldom applied where ICT security is concerned.

    Next, the overarching themes from IS3C's research were presented. They are:

    1. Governments, and most likely other organisations as well, do not use their purchasing power to procure secure ICT and IoT products, services and devices;
    2. There is insufficient cooperation between governments to coordinate on ICT security regulation or advice. This makes it hard for industry to adhere to commonly set standards;
    3. Open standards created by the technical community are not recognised by the vast majority of governments. As a result, the public core of the internet remains unprotected;
    4. Better cooperation between governments will lead to a better protected and safer internet for all;
    5. The lack of a level playing field for industry results in an insecure ICT environment, as products are released on the market insecure by design as a matter of course;
    6. The lack of demand for security by design from society as a whole leaves industry with little incentive to manufacture and develop ICT that is secure by design;
    7. Governments can be the big driver for security by design by procuring according to this principle;
    8. Where cybersecurity is concerned, there is a world to win that currently goes fully unused.

    The floor was opened for questions and comments, but no one present had experience with this way of working.

     

    David Hubermann, chair of IS3C’s WG 8 on DNSSEC and RPKI deployment, explained the importance and uniqueness of open internet standards. For many ordinary things in life, countries or regions have their own standards, e.g. currency, electricity voltage, sockets and plugs, the side of the road people drive on, etc. On the internet, however, the standards are the same everywhere. When these standards were first created, security was not an issue. Since then, security has become a major topic, and the technical community has come up with solutions for the insecurity in the standards. For the DNS, the domain name system, this is a set of security extensions called DNSSEC; for routing on the internet it is RPKI, the Resource Public Key Infrastructure.
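
    To make "deployment" concrete for non-specialists, here is a minimal, illustrative Python sketch (not an IS3C or session artefact; it assumes the third-party dnspython library is installed). It only checks whether a domain publishes DNSKEY records, which is a prerequisite for DNSSEC; a full check would also validate the signature chain of trust.

    # Illustrative sketch only: check whether a zone publishes DNSKEY records,
    # a prerequisite for DNSSEC (this does not validate signatures).
    # Requires the dnspython package: pip install dnspython
    import dns.resolver

    def publishes_dnskey(domain: str) -> bool:
        """Return True if the zone answers with a DNSKEY record set."""
        try:
            answer = dns.resolver.resolve(domain, "DNSKEY")
            return len(answer) > 0
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
            return False

    if __name__ == "__main__":
        for name in ("ietf.org", "example.org"):
            status = "publishes" if publishes_dnskey(name) else "does not publish"
            print(f"{name} {status} DNSKEY records")

    A comparable deployment question exists for RPKI, where network operators publish Route Origin Authorisations that others can validate before accepting routing announcements.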

     

This creates the strange situation that security is within grasp, if only these standards were deployed, mostly by the internet industry. Deployment remains insufficient, leaving everyone on the internet exposed to threats and attacks, including those who have deployed, as the "dark side" can exploit these flaws for its nefarious purposes. The technical community's arguments have mostly focused on the technology itself, and it has become clear that this does not convince people in decision-taking positions to agree to deployment.
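As a concrete illustration of what deployment means for one of these standards, the following is a minimal sketch, assuming the third-party dnspython package and using hypothetical domain names, of how an organisation might spot-check whether a domain publishes DNSKEY records. This is only a rough signal that DNSSEC has been enabled for a zone, not a full validation check, and it is offered purely as an illustration rather than as part of WG 8's work.

```python
# Minimal sketch: check whether a zone publishes DNSKEY records,
# a rough indicator that DNSSEC has been deployed for that domain.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.exception
import dns.resolver


def publishes_dnskey(domain: str) -> bool:
    """Return True if the domain's zone answers with DNSKEY records."""
    try:
        dns.resolver.resolve(domain, "DNSKEY")
        return True
    except dns.exception.DNSException:
        # Covers NXDOMAIN, NoAnswer, timeouts and other lookup failures.
        return False


if __name__ == "__main__":
    # Hypothetical domains, used purely for illustration.
    for name in ["example.com", "example.org"]:
        print(f"{name}: DNSKEY published: {publishes_dnskey(name)}")
```

A check like this only shows that signing material is published; validating the full chain of trust is a separate step that validating resolvers such as Unbound or BIND perform when DNSSEC validation is switched on.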

     

To change this, IS3C's WG 8 has formed a team of experts who will provide advice, and a new set of arguments that move away from the commonly used technical ones, to all who have to convince their managers to deploy. The result is expected in the winter of 2024.

     

People present were invited to join this work and IS3C in general. From the comments received, it was clear that the message came across loud and clear, but not yet how the individuals present could contribute to this challenge.

    IGF 2023 WS #237 Online Linguistic Gender Stereotypes

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

Online linguistic gender stereotypes are a prevalent issue in many regions and affect the futures of many young girls.


    These stereotypes may lead to violence against women.

    Calls to Action

    We need more comprehensive capacity building so the youth are well educated on this topic.


    More conversations need to be held at the regional and international level so that people are aware of the dangers of online linguistic gender stereotypes.

    Session Report

    Executive Summary:

This report synthesizes WS #237, which focused on the profound impact of linguistic gender stereotypes in online spaces, with a particular emphasis on their consequences for youth and marginalized groups. The discussion underscores the pressing need for collective action via a multistakeholder approach to address this issue comprehensively.

    Introduction:

    The conversation revolved around the pervasive presence of linguistic gender stereotypes and hate speech online, primarily affecting women and gender-diverse individuals, highlighting the following key points:

Linguistic Gender Stereotypes and Hate Speech: The discussion opened by addressing online linguistic gender stereotypes and hate speech, emphasizing their grave implications, especially for women and gender-diverse individuals.

    Gender Digital Divide: It was emphasized that linguistic gender stereotypes contribute to a gender digital divide, alienating and subjecting women and gender-diverse individuals to aggressive hate, affecting their self-esteem and participation in online platforms.

    AI and Language Models: The discourse delved into the role of AI, particularly language models like ChatGPT, in perpetuating stereotypes and biases, emphasizing the importance of diverse training data.

    Education and Empathy: The significance of education in addressing these issues was underscored, with a call to teach inclusivity and diversity from a young age to foster empathy and understanding.

    Platform Responsibility: The pivotal role of online platforms in countering hate speech and stereotypes was discussed, with suggestions for improved design, privacy measures, and transparent algorithms.

    Community Engagement: The panelists advocated for more community involvement and open dialogues, recognizing the importance of engaging all stakeholders in addressing these challenges.

    Breaking Stereotypes: Breaking linguistic gender stereotypes was acknowledged as a formidable but essential task, necessitating narrative change, empathy promotion, and targeting younger generations to drive change.

International Legislation: While international legislation was deemed important, it was acknowledged that breaking stereotypes remains an intricate task.

    Support and Recognition: A need for greater support and recognition for those affected by online hate speech, given its impact on mental health and self-esteem, was highlighted.

    Key Recommendations:

    The panelists recognize the urgency of addressing linguistic gender stereotypes and hate speech in online spaces. They propose a multifaceted approach that encompasses education, platform responsibility, community engagement, and breaking stereotypes to create a more inclusive and respectful online environment, particularly for women and gender-diverse individuals. The following recommendations emerged:

Education as the First Step: Emphasis should be placed on education, particularly regarding language's impact on self-worth, with a focus on youth.

    AI and Language Models: The need for more diverse training data for AI, and active dialogue between linguists and the technical community, to ensure inclusivity in internet interactions.

    Platform Design and Multistakeholder Approach: A demand for better platform design and the incorporation of inputs from all stakeholders, following a multistakeholder approach, to tackle these challenges effectively.

    Youth-Centric Education: Educational systems should instill best practices in social media content consumption and production, safeguarding the youth from falling victim to stereotypes.

    Conclusion:

    The discussion unambiguously emphasizes the imperative of combating online linguistic gender stereotypes and hate speech in the digital realm. The proposed strategies, including youth education, diversifying AI training data, improving platform design, and fostering inclusivity, are seen as essential steps toward creating a more equitable and harmonious online environment. By embracing these recommendations, we can hope to bridge the gender digital divide and protect the well-being of youth and marginalized groups in the online world.

     

    IGF 2023 Day 0 Event #76 Can Digital Economy Agreements Limit Internet Fragmentation?

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:
Digital Economy Agreements employ a “modular” architecture that treats issue sets differently based on their technical properties and associated interest configurations, but under an integrative umbrella framework.

The DEAs agreed to date have included provisions encouraging cooperation relevant to Internet fragmentation. They are promising additions to the institutional ecosystem but not a panacea for fragmentation.
    Calls to Action
A wider range of states and stakeholders may wish to explore the use of modular agreements to institutionalize dialogue and cooperation on Internet and digital economy issues, including those of relevance to fragmentation.

Industrialized countries could explore ways to create greater incentives for developing countries to participate in such arrangements.
    Session Report

     

    This session focused on the “modular” architecture in digital economy agreements (or ‘DEAs’) and related institutional innovations, and their potential role in addressing various policy concerns pertaining to internet fragmentation. In particular, the session focused on fragmentation that is governmental in origin, i.e., resulting from government policies that continually impede the exchange of packets between willing endpoints and interfere with the interoperability and uniformity of Internet functions.

DEAs are a recent addition to the digital regulatory framework. Since 2020, governments mostly in the Asia-Pacific region, led by Singapore, have entered into a number of DEAs focusing on digital-only issues. The DEAs negotiated to date are the 2020 Digital Economy Partnership Agreement (‘DEPA’) between Singapore, New Zealand, and Chile; the 2020 Singapore-Australia Digital Economy Agreement (‘SADEA’); the 2022 United Kingdom – Singapore Digital Economy Agreement (‘UKSDEA’); and the 2023 Korea – Singapore Digital Partnership Agreement (‘KSDPA’). Additionally, the 10-member Association of Southeast Asian Nations has launched negotiations for a Digital Economy Framework Agreement, likely to be concluded by 2025.

    DEAs contain disciplines on a broad variety of issues including transparency, supply chains, inclusion, identities, cross-border data flows, forced data localization, online customs duties and the trade treatment of digital products, business and trade facilitation, e-invoicing and certifications, the protection of source code, cybersecurity, consumer protection, privacy and data protection, open government data, standards and interoperability, fintech and e-payments, innovation and regulatory sandboxes, artificial intelligence and support for small and medium-sized firms. They also provide various avenues for engagement and institutionalised dialogues with different stakeholders including private sector, industry setting bodies, civil society organisations, business associations, and academia. 

    A key distinguishing architectural feature of DEAs is the “modular” architecture, which treats issue-areas differently based on their technical properties and associated interest configurations but under an integrative umbrella framework. This modularity allows for flexible treatment of different sets of issues. For instance, for certain traditional areas pertaining to the digital economy, parties can agree to adopt binding rules or soft law norms, while in other emerging areas of digital regulation, parties may choose to engage in trust-building exercises through programs of ongoing dialogue and collaboration to enable future consensus. The latter approach is likely to be a lot more effective in dealing with embryonic areas such as regulation of artificial intelligence, fintech and open data ethics. Additionally, this modular approach, enabling differentiated paths for issue-areas, opens up possibilities for heterodox patterns of engagement and more inclusive multistakeholder participation in ongoing programs. 

    In addition to DEAs, other institutional innovations appear to be moving towards a similar modular architecture. For instance, the European Union has started undertaking a series of Digital Partnerships such as the 2022 EU-Korea Digital Partnership, 2022 Japan-EU Digital Partnership, and the 2023 EU-Singapore Digital Partnership. Although these instruments are not treaties like DEAs, they adopt a modular approach, fostering multi-track/multi-issue collaborations and multistakeholder engagement. Similarly, the work being conducted under the aegis of the EU-US Trade and Technology Council and the Connected Economy pillar of the Indo-Pacific Economic Framework for Prosperity also emulates similar modular features. These instruments also align with other recent initiatives such as the Japanese government’s ‘Data Free Flow with Trust’ approach in the Group of 7 and the Group of 20, including the new Institutional Arrangement for Partnership on DFFT, wherein different working groups will be set up to deal with distinct digital issues. 

The deliberations in this session highlighted the potential role of DEAs, digital partnerships, and other institutional innovations in developing a common baseline stance on different aspects of internet fragmentation by institutionalising dialogues on different policy areas, including barriers to cross-border data flows, data localisation, and variation in internet-based processes and user experience arising from domestic digital regulations. These developments are particularly instrumental because the binding nature of international trade agreements has often prevented the fostering of meaningful consensus on several aspects of cross-border data regulation.

    To date, however, several of these DEAs and partnerships have been formed by relatively like-minded and digitally developed countries. Nonetheless, the panel noted the possibility of extending or scaling up this kind of cooperative framework to a wider group of countries, given their modular, flexible architecture. For instance, these instruments provide different avenues to achieve regulatory interoperability in different areas, undertake joint work programs, and engage the global multistakeholder community in productive working relationships. However, they have inherent limitations given that they are contingent on the political will of the participating countries and have involved very few developing countries and no LDCs to date. More importantly, geopolitical realities and deep ideological divides on certain core aspects of internet and digital regulation are likely to hinder achieving deep consensus on difficult issue-areas. Therefore, the panel concluded that these new instruments are best understood as promising additions to the larger international institutional architecture needed for global Internet and digital governance, rather than a panacea for internet fragmentation. 
     

    IGF 2023 WS #108 A Decade Later-Content creation, access to open information

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

In the ten years since the 2013 IGF panel, where there was concern about content delivery, we have experienced an explosion of content from both professional creators and user-generated sources. Overcoming those concerns revealed a different challenge: we did not foresee the oncoming wave of user-generated content, which opened new paths for creators to produce and ingest content through streaming and new applications.

    Calls to Action

Networks will need to continue to reconfigure themselves to enable all levels of connectivity, including content creation.

    Session Report

    Content Creation, Access to open information Panel at IGF Kyoto 2023

    The Internet has transformed the way we create and consume content. In the past decade, the number of content creators has exploded, and the tools for creating content have become more accessible than ever. This has led to a new golden age of creativity and consumption.

    However, the rise of user-generated content means the networks have had to adjust the dynamic elements of the Internet infrastructure to seamlessly support a wide range of new applications.

    Panelists at the IGF Kyoto 2023 Content Panel discussed the following key points:

    • The Internet has enabled a new era of content creation and consumption.
    • The number of content creators has exploded, and the tools for creating content have become more accessible.
• Network operators seamlessly adjusted to a dynamic Internet infrastructure that supports a wide range of new applications to create and promote commercial and user-generated content.
    • One of the unexpected developments in content creation over the past decade is that "literally everyone is creating content, and the tools that are allowing you to create content have multiplied exponentially."
    • Many different services now allow people to be part of the copyright regime that was once exclusive to commercial entities.

    The panelists also discussed how networks have had to cope with content creation and consumption changes. For example, networks have had to invest in new technologies to support the increased traffic volume and the new types of content being created. Networks have also had to develop new policies and procedures to manage user-generated content and protect content creators' intellectual property rights.

The Content Panel at IGF Kyoto 2023 explored the many challenges and opportunities the Internet presents for content creation and consumption. The Internet has changed forever how we create and consume content. Networks and policymakers will need to keep adapting to a reality of perpetual shifts in software and consumer applications to ensure that the Internet remains a vibrant and open platform for creativity and innovation.

    IGF 2023 Open Forum #72 GC3B: Mainstreaming cyber resilience and development agenda

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:
Cyber capacity building lies at the intersection of cyber and digital development.

More efficient cyber capacity building requires more interaction between these two communities.
    Calls to Action
The GFCE will organise the GC3B conference and put the issue of cyber capacity building higher on the agenda, globally and regionally.

The Accra Call, to be published at the conference, will provide a framework for action on cyber capacity building.
    Session Report

    18th Annual Meeting of the Internet Governance Forum  

Workshop GC3B – “Mainstreaming Cyber Resilience in the Development Agenda”

    October 12th 2023  

    Summary Report   

    Global Forum on Cyber Expertise (GFCE) Secretariat 

     

    Summary 

The workshop on Mainstreaming Cyber Resilience in the Development Agenda was held in view of the upcoming Global Conference on Cyber Capacity Building (GC3B). The aim of the session was to engage the stakeholder community around the outcome document of the Conference, the Accra Call. This Action Framework aims to promote cyber resilience across development agendas, to contextualize cyber capacity building (CCB) within broader development goals, and to engage both the cybersecurity and development communities in committing to effective and sustained action on CCB. Tereza Horejsova, from the Global Forum on Cyber Expertise (GFCE) and on behalf of the co-organizers of the GC3B, organized and moderated the discussion. Participants were asked to share perspectives on the outcome document, guided by the question: what barriers need to be overcome to better connect CCB with development goals, and to elevate the role of development in CCB and vice versa, in ways that lend themselves to particular action items?

    Presenters  

    • Tereza Horejsova – Global Forum on Cyber Expertise (GFCE) 

    • Pua Hunter –  Office of the Prime Minister Cook Islands 

    • Liesyl Franz – United States Department of State’s Bureau of Cyberspace and Digital Policy (CDP) 

    • Chris Painter – President of the GFCE Foundation Board  

    • Allan Cabanlong – GFCE Southeast Asia Hub Director 

    Context 

On 29-30 November 2023, the first global conference on cyber capacity building will take place in Accra, Ghana. Co-organized by the Global Forum on Cyber Expertise, the Cyber Peace Institute, the World Bank, and the World Economic Forum, and hosted by the Government of Ghana, the Conference will convene decision makers from all over the world under the title Cyber Resilience for Development. From high-level government leaders and practitioners, international organizations and academia, development community actors, to experts on cybersecurity and capacity building, as well as the private sector, the Conference aims to connect across sectors and regions to increase cooperation among siloed communities. To that end, the Accra Call – a global framework with concrete actions to support countries in strengthening their cyber resilience – will be made available for endorsement by the multistakeholder community.

     

    Presentation 

    Liesyl Franz, from the United States Bureau of Cyberspace and Digital Policy (CDP), placed United States involvement in cyber capacity building (CCB) projects over the years into the current landscape and noted that growing global demand requires greater coordination of such efforts. To that end, Liesyl noted the GC3B’s unique opportunity  for donor countries, recipients, implementors, private sector and academia, to come together and discuss the current state of CCB given limited global resources and rapid digital growth. The Conference will foster conversations about global demands and distribution, bring together relevant actors and contribute to bridging the gap between cyber and (digital) development groups. In support of this effort, the US has indicated participation through a high-level interagency delegation led by Mr. Fick, the Ambassador of Cyberspace & Digital Policy, to engage with the multistakeholder community on these questions and make meaningful steps towards a global digital future.  

     

Pua Hunter, from the Prime Minister’s Office of the Cook Islands, highlighted regional Pacific efforts and how these in turn reflect the need for coordinated, global action. The region is experiencing considerable momentum around CCB, with nations in the region receiving support from international organizations while leveraging regional and global networks to develop national cyber capacities. However, managing these efforts effectively is crucial to ensuring that the intended benefits of such initiatives are reaped. Previous experiences with development partners reiterate the need to contextualize national as well as regional needs.

Chris Painter reiterated the GFCE’s role in navigating the donor, recipient, and implementor landscape. As the demand for CCB increases, so does the necessity to inspire stakeholders to work towards integrating cyber resilience into international and development agendas. Cyber resilience is integral to the Sustainable Development Goals (SDGs), as these are often undergirded by digital and cybersecurity frameworks. Hence the Accra Call: a working document that aims to mainstream cyber resilience through actionable items.

    The Accra Call for Cyber Resilient Development 

    • An Action Framework built on four key building blocks that draws from existing shared commitments and ongoing relevant efforts in international fora and processes  

    • With an aim to elevate cyber resilience across international and national development agendas as well as promote cyber capacity building that supports broader development goals and effectively serves the needs of developing countries 

    • Entailing concrete actionable items,  followed by recommendations, with a focus on ‘how’ stakeholders can engage and commit 

    •  To serve as a tangible document, a blueprint and a motivation for action and as a voluntary commitment to garner political will for sustained capacity building 

    Participant feedback 

Participants raised several questions related to both the Conference and the outcome document. In response to a concern about the regional focus of the Conference, the GFCE noted the Conference’s theme of connecting regional perspectives to global discussions and resources. Allan Cabanlong from the GFCE Southeast Asia Hub noted that the Conference aims to spark global engagement and inspire all regions to take action; engagement from all over the world is therefore crucial in linking these spheres.

    Participants discussed implementation and training modalities, which influence the effectiveness of CCB projects. Terms from the development community such as continuous education, accountability, and the train-the-trainer concept were highlighted. Awareness of these approaches will affect the absorption capabilities of nations in terms of knowledge and skills retention, as well as institutional development. 

To take stock of progress on the Accra Call, the commitments in the document will serve as a baseline to be continuously assessed and amended as necessary. As for concerns about the rule of law, speakers pointed to the preamble of the Accra Call, the GFCE’s values of a free, open, peaceful and secure digital world, and the efforts made to include stakeholder and development community values, principles, and standards in the document.

    Conclusions 

The Global Conference on Cyber Capacity Building (GC3B) program will reflect the content of the Accra Call, while the Accra Call cements commitment after the Conference. The Conference will thus foster conversation among stakeholders to elevate and bring attention to the urgency surrounding national and regional cyber capacity building. The aim of both is to leverage existing structures for elevating cyber resilience within development. The benefits of these commitments will be reflected in the social contribution that cyber resilience makes, in terms of economic and digital gains that are necessarily undergirded by valuing and committing to cybersecurity and cyber resilience.

    IGF 2023 Town Hall #128 The International Observatory on Information and Democracy

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

A comprehensive understanding of existing knowledge, and the identification of knowledge gaps, was deemed necessary. Aspirations were shared for the Observatory to spotlight approaches from different countries and foster a global perspective (especially from the Global South) through expert collaboration. The aim is to publish a report that delves beyond major headlines, considering nuances across political differences and continents.


    There was an emphasis on the importance of understanding the situation before implementing regulations. It was noted that a general panic to create solutions without a proper understanding of the issue is causing concern. The need to ensure the correct implementation of existing tools was highlighted, underlining the importance of the meta-study approach being proposed by the Observatory to align with the desired policy outcomes.

    Calls to Action

The Observatory is looking for researchers with an academic or civil society background who have produced relevant research on these topics (AI, data governance, media in the digital age). Working groups are expected to start their work in mid-November with the launch of a global call for contributions. The first Observatory report will be published in late 2024.


Collaboration is at the essence of the Observatory, which highlights the need for a multi-stakeholder approach. The Observatory therefore encourages all relevant stakeholders, including private companies, to engage with it in working towards state-of-the-art research on the information and communication space and its impact on democracy, in order to better inform policies and decisions.

    Session Report

Michael Bak, the newly appointed Executive Director of the Forum on Information and Democracy, greeted both the in-person and online audience, expressing gratitude for their participation in the session. The session marked the official launch of the International Observatory's first research cycle and the unveiling, in an interactive panel discussion featuring members of the Steering Committee, of the priority themes for the inaugural Observatory report: AI, media in the digital age, and data governance. Expected in late 2024, this first report will provide a comprehensive synthesis of international academic research addressing critical questions at the nexus of information and democracy. This much-anticipated work fills a critical gap in the global policy architecture by providing a common understanding of the state-of-the-art research and evidence on the impact of technology on our democracies and information ecosystem.

Five members of the Steering Committee highlighted the relevance of the Observatory's work in their regions and exchanged views on the priority themes. Jhalak Kakkar (India), Executive Director of the Centre for Communication Governance at National Law University Delhi, shared her view on the aspects most pressing and urgent to address in the South Asian context. Courtney Radsch (USA), Director of the Center for Journalism and Liberty at the Open Markets Institute, discussed what she considers the most important cross-cutting issues and methodological considerations for the Observatory to address. Jeanette Hofmann (Germany), Research Director at the Alexander von Humboldt Institute for Internet and Society, spoke about high-quality journalism and disinformation. Ansgar Koene (Belgium), Global AI Ethics and Regulatory Leader at EY, underlined the importance of collecting research. Finally, Nnenna Nwakanma (Côte d’Ivoire), Digital Policy, Advocacy and Cooperation Strategist and former Chief Web Advocate at the Web Foundation, addressed the regulation of the online space in a context of evidence scarcity in different regions. The panelists then exchanged views on how different sectors can contribute to the evaluation of the information space and its impact on democracy. They also discussed the OID's contribution to the work of different stakeholders involved in evidencing and finding solutions to the information chaos, and how the OID's meta-research can help inform policy decisions in both the public and private sectors.

Courtney Radsch opened the discussion by stressing the essential need to learn from the evidence that has been collected, especially on the ground, and how it plays out in different regions. She emphasized that the majority of research comes from the global north, so there is a lack of knowledge about how technology and policies, often developed in the EU and US, shape the information ecosystem and the political economy around the world. What is needed is a comprehensive understanding of what is known and where the gaps are in these topics. Jhalak Kakkar also shared her aspiration for the Observatory to spotlight approaches from other countries and reach a global, overarching view through different experts coming together. In this regard, Nnenna Nwakanma envisioned a report that would go beyond the big headlines and look at the nuances, across political differences and continents. She also underlined the importance of responding to actual needs; there is no point in building bridges in the desert.

Nnenna Nwakanma went on to highlight the importance of understanding before regulating. She pointed to the wave of panic that is currently driving the production of regulations, with a focus on coming up with solutions rapidly instead of understanding the matter first. Ansgar Koene echoed her and stressed the need to check whether the tools we already have are implemented correctly. He stressed that a meta-study approach, such as the one the Observatory will take, will help achieve the desired outcomes of the policies we are producing. In this regard, Jeanette Hofmann affirmed the need to stop chasing new regulations every year and instead ground our work in previous research. She agreed with Ansgar and shared her hope that a meta-study will encourage comparative work. Deborah Allen Rogers, an online participant, took the floor at the end and stated the need to challenge the way research is funded and the highly political business model associated with it.

    IGF 2023 Launch / Award Event #9 University Diploma South School on Internet Governance

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

The value of Schools on Internet Governance in understanding the importance of capacity building in this field


The involvement of universities in creating formal diplomas for Internet governance capacity building

    Calls to Action

Future enhancement of the program through partnerships with other universities


    The use of the prepared content for new online education tools focused on Internet governance

    Session Report

The launch of the University Diploma on Internet Governance and Regulations was presented during this session by members of the University of Mendoza, Argentina, and the founders and directors of the South School on Internet Governance (SSIG).

The South School on Internet Governance has established a partnership with the University of Mendoza that allows fellows who have completed all the evaluations of the pre-training and the one-week hybrid SSIG to access a research phase with tutors from the University of Mendoza. Those who complete the research receive a University Diploma in Internet Governance and Regulations.

All these activities are offered at no cost to fellows and in three languages: Spanish, English and Portuguese.

During the launch session, Dr. Olga Cavalli explained the evolution of this partnership with the University of Mendoza and the first pilot experience in 2022.

After that, authorities and professors of the University of Mendoza, Argentina, participated virtually:

    Osvaldo Marianetti, Postgraduate Director of University of Mendoza

Professor Carolina Gonzáles and Professor Mariela Ascensio also shared their comments and support for this new initiative.

On site, Professor Claudio Lucena from Universidad da Paraíba, Brazil, explained the importance the South School on Internet Governance had for him in enhancing his understanding of, and relationships with, experts from the Latin American region.

Mark Datysgeld, a former SSIG fellow who participated as an expert in the 15th edition of the SSIG in 2023, also stressed the importance of the school for capacity building focused on the Internet.

    The session finished with comments from SSIG fellows who were participating on site and virtually.

     

    IGF 2023 Open Forum #6 Development of Cyber capacities in emerging economies

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

- The initiative from the UN to create a "cyber blue helmet" group which, like the blue helmets, would have a cybersecurity mandate to maintain peace in cyberspace.
- The creation of dedicated cybersecurity degree programs in universities, something that is currently scarce and mainly covered by certifications.

    Calls to Action

- Exchange of experience among professionals from the same country and region, recognising that the problem crosses borders.
- The importance for policymakers of understanding certain technical concepts.

    Session Report

This Open Forum focused on the importance of capacity building related to cybersecurity in developing economies.

The Open Forum had both on-site and remote participants.

    On site there were:

    - Mr Christopher Painter, Director of the GFCE Global Forum on Cyber Expertise

- Ms Olga Cavalli, Argentina's Cybersecurity Director

    - Mr Mark Datysgeld, ICANN GNSO Council Member

- Mr Cláudio Lucena, privacy expert from Universidad da Paraíba

    On remote there were:
- Ms Sandy Palma, CEO of Honduras Cibersegura

    - Mr José Cepeda, member of the Parliament of Spain

The dialogue was very active and, in general, all participants agreed on the importance of capacity building in cybersecurity and of raising awareness.

One of the difficulties explained during the presentations is the lack of human resources trained in cybersecurity, which is a global problem but is more acute in developing economies, as many trained professionals go abroad to work in developed countries.

One important aspect mentioned relates to several new regulations that promote the sharing of information about attacks. Sometimes, in order to protect their reputation, organisations do not properly report attacks to CERTs or regulators, making it more difficult to evaluate the impact, understand the type of attack, and learn how to repair the damage. Sharing information helps to stop attacks and allows others to learn from the experience.

Member of Parliament José Cepeda from Spain shared information about an initiative in which he is involved: the creation of a specialised group of "cyber helmets", similar to the UN peacekeeping blue helmets but focused on helping to resolve cyber breaches and cyberattacks at the global level.

Mark Datysgeld explained the importance of technologies such as DNSSEC in lowering risk in the use of the Domain Name System.

Cláudio Lucena and Sandy Palma explained the challenges developing economies face in treating cybersecurity as a high-priority issue and in training and retaining human resources focused on cybersecurity.

Christopher Painter explained the different activities that the GFCE is carrying out around the globe, with a special focus on developing regions, and the positive impact that these activities are having in all countries.

Olga Cavalli moderated the session and informed the audience about a new degree program at Universidad Scalabrini Ortiz in Buenos Aires, Argentina, the first university program in South America entirely focused on cybersecurity.

    There were several comments and questions raised on site and remotely, which were addressed by the panelists.

     

     

    IGF 2023 Town Hall #165 Exploring the Risks and Rewards of Generative AI

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    In response to challenges emerging from GAI, all stakeholders should prioritize responsible AI development, implement robust cybersecurity measures, establish clear accountability mechanisms, and promote international collaboration. Addressing these concerns is crucial to mitigate the potential issues and risks associated with AI in national security and information dissemination.


    The AI digital divide remains a persistent and pressing challenge for GAI, as it may amplify inequities between different global populations.

    Calls to Action

Remain cautious of heuristics used to determine whether content is authentic; as AI technology improves, these will quickly become outdated.


    Focus on encouraging creativity and critical thinking, especially for youth and in civic education. This will help ensure that GAI has a more positive impact on global societies.

    Session Report

    Report on the 2023 IGF Town Hall Session: Exploring the Risks and Rewards of Generative AI

    Introduction

The 18th annual United Nations Internet Governance Forum (IGF), held in Kyoto, Japan, featured a diverse range of sessions, including a Town Hall session entitled “Exploring the Risks and Rewards of Generative AI.” The session addressed the growing interest and concerns related to Generative AI (GAI) and its impact on content creation, ethics, education, cybersecurity, and global equity. The panelists in this session provided valuable insights into the challenges and opportunities associated with GAI.

    Panelists and Moderator

    1. Daniel Castaño - Law Professor from the Universidad Externado de Colombia.
    2. Zoe Darme - Senior Manager at Google.
    3. Brittan Heller - Senior Fellow at the Atlantic Council and Affiliate at the Stanford Cyber Policy Center.
    4. Larry Magid – Panel moderator; Founder and CEO of ConnectSafely
    5. Janice Richardson - Senior Advisor to Insight, a European internet safety education organization.

    Key Discussions and Concerns

    1. Ethical Biases and Trust in GAI:

    Brittan Heller highlighted the risks of conversational AI systems, such as chatbots, inadvertently propagating moral and ethical biases, which can erode trust and lead to public dissatisfaction. She expressed concerns about the lack of accountability in environments where authenticating content or indicating its provenance is challenging, potentially leading to misinformation or bias. Additionally, she warned about malware, cybersecurity risks, foreign interference, deep fakes, and their profound economic and societal consequences.

2. AI in Education:

    Janice Richardson discussed both the opportunities and challenges of AI in K-12 education. She pointed out the potential for AI to enhance personalized learning, overcome disabilities, provide data-based feedback, reduce skill gaps, and automate repetitive tasks. However, she also raised concerns about the geopolitical impact of AI, its potential to exacerbate systemic racism and inequality, and the risk of bias in AI algorithms negatively impacting marginalized communities.

3. Validation and Authenticity Challenges:

    Zoe Darme emphasized the challenges of validating content, noting that common heuristic shortcuts are not always reliable. She highlighted how generative AI can create authentic-looking messages, making it difficult for individuals to discern accuracy and authenticity. She also mentioned the difficulty in identifying content as AI-generated, as both human-created and AI-generated content can be inaccurate or misleading.

4. Global Equity and Digital Divide:

    Daniel Castaño discussed how GAI could affect the digital divide in global majority countries. He pointed out the danger of AI further increasing inequality both within and between countries. Algorithms may favor wealthy nations while negatively impacting people in developing countries.

5. Historical Perspective:

Larry Magid acknowledged the concerns and excitement over GAI and placed them in a historical context. While recognizing that GAI will have both positive and negative consequences, he stressed the need to avoid moral panics and exaggerated fears. He emphasized that the impact of GAI on society remains uncertain, but it is unlikely to pose an “extinction-level threat” comparable to nuclear war, climate change, or pandemics.

    Conclusion

    “Exploring the Risks and Rewards of Generative AI” provided a comprehensive overview of the challenges and opportunities associated with GAI. The panelists’ insights and audience questions highlighted the ethical, educational, validation, and equity-related concerns while also acknowledging the potential benefits of GAI. The discussion underscored the importance of responsible development and use of generative AI to maximize its positive impact and mitigate potential risks. It remains crucial for policymakers, industry experts, and researchers to work together to shape the future of AI in a way that benefits society as a whole while addressing these concerns.

    IGF 2023 Lightning Talk #118 Measuring Gender Digital Inequality in the Global South

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Although there have been improvements in reducing the gender digital divide, progress is still quite slow when considering SDG gender equality targets, especially in the global south. Higher-quality data, both quantitative and qualitative, are imperative for understanding the multidimensional aspects of the gender digital divide.


    We must consider contextual differences when interpreting gender digital divide data across countries. There are challenges in understanding the underlying mechanisms and reasons behind all of the results, but research is ongoing, especially through the EQUALS Global Partnership.

    Calls to Action

    Good-quality, nationally representative data on digital use by gender, including non-binary identities, is needed across countries, especially in the global south. It is important to track historical data to measure progress.


    Findings from data across different countries should be interpreted carefully by considering variations in social context. In addition to quantitative data, qualitative data should be collected for this purpose.

    Session Report

     

    Post-Session Report: Measuring Gender Digital Inequality in the Global South

    Lightning Talks

    October 9, 2023

    Internet Governance Forum, Kyoto, Japan

    At this lightning talk session, three experts, representing diverse regions and experiences, shared their recent research findings on understanding the gender digital divide in the Global South. This session underscored the importance of quality measurement in addressing gender digital inclusion. Through their presentations, the researchers shared strategies and  resources that can be used to implement policies, surveys, and other tools to collect gender ICT data. Although many challenges associated with measuring the gender digital divide are region-specific, it is hoped that this session provided a platform for researchers and practitioners to connect and share knowledge that can be transferred across contexts.

     

    Moderator

    Dr. So Young Kim, KAIST 

     

    Speakers

    Dr. Alison Gillwald, Research ICT Africa

    Dr. Christopher Yoo, University of Pennsylvania

    Dr. Maria Garrido, University of Washington

    Dr. Matias Centeno, National Institute of Agriculture Technology, Argentina

     

    Key takeaways

    1. Although there have been improvements in reducing the gender digital divide, progress is still quite slow when considering SDG gender equality targets, especially in the global south. Higher-quality data, both quantitative and qualitative, are imperative for understanding the multidimensional aspects of the gender digital divide.
    2. We must consider contextual differences when interpreting gender digital divide data across countries. There are challenges in understanding the underlying mechanisms and reasons behind all of the results, but research is ongoing, especially through the EQUALS Global Partnership.

     


    Lightning Talk 1: Gender Digital Inequality in Africa

    Alison Gillwald, PhD

    Executive Director, Research ICT Africa and Professor, University of Cape Town Nelson Mandela School of Governance

    https://researchictafrica.net/ 

     

    Key points

    • Fundamental data is needed for developing evidence-based policy. The After Access Surveys aim to fill the gap in basic data collection across countries in Africa. 
    • Significant gender gaps in Africa reflect and have the potential to exacerbate underlying structural and intersectional inequalities. Women are not a homogenous group. Gender inequalities clearly intersect with inequalities in other segments of the population.
    • It is also important to consider the third-level digital divide. Once online, women also appear more restricted than men in the uses of digital technologies and the benefits they can derive from using them.

     


    Lightning Talk 2: The Impact of Mobile Internet Uptake and Use in Bangladesh and Ghana

    Christopher Yoo, PhD

    John H. Chestnut Professor of Law and Founding Director of Center for Technology, Innovation & Competition, University of Pennsylvania

    https://www.law.upenn.edu/institutes/ctic/ 

     

    Key points

    • Qualitative and quantitative study in Bangladesh and Ghana assessing the impact of mobile internet connectivity on socioeconomic well-being showed the importance of cultural context.
    • Different types of mobile internet use provided more benefits for women’s wellbeing in each country. Overall, women faced greater criticism for their Internet use compared to men, and were more likely to be unhappy with their internet use.
    • Future longitudinal research will examine the impacts of coming online for women in Bangladesh.

     


    Lightning Talk 3: The State of Inclusive Connectivity & Meaningful Access to Information 

    Maria Garrido, PhD

    Principal Research Scientist, Technology & Social Change Group of University of Washington

    https://tascha.uw.edu/ 

     

    Matias Centeno, PhD

    Principal Research Scientist at the National Institute of Agriculture Technology, Argentina

     

    Key points

• The Development and Access to Information (DA2i) dashboards demonstrate how inclusive connectivity and meaningful access to information contribute to development and are embedded across the UN 2030 Agenda (https://da2i-dashboards.org/).
• Between 2015 and 2022, connectivity has improved but still lags behind 2030 targets, gender equity has made slow and weak progress, and civil rights and political freedoms have declined.
• In order to assess progress, we need better-quality data tracked over time.

IGF 2023 Town Hall #83 Empowering Women in Tech: Insights from EQUALS

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Joint initiatives, such as the EQUALS Global Partnership, EQUALS-EU, and the ITU Handbook on Mainstreaming Gender in Digital Policies, can identify best practices across sectors, countries, and contexts.


    Quality data and research are fundamental to our consideration of the gender digital divide and assessing progress over time.

    Calls to Action

    Recruit authors to contribute to the 2024 EQUALS Research Coalition Report.


    Promote and expand partnerships within the EQUALS Global Partnership.

    Session Report

     

    Post-Session Report: Empowering Women in Tech: Insights from EQUALS

    Town Hall

    October 10, 2023

    Internet Governance Forum, Kyoto, Japan

    This Town Hall brought together a panel of partners from the EQUALS Global Partnership for Gender Equality in the Digital Age to highlight the vital role of empowering women in technology education and employment. They highlighted EQUALS initiatives addressing this issue and shared their experiences in enhancing women's digital access, skills, and leadership. The event also touched on future EQUALS plans and ways for participants to engage.

     

    Moderator

    Prof. Dasom Lee, KAIST 

     

    Speakers

    Prof. Michael Best, Georgia Institute of Technology 

    Ms. Tamara Dancheva, GSMA

    Ms. Gitanjali Sah, ITU

    Prof. Moon Choi, KAIST

     

    Key takeaways

    1. Joint initiatives, such as the EQUALS Global Partnership, EQUALS-EU, and the ITU Handbook on Mainstreaming Gender in Digital Policies, can identify best practices across sectors, countries, and contexts. 
    2. Quality data and research are fundamental to our consideration of a gender digital divide and assessing progress over time. 

     


    Presentation 1: Overview of the EQUALS Research Coalition

    Prof. Michael Best, Professor, Sam Nunn School of International Affairs, Georgia Institute of Technology, and Executive Director, Institute for People and Technology 

     

    Key points

• The EQUALS Research Coalition was formed in 2017 by Prof. Michael Best and Dr. Araba Sey. One of the first goals of the Research Coalition was to publish a “tent post” report as a way to synthesize the status of the gender digital divide and drive the conversation going forward. 
    • Through the efforts of the Research Coalition, ITU, Dr. Nancy Hafkin, and many others, the inaugural research report, Taking Stock, was published in 2019. See: Taking Stock

     


    Presentation 2: EQUALS-EU Key Project Outcomes

    Ms. Tamara Dancheva, Senior International Relations Manager, GSMA

     

    Key points

    • EQUALS-EU was developed to build capacity and expand networks for women and girls in social innovation and entrepreneurship. 
    • The key policy recommendations are centered around gender-inclusive innovation standards, such as involving key groups in problem definition, ideation, and solution design; gender-inclusive language use; gender-diverse metrics employed before and during innovation performance; and documenting and archiving the innovation performance. See: EQUALS-EU Policy Brief

     


    Presentation 3: Handbook on Mainstreaming Gender in Digital Policies

    Ms. Gitanjali Sah

    Strategy and Policy Coordinator, ITU

     

    Key points

    • This handbook was developed by ITU to address the gap in materials targeting policymakers on gender-responsive digital policies. It is based on research from 19 different Member States. 
• Key recommendations: including specific objectives relating to gender, women, and girls in key national strategic documents such as digital agendas or national financial inclusion strategies; putting in place programs or projects that specifically address women and girls; and setting gender criteria for the assessment of project proposals.
    • See: Handbook on Mainstreaming Gender in Digital Policies

     


    Presentation 4: Looking forward: Data collection and the next iteration of the EQUALS research report

    Prof. Moon Choi, Head and Associate Professor, KAIST Graduate School of Science and Technology Policy

     

    Key points

    • The next iteration of the EQUALS Research Coalition Annual Report on the gender digital divide will be published in the fall/winter of 2024, with three focuses: Skills, Leadership, and Access. 
    • Contributors can choose whether they will write a chapter (4,000 – 6,000 words) or a case study (1,000 – 1,500 words). 
    • The initial deadline for the statement of interest will be December 1, 2023.

     

    IGF 2023 Day 0 Event #134 Talk with Metaverse residents – a new identity and diversity

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

The Metaverse has great potential to free people from the restrictions of the physical world; people can adopt an appearance and gender different from those they have in the physical world.


Flexibility and plurality in gender often complicate the problem of harassment, which needs new consideration.

    Calls to Action

This was an introductory session and did not call for any specific actions.

    Session Report

    IGF 2023 Day 0 Event #134
    Talk with Metaverse residents – a new identity and diversity Sunday, October 8, Workshop Room 4

Participation:
51 people at peak online, including the host, and 37 people on-site.

    Summary of Session:
The moderator explained that the session was first held at the Japan Internet Governance Forum 2022 and was reworked and proposed for IGF 2023. He expressed appreciation for its acceptance and passed the floor to the first presenter, Virtual Girl Nem, an avatar persona presenting online.

Nem started her presentation by demonstrating the natural movement of her avatar body in the Metaverse, driven by various devices and sensors attached to her physical body. She showed the three avatar bodies she switches between from time to time for various occasions; she can even switch to non-human avatars. She emphasized that the Metaverse removes the limitations of the physical world.

She then introduced several popular VR platforms and their differing characteristics. The number of users has increased rapidly in recent years, and the platforms' shares differ from region to region. She also argued that people who have been freed from physical restrictions by being virtual are gaining a new identity, which she described as "a Revolution of Identity".

Liudmila followed Nem with her presentation. She introduced previous studies on avatars (salazr 2009, AO 2018, Nem 2022, Miyake 2022, Ginga 2022, Hine 2000), which showed that physical gender does not always match the gender of the avatar and that there are many female avatars. She also pointed out that harassment is occurring in the Metaverse and that the gender difference between the physical person and the avatar complicates the situation. Finally, she introduced a joint statement by Nem and Mila, who spoke at this session, on the issue of harassment and their efforts to solve the problem.

    Questions and Answers:
Q: You said that most avatars are female, regardless of the user's real gender. Why is that?
A by Nem: About half of the survey respondents indicate that it is because they prefer female avatars, while the other half feel that female avatars are easier to talk to.

Q: Nem, you live both virtually and in real life; doesn't that confuse you?
A by Nem: Sometimes I get confused, but this is normal. It is becoming like clothing for me.

Q: Many companies are launching metaverse services, but I don't have a good idea of how to make use of them in my life.
A by Nem: It is a tool with many potential uses that should greatly improve your life.
     

    IGF 2023 Day 0 Event #18 EQUAL Global Partnership Research Coalition Annual Meeting

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    A) The next iteration of the EQUALS Research Coalition Annual Report on the gender digital divide will be published in the fall/winter of 2024, with three focuses: Skills, Leadership, and Access.


    B) As a long-term goal, the EQUALS Research Coalition will consider developing a signature scale for assessing gender digital equity.

    Calls to Action

A) Recruit authors to contribute to the 2024 EQUALS Research Coalition Annual Report.


    B) Promote and expand the partnerships of the EQUALS Research Coalition.

    Session Report

     

    Meeting Minutes and Report: 2023 EQUALS Research Coalition Annual Meeting

    The EQUALS Research Coalition convened its annual meeting on October 8, 2023 with participation from 14 members. The primary agenda revolved around the upcoming 2024 EQUALS Research Report, which will be segmented into three key areas: Access, Skills, and Leadership. To facilitate meaningful contributions, members engaged in breakout groups aligned with these focus areas, exploring potential research topics and identifying potential contributors. Finally, the meeting featured a comprehensive large group discussion to strategize the timeline and execution of the research project.

    Key Takeaways

    1. The next iteration of the EQUALS Research Coalition Annual Report on the gender digital divide will be published in the fall/winter of 2024, with three focuses: Skills, Leadership, and Access. 
    2. As a next step, the co-chairs will recruit authors and circulate a statement of interest form that will be due on December 1, 2023. 
    3. As a long-term goal, the EQUALS Research Coalition will consider developing a signature scale for assessing gender digital equity.

    Meeting Minutes

    Access group

    Members: Christopher Yoo, Maria Garrido, Matias Centeno

    Topics discussed:

    • There are many partners to reach out to, such as APC, Georgia Tech, etc. We can give them a platform to talk about the great work they are already doing.
    • Will devote part of the section to forward-looking, more speculative work. For example, thinking about alternative measures of access, like network-based data that we can make inferences from.

    Leadership group

    Members: Moon Choi, Dasom Lee, Pranav Tiwari, Audrey Plonk

    Discussion points:

    • There are different types of leadership (fellowship based and training and e-learning based), as well as various existing statistics used to measure leadership.
    • It would be interesting if we could distill it down to one signature indicator, and then try to go into as much depth as possible.
    • Interested in looking at different types of corporations and geographies.

    Skills group

    Members: So Young Kim, Gayani Hurulle

    Discussion points:

    • There are difficulties in finding commonalities in how digital skills are defined across contexts, especially when we consider more sophisticated technological skill sets related to AI.
    • Possible partners to reach out to include APC-ICT, World Bank, etc.

    Online cluster

    Members: Taylor De Rosa, Ern Chern Khor, Ayanna Samuels, Prossy Kawala

    Discussion points:

    • Discussed the research interests of the two members. Ayanna’s research is focused on digital innovation ecosystems and could contribute to any section. Prossy’s organization, the Center for Media Literacy and Community Development, could contribute a case study on training for youth and women in media and information literacy in Uganda.

    Discussion on timeline

    • Would be good to transition the bi-monthly call to separate calls for each cluster, led by the section leader. For timeline, perhaps we can get the statement of interest, including title, short abstract, and author by November.
    • It would be nice to have an end to end timeline or road map of the process that we can show to possible authors.
    • We should be mindful about giving a generous timeline during the editing process because it can take a long time.

    IGF 2023 WS #235 Leveraging AI to Support Gender Inclusivity

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    There are many ways that AI is already being used to increase gender equality.


    The meeting provided insights into the practical applications of AI in promoting gender inclusivity, while also raising important questions about ethics, biases, and the impact of AI on diverse communities.

    Calls to Action

    There have been multiple efforts that have developed valuable sets of principles around AI and many touch on gender. It's time to move to implementation.

    Session Report

     

    The meeting provided insights into the practical applications of AI in promoting gender inclusivity, while also raising important questions about ethics, biases, and the impact of AI on diverse communities.

    The discussion also highlighted the importance of continuously evaluating AI systems to ensure they behave fairly across different identity groups, and noted that small-scale data can be sufficient to identify biases and take corrective action.
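
    The session did not prescribe a specific evaluation method. As a minimal, hypothetical illustration of how even a small labelled sample can surface group-level disparities, the sketch below computes a simple demographic-parity gap; the group names, data, and interpretation are assumptions for demonstration only.

```python
from collections import defaultdict

# Hypothetical evaluation records: (identity_group, model_output) pairs from a
# small, manually labelled sample. Names and values are illustrative only.
records = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def positive_rate_by_group(records):
    """Return the share of positive model outputs per identity group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, output in records:
        totals[group] += 1
        positives[group] += output
    return {group: positives[group] / totals[group] for group in totals}

rates = positive_rate_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # a large gap flags a potential bias
```

    A gap near zero suggests similar treatment across groups on this sample; a large gap is a prompt for closer review and corrective action, consistent with the session's point that small-scale data can already be informative.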

    Emma Higham and Christian von Essen from Google spoke about how AI is being used to make search results safer and more inclusive. They discussed Google's mission to organize the world's information and ensure it is universally accessible and useful. They emphasized the need to understand user intent and prevent users from encountering explicit or graphic content when it is not relevant to their queries. They explained how AI, such as BERT and MUM, is used to improve search results and address biases in AI systems. They also mentioned using AI to identify and provide assistance for users in crisis.

    Bobina Zulfa, a researcher, discussed the intersection of gender and AI in the context of Africa. She highlighted the challenges faced by African women in accessing and using AI technologies and the need to redefine the notion of "benefit" in technology development. Bobina emphasized the importance of fostering liberatory AI that empowers communities and questioned the impact of AI technologies on privacy and consent.

    Lucia Russo from the OECD discussed the OECD AI principles, which are non-binding guidelines for trustworthy AI development adopted in 2019. These principles include value-based principles promoting sustainable development, human-centered values, transparency, safety, and accountability throughout the AI life cycle, as well as recommendations to governments. The first two principles promote inclusion and reducing inequalities.

    Lucia highlighted various policies and initiatives implemented by different countries to promote gender equality and inclusivity in AI. For instance, in the United States, efforts are made to improve data quality and increase the participation of underrepresented communities in AI and machine learning. The UK promotes inclusivity and equity in AI development through programs like Women in AI and Data Science. The Netherlands and Finland have developed guidelines for non-discriminatory AI systems, especially in the public sector. The OECD also launched a catalog of tools for trustworthy AI, including tools to reduce bias and discrimination.

    Jenna Manhau Fung, from the Youth IGF, shared insights from the youth perspective. She noted that younger generations are generally positive about AI's potential to address gender bias and promote inclusivity. Jenna emphasized the importance of engaging coders in policy-making and the need for international standards to guide AI development. She also mentioned her personal experience with a small-scale writer's content not appearing in Google search results due to certain policies, highlighting the need for inclusivity for all content creators.

    In response to a question about fine-tuning and diverse training data, Google representatives, Christian von Essen and Emma Higham, explained that addressing biases in AI models involves both fine-tuning and improvements in the initial training data. The process is cyclical, and feedback from users plays a crucial role in making the models more inclusive.

    Overall, the conversation addressed both the challenges and opportunities of using AI to promote gender inclusivity and the importance of policies, principles, and independent audits in this context.

     

    IGF 2023 Launch / Award Event #71 Digital Safety and Cyber Security Curriculum

    Updated:
    Key Takeaways:

    The SDSC curriculum is vital in empowering students, parents, and teachers to create a cyber-resilient community. It provides a deep grasp of cybersecurity principles, equipping participants with skills to recognize and mitigate cyber risks. Emphasizing hands-on training, quizzes, and assignments, it offers an interactive learning experience, preparing the community to navigate online threats effectively.


    The SDSC curriculum adopts a holistic approach to digital safety, addressing students, parents, and teachers. It goes beyond technical skills, emphasizing critical thinking, problem-solving, and awareness of cybersecurity trends. Parents are educated on online safety, while teachers are equipped to use technology effectively in education, with a stress on continuous learning. This approach fosters community digital literacy and a responsible digital environment for all.

    Calls to Action

    Play a proactive role in advocating for the integration of the SDSC within schools. Collaborate with educational authorities, school boards, and stakeholders to underscore the importance of digital safety education. Emphasize the curriculum's comprehensive approach and its capacity to empower students, parents, and teachers in navigating the digital world. Seek endorsements and accreditation from pertinent educational bodies for the inclusion of the SDSC.


    Obtain international recognition: take steps to obtain international endorsement for the SDSC curriculum. Showcase how it contributes to creating a safer, more responsible digital environment in developing nations. Advocate for unified efforts across diverse community sectors to strengthen digital sustainability and foster a safer online landscape in the era of artificial intelligence. This aligns with the vision of achieving safe internet use for all.

    Session Report

    As civil society organizations within the IGF community holding ECOSOC consultative status, the Creators Union of Arab and the Arab Media Union have a responsibility to contribute to sustainable development and to stay current with technological advancements. Our recent initiative, "The Digital Safety & Cybersecurity Curriculum", developed with the effective and distinguished contribution of our strategic partner, the HDTC Training Center, and introduced in the Kyoto forum sessions, aligns with these goals by promoting digital safety and responsible Internet use. This approach also advances Sustainable Development Goals 4, 5, 9, 10, 11 and 17.

    This curriculum has a global mission: its primary objective is to enlighten students, parents, and educators about the significance and principles of cybersecurity. The curriculum's essence lies in providing a comprehensive grasp of cyber threats and vulnerabilities, along with the essential knowledge and skills to identify and counter cyber risks effectively.

    The session revolved around a comprehensive presentation of the digital safety and cybersecurity curriculum initiative, which delved into aspects such as the curriculum's specifics, the methods for its implementation, its objectives, and the intended audience. It also shed light on the anticipated outcomes of its successful implementation.

    The website dedicated to the curriculum, and how each target group can use it, was also explained. Through the innovative use of instructional videos and interactive practical training via an online platform, enriched with quizzes and assignments to enhance the educational journey, the curriculum ensures a dynamic and engaging learning experience.

    Distinguished Event Speakers:

    1. Dr. Ahmed Sarhan (nickname Ahmed Nour): he holds the position of President of the Creators Union of Arab and the Arab Media Union. Dr. Nour is an accomplished media and intellectual property expert and researcher. He presented an overview of the initiative and expressed his wholehearted support for its adoption. He emphasized his strong commitment to spreading the curriculum, making it available to a broad spectrum of stakeholders, and conducting workshops to introduce it effectively.
    2. Dr. Nabih AbdelMajid: a professor at the Colleges of Technological Sciences in the United Arab Emirates and the intellectual property owner of the curriculum, bringing a wealth of knowledge and experience to the table. He provided an extensive elucidation of the curriculum, delineating the anticipated outcomes of its implementation and highlighting the profound impact it can have on safeguarding our children. Additionally, he underscored the role of parents and teachers in promoting digital safety and fostering a secure technological environment for everyone.
    3. Dr. Hala Adly Husseien: a prominent figure in the field of blockchain, she is a professor specializing in this area; she also serves as the Secretary-General of Women's Affairs at the Creators Union of Arab and as Secretary-General of the Union of Arab Women Leaders. In her presentation, she examined cybersecurity through the lens of blockchain technology, delving into its pivotal role in realizing safe internet usage. She expounded on blockchain's capacity to securely and transparently trace online activities, underlining its potential to ensure a secure digital environment; the use of blockchain to enhance cybersecurity, and its role in the realm of artificial intelligence, is noteworthy. Cybersecurity is inherently integrated into blockchain technology due to its decentralized nature, which is founded on principles of security, privacy, and trust. This integration brings transparency, cost-efficiency, and heightened security, and also offers rapid implementation (a minimal illustration of the underlying hash-linking mechanism follows below).
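
    The tamper-evident traceability Dr. Husseien described rests on hash-linking: each record commits to the hash of the one before it, so any later edit becomes detectable. The sketch below is a minimal, generic illustration of that mechanism (a hash-chained activity log); it is not a description of any specific blockchain platform, and the record contents are hypothetical.

```python
import hashlib
import json
import time

def _digest(body: dict) -> str:
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(chain: list, activity: str) -> None:
    """Append an activity record linked to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"timestamp": time.time(), "activity": activity, "prev_hash": prev_hash}
    chain.append({**body, "hash": _digest(body)})

def verify(chain: list) -> bool:
    """Recompute every link; any edited block breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["prev_hash"] != expected_prev or block["hash"] != _digest(body):
            return False
    return True

log = []
append(log, "user login")
append(log, "file accessed")
print(verify(log))                 # True: the chain is intact
log[0]["activity"] = "edited"
print(verify(log))                 # False: tampering is detectable
```

    In a real blockchain, this same linking is combined with decentralized replication and consensus, which is what removes the need to trust any single record keeper.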

    On-Site Moderator:

    Dr. Nermin Selim: the Secretary-General of the Creators Union of Arab, and also an intellectual property expert and researcher, further enriching the discourse with her expertise. She posed several inquiries regarding the curriculum's efficacy in educating parents and its potential to facilitate child monitoring while maintaining children's self-confidence.

    Following the comprehensive presentation of the curriculum, Dr. Hala raised questions about its adoption and suggested its inclusion in the forthcoming meeting of the specialized federations of the League of Arab States. Dr. Nabih responded by confirming that the curriculum had received approval from the Knowledge Authority in Dubai.

    Dr. Ahmed Nour further emphasized that efforts were underway to disseminate it across various educational organizations based on the recommendation of the IGF platform.

    Moreover, during the session, a master's student from Nepal posed an inquiry concerning the prevention of privacy violations. Dr. Nabih addressed this concern by explaining that this curriculum equips students with the knowledge to detect privacy infringements and take control of their privacy. It is purposefully designed to prioritize user security.

    Following the conclusion of the session, a UNESCO delegate reached out to the event organizers, expressing interest in arranging a meeting to delve deeper into the curriculum's intricacies and explore avenues for its implementation.

    The primary topics addressed were:

    • The complexities and risks within the digital landscape.
    • The Details of "The Digital Safety & Cybersecurity Curriculum."
    • Raising awareness and proposing potential solutions to mitigate digital risks.
    • Examining blockchain's role in promoting safe internet use.
    • Emphasizing the incorporation of education into ensuring digital safety for our children, with active involvement from parents and teachers based on solid scientific principles.

    At this launch event, our speakers emphasized the vital importance of creating a safe online space for children and providing parents with insights into their digital interactions.

    Our goal is to see this curriculum adopted by numerous educational institutions, ensuring the protection of our children and establishing a secure internet framework. It's worth noting that the program is technically prepared for immediate use.

    The Outcomes we aim to achieve

    • Empowering students, parents, and teachers to build a cyber-resilient community.
    • Building a comprehensive understanding of cybersecurity principles and practices, the curriculum prepares participants with the knowledge and skills necessary to identify and mitigate cyber risks.
    • The emphasis on practical training, quizzes, and assignments ensures an interactive learning experience, educating a community that is well-prepared to navigate the evolving landscape of online threats.
    • Focuses on developing critical thinking, problem-solving abilities, and staying abreast of the latest cybersecurity trends.
    • Prepares teachers to integrate technology effectively into classrooms while emphasizing the importance of continuous learning.
    • Promoting a secure and healthy environment for all members of the community.

    Key Recommendations

    1. Advocate for SDSC Implementation
    • Take an active stance in promoting the integration of the Student Digital Safety Certification (SDSC) curriculum within educational institutions.
    • Engage with educational authorities, school boards, and stakeholders to emphasize the importance of digital safety education.
    • Highlight the curriculum's comprehensive approach and its potential to empower students, parents, and educators in navigating the digital realm.
    • Solicit support and accreditation from pertinent educational bodies for the incorporation of SDSC into school curricula.
    2. Seek Endorsement from International Organizations
    • Initiate efforts to secure endorsement for the SDSC curriculum from international organizations, underscoring its role in fostering a safer and more responsible digital environment, particularly in developing nations.
    • Advocate for a collaborative effort across diverse community sectors to bolster digital sustainability endeavors and promote online safety, especially in the context of the artificial intelligence revolution and the overarching objective of ensuring safe internet use for all, aligned with "The Internet we want - Empowering all people".
    3. Recommend Curriculum Implementation
    • Your esteemed platform, the IGF, is encouraged to endorse the importance of this initiative and work towards its implementation through various educational institutions and relevant ministries. The ultimate goal is to establish a secure digital environment for our children.

     

    Last but not least, the IGF stands as the paramount global platform for charting the world's technological landscape. We have seized this opportunity to make it our medium for conveying our message to all involved parties and to fulfill the overarching theme of the global IGF, "The Internet we want - Empowering all people".

    IGF 2023 Open Forum #46 IGF to GDC- An Equitable Framework for Developing Countries

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    The GDC is an opportunity for SIDS to get involved and shape what the process should look like for these countries. It offers the opportunity to present a unified voice for SIDS, extricated from the dominance of the Global North and larger developed countries. The GDC presents a space to pool internet governance issues and to address gaps in other processes, capacity, and fragmented governance.

    Calls to Action

    The CTU commits to drafting correspondence, after consultation with SIDS, to be presented to the UN IGF MAG Chair for onward presentation to the Leadership Panel. This will outline the improvements, gaps, needs, and mitigation strategies that SIDS advocate should be included in IGF processes.

    IGF 2023 Open Forum #23 A bottom-up approach: IG processes and multistakeholderism

    Updated:
    Key Takeaways:

    Multistakeholder processes have seen success and increasing use over the last two decades, but not all processes branded as multistakeholder have been meaningfully inclusive. Those engaging in the WSIS+20 review process, particularly member states, must go beyond referencing the importance of multistakeholderism and shape a process that includes stakeholder mapping, welcomes diverse participation, and understands distinctions between different types of expertise.

    Calls to Action

    Preparation will be key for WSIS+20 Review process to achieve meaningful multistakeholder participation. This must consider how to create multiple channels for stakeholder input and discussion, including holding regional meetings to prepare, which should reflect local contexts and regional priorities. Preparations should also leverage mentorship and sponsorship to empower and amplify participation from those new to the IG space


    We must look creatively at our resources to ensure an inclusive WSIS+20 Review process. This ranges from large scale activities where appropriate, such as concerted efforts from UN bodies to outreach to a diverse range of people using new and creative methods to raise awareness, all the way to individual actions from stakeholders to facilitate inclusion, like organisations convening groups of stakeholders to prepare for and participate in WSIS+20

    Session Report

    Session context

    The UK’s Open Forum articulated why the multistakeholder model is essential to Internet governance, highlighted the World Summit on the Information Society (WSIS) +20 Review process as an important UN process for the multistakeholder community to engage with, and asked participants to brainstorm ways to make the WSIS+20 Review process fully inclusive of multistakeholder participation. The forum built on the UK’s IGF 2022 Lightning Talk ‘Renewing the WSIS mandate: the Internet and its governance beyond 2025’, the recording of which can be found on YouTube.

    Summary

    Participants explained that multistakeholder processes involving Internet governance have seen success and increasing use over the last two decades, but not all processes branded as multistakeholder have been meaningfully inclusive. Indeed, sometimes the word multistakeholder is used inauthentically, to describe processes that are not meaningfully inclusive, in order to capitalise on the positive and trusted reputation of ‘multistakeholder’ processes as a gold standard for consultation. To address this, participants stated that those engaging in the WSIS+20 review process, particularly member states, must go beyond referencing the importance of multistakeholderism, and shape a process that includes stakeholder mapping, welcomes diverse participation and understands distinctions between different types of expertise.

    In addition, the WSIS process established a clear recognition of different stakeholder groups. Participants remarked that, since 2003, groups of Internet governance stakeholders have not proven to be fixed over time, and that these groups are not homogenous. Looking forward, to have fully inclusive processes, the nuances of relevant stakeholder groups need to be recognised, accommodated and welcomed.

    Practical barriers to participation were raised, including the costs of engaging, visa requirements, and insufficient notice of timelines. Participants noted that mentorship and sponsorship schemes could help address these points and encourage newcomers to become involved with the WSIS+20 review process.

    Participants agreed that preparation will be key for WSIS+20 Review process. As such:

    • We must consider how to create multiple channels for stakeholder input and discussion, including holding regional meetings to prepare, which should reflect local contexts and regional priorities. Preparations should also leverage mentorship and sponsorship to empower and amplify participation from those new to the Internet governance space.
    • We must all look creatively at our resources to ensure an inclusive WSIS+20 Review process. This ranges from large scale activities where appropriate, such as concerted efforts from UN bodies to outreach to a diverse range of people using new and creative methods to raise awareness, all the way to individual actions from stakeholders to facilitate inclusion, like organisations convening groups of stakeholders to prepare for and participate in WSIS+20. 

    Next steps

    The 2024 Commission on Science and Technology for Development (CSTD) is expected to set out timelines and milestones for the WSIS+20 Review process that will include opportunities for multistakeholder participation. A Report for the WSIS+20 process will be written for the CSTD looking ahead to WSIS post 2025. Governments in particular will need to create opportunities for meaningful multistakeholder participation as part of this timeline, and will need to raise awareness of opportunities to participate across stakeholder groups.

    IGF 2023 Launch / Award Event #156 Net neutrality & Covid-19: trends in LAC and Asia Pacific

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

    -Most of the panelists agree on the importance of net neutrality to safeguard an open and equal Internet and to support the operability of digital trade.


    -Certain elements accompanying the principle of net neutrality were highlighted, such as dispute resolution and the rights of internet users and consumers.

    Calls to Action

    -Consideration should be given to how to promote more active monitoring of compliance with the principle of net neutrality.


    -The Pacific Alliance showed interest in this session and requested a presentation on net neutrality in order to discuss further developments in the framework of the Subcommittee on Digital Economy (SCED) of the Pacific Alliance meeting in November 2023.

    Session Report

    This session, organized by the University of Chile, achieved multistakeholder participation with a special focus on Global South perspectives on the principle of net neutrality. It had wide participation from academia, government, the technical community, and civil society.

    In this regard, participants were able to learn about the experience of both the members of the Pacific Alliance and the regional integration forum itself.

    Panelists discussed Chile's experience as the first country in the world to legislate the principle of net neutrality; Peru, Colombia, and Mexico then followed the trend, establishing the legal presence of the principle within the Alliance.

    Subsequently, the incorporation of net neutrality in the Pacific Alliance Trade Protocol in its Telecommunications Chapter was discussed, establishing an important precedent in the area of public international law and international economic law. 

    Panelists from academia (Chile), the technical community (Brazil), government (Chile and Argentina) and civil society (Perú) discussed the importance and challenges of the principle, mainly in terms of protecting users' rights and the necessary monitoring of compliance.

    IGF 2023 Launch / Award Event #187 Digital sovereignty in Brazil: for what and for whom?

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

    The research to date shows that it is possible to identify different notions related to the expression “digital sovereignty”, even though the expression does not always appear explicitly in public documents or speeches.


    Internet fragmentation is one of the biggest concerns nowadays, and digital sovereignty is one of its potential causes.

    Calls to Action

    There are no shared notions or definitions of what sovereignty in the digital sphere would mean, which justifies the attempt to map the narrative on digital sovereignty.


    The audience raised questions related to understanding if the expression “digital sovereignty” makes sense, and concerns regarding the potential impacts that the adoption of different definitions of sovereignty could have on the Internet’s operating model.

    Session Report

     

    Objective of the session: 

    The session aimed to present the preliminary results of a research project resulting from a partnership between CEPI FGV Direito SP and the Brazilian Chapter of Internet Society (ISOC Brazil), and partly funded by Internet Society Foundation. The project seeks to conceptually and empirically identify how digital sovereignty notions are constructed based on Brazilian stakeholders' narratives from different sectors in public documents, taking into account legal, social, economic, and political implications connecting the local, regional, and global levels. 

     

    Presentations

    Raquel Gatto mentioned that the biggest threat to the Internet is the phenomenon of the splinternet - the fragmentation or division of the global network, in which the Internet's common protocols are no longer used. This can derive from different approaches, such as those related to infrastructural challenges, technical challenges, and matters of national security, among others, including the facet of digital sovereignty, the main subject studied in the research project.

     

    Flávio Wagner noted that the specific context of Brazil is that of a country that has already implemented different Internet regulations, such as the Internet Bill of Rights (Marco Civil da Internet) and the Brazilian General Data Protection Law (LGPD), and is discussing new regulations related to AI, misinformation, digital platforms, and cybersecurity, among others. Aspects related to sovereignty are being used as justification for the creation of these norms and other public policies, but there are no shared notions or definitions of what sovereignty in the digital sphere would mean, which justifies this attempt to map the narrative on digital sovereignty.

     

    Ana Paula Camelo presented the methodology used in the research, as well as the activities currently being carried out, such as the mapping of public documents from different stakeholders that contain notions of digital sovereignty, interviews, and the elaboration of a course on the theme. Ana Paula also highlighted some of the preliminary findings of the research: (i) in the Brazilian context, written documents do not specifically use the term “digital sovereignty”; the research team therefore decided to look for broader notions that in some way explore the relationship between sovereignty and the digital environment, the Internet, regulations, and discussions about network fragmentation; (ii) diverse understandings are at stake when discussing digital sovereignty (such as self-determination, the power to regulate, and national security), as well as different perspectives (such as political, legal, and technical lenses).

     

    Questions and debates

    The audience shared some questions and concerns with the speakers. The highlights are: 

    (i) try to take a historical approach to Brazil’s prior investments in digital sovereignty (such as the production of technological equipment and the development of open source code), to understand whether that past correlates with current developments.

    (ii) in addition to discussing the notions attached to the expression “digital sovereignty”, it is important to also question the term itself, to understand whether it makes sense or whether it is just used as a buzzword. It is important to discuss the term because no country is absolutely independent of others in every matter.

    (iii) try to understand the different impacts that the use of the term “sovereignty” can have on the digital sphere, especially considering that some discourses and practices relating to the “exclusive” character of the political and legal senses of sovereignty can undermine fundamental aspects of the functioning of the Internet.

    The speakers and members of the research project will take the comments and questions raised into consideration in the further development of the project. 

    IGF 2023 Lightning Talk #103 Strengthening Cybersecurity for a Resilient Digital Society

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Takeaway: 1) Provide easy and understandable ways for smartphone users to practice secure behaviours. 2) Provide education and awareness for smartphone users to enhance their privacy and security practices.

    Calls to Action

    1) To support a resilient digital society, a public-private partnership is suggested: through policy, and by providing technical controls with good UX, it should become easy to practice secure smartphone behaviours on people’s devices, mitigating the increase in threats. 2) To support citizens of a digital society, a multi-stakeholder partnership between government, business, academia and civil society is suggested.

    Session Report

    In the lightning talk “Strengthening Cybersecurity for a Resilient Digital Society – Opportunities to Increase Smartphone Privacy and Security”, focusing on SDGs 9 & 17, we presented: 

    • Smartphone’s Role in a Digital Society 

    • Smartphone Privacy Risks 

    • The Evolving Smartphone Cyber Threat Landscape 

    • Promoting Safer Behaviours and Practices for a Cyber Resilient Society 

    At the beginning of the presentation, the audience was asked to reflect on what they think is the most significant risk of using a smartphone. The increased global presence of smartphones was illustrated with information from Statista (2023): there are 6.5 billion smartphone users, meaning that 68% of people worldwide have a smartphone. Smartphone usage has also changed, with 80 apps per user on average (DataProt, 2023); 6 out of 10 smartphone users choose finance apps over websites (Google, 2016), and 73% of online shopping comes from smartphone devices (DemandSage, 2023). The mobile industry currently contributes $5.2 trillion to global GDP (GSMA, 2023).

    Smartphones thus bring many benefits, from enabling citizens in mobile-first countries and other places, to offering 24/7 connectivity as a versatile all-in-one product. The benefits range from increased access and efficiency and a reduced need to travel, to enabling commerce, payments, learning and photography.

    At the same time, if we look at the threat landscape based on data from 2022, we can see that it is rising: cyber criminals are starting to focus on mobile devices; malware for smartphones is everywhere, including advanced malware that may be used to target specific high-value individuals; there is an increase in vulnerable mobile devices that lead to security incidents; and misconfigurations of app back-ends expose users' personal data.

    The risks to smartphone users include data breaches, data leakage, identity theft, device loss, privacy violations and surveillance. Based on a report from the US (Lookout, 2014), the consequences of phone theft included 12% of victims experiencing fraudulent charges to their accounts, 9% having their identity stolen, 47% reporting a loss of time and productivity, and 10% reporting a loss of confidential company data.

    Thus, digital societies must build resiliency by raising cybersecurity and privacy practices. 

    To fully understand why this matters, we asked the audience to consider privacy risks, explaining how apps may collect personal data and connect it to individual behaviours, and how users often do not know what data is leaving their devices, even when nothing malicious is intended.

    We shared information about the evolving smartphone cyber threat landscape, showing that both the ratio of exposed mobile devices and the ratio of successful attacks have increased in recent years. This is worrying, considering all the possible vulnerable data assets stored on smartphones (passwords and other credentials, location history, sensitive documents, personal medical information, communication with assistant apps, and phone call history). 

    We shared an example of how a smartphone cyber-attack can be conducted using smishing and malware, as in the case of the SpyNote campaign. The phishing attack was carried out through mobile text messages or email containing a malicious URL pointing to a malware file. Once the user downloads and installs the file, the device is infected, and the attacker can explore or steal data or even control the whole device.
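
    Because the attack chain starts with a link the user is tricked into opening, simple client-side heuristics can already flag many suspicious URLs before they are tapped. The sketch below is a minimal, hypothetical illustration of such heuristics; the pattern list and thresholds are assumptions for demonstration, not the detection logic of any real product or the SpyNote campaign's actual indicators.

```python
import re

# Assumed examples of risky top-level domains, for illustration only.
SUSPICIOUS_TLDS = (".zip", ".xyz", ".top")
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def flag_suspicious_urls(message: str) -> list:
    """Return URLs in a text message that match simple risk heuristics."""
    flagged = []
    for url in URL_PATTERN.findall(message):
        host = url.split("/")[2].lower()
        risky = (
            host.endswith(SUSPICIOUS_TLDS)                 # unusual TLD
            or re.fullmatch(r"[\d.]+", host) is not None   # raw IP address as host
            or host.count("-") >= 3                        # long look-alike domain
        )
        if risky:
            flagged.append(url)
    return flagged

sms = "Your parcel is on hold. Pay the fee at http://192.168.4.7/pay now"
print(flag_suspicious_urls(sms))   # ['http://192.168.4.7/pay']
```

    In practice such checks would be one layer among others (URL reputation services, platform install restrictions, user education), but they illustrate the kind of easy, understandable technical control the talk calls for.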

    We propose a multi-stakeholder partnership approach to promote safer smartphone behaviours and practices. Governments and industry can support this by ensuring that devices come with technical controls that are easy to understand and use by smartphone users. A multi-stakeholder partnership between the education sector, industry and civil society can support increasing formal education and awareness and influence society to gain safer security and privacy behaviours and practices.

    While creating safety measures for a smartphone and digital society may be more complex, we must start. Governments can mandate the industry and education systems to play an active role in building a digital community. The mobile industry (manufacturers and network providers) can further promote their brand value by developing and providing easy privacy and security functionality and education.

    In summary, we can mitigate the many smartphone threats by using a multi-stakeholder partnership and partnering for the goals (SDG 17). This will allow us to protect the development of a sustainable and resilient digital infrastructure globally (SDG 9). There was active engagement during the presentation and the audience provided thought-provoking questions during the Q&A session which truly enriched the discussion. The diverse perspectives and regional insights that were shared were invaluable, and we are thankful for everyone’s contributions. As we move forward, we hope that these discussions and insights will be brought into our daily lives and that we can work together to make the digital world a safer and more secure place for all. 

     

    References: 

    DataProt https://dataprot.net/statistics/how-many-apps-does-the-average-person-have/  

    DemandSage https://transition.fcc.gov/cgb/events/Lookout-phone-theft-in-america.pdf  

    Google https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/finance-app-user-statistics/  

    GSMA https://www.gsma.com/mobileeconomy/wp-content/uploads/2023/03/270223-The-Mobile-Economy-2023.pdf  

    Lookout https://transition.fcc.gov/cgb/events/Lookout-phone-theft-in-america.pdf  

    Statista https://www.statista.com/forecasts/1143723/smartphone-users-in-the-world 

    IGF 2023 Launch / Award Event #61 Book presentation: “Youth Atlas (Second edition)”

    Updated:
    Key Takeaways:

    The Youth Atlas highlights the importance of youth participation in the Internet Governance Forum (IGF) and underlines the need to collect data and statistics that showcase the active engagement of young people worldwide. It also discusses the collaborative efforts of volunteers to give visibility to youth initiatives and the significance of maintaining the involvement of young individuals within the IGF ecosystem.


    The IGF Berlin in 2019, where the first edition was released, was a pivotal moment, serving as a mechanism for youth to join high-level discussions and decision-making processes and underscoring the importance of meaningful youth participation in internet governance spaces.

    Calls to Action

    Give Visibility: Provide visibility to the experiences and contributions of young people within the IGF ecosystem, especially indigenous people, people with disabilities, and underrepresented communities, highlighting their involvement and initiatives on a global scale.


    Work for Youth, by Youth - Nothing without Youth: this underlines the importance of youth participation and collaboration within the IGF. It calls for youth to actively engage, learn, and work collectively to create influence and drive positive change within the IGF.

    Session Report

    The launch of the Youth Atlas 2.0 is a big moment for recognizing the input of young people around the world. This new version shows how dedicated they are to shaping the digital world for a better tomorrow.

     

    The Youth Atlas 2.0 is not just a storybook, but a compendium of valuable insights from young people around the world. Their experiences and histories are intertwined to create a tapestry that serves as a tribute to their involvement in various fields. They are the minds that shape our digital world now and in the future.

     

    The guide explores important topics, such as data and statistics. The Youth Atlas 2.0 is a valuable resource of data and statistics, offering insights into the young population's contributions, achievements, and potential in the digital domain.

     

    This edition examines the experiences of youth who participated in the Internet Governance Forum (IGF) or fellowship programs, showcasing their transformation from newcomers to more veteran participants.

     

    The text also includes information on programs and initiatives for young people. The Youth Atlas 2.0 highlights how crucial youth-oriented programs and initiatives are by showcasing numerous ways that young people actively engage in the IGF ecosystem.

     

    This collaborative effort was orchestrated by dedicated volunteers who invested their time and energy in it. The objective is to increase the visibility of youth involvement and motivate more young people to join youth initiatives.

     

    This project stands as a testament to international cooperation. It is the result of many people working together to support a global initiative. It highlights the importance of active participation in the IGF, rather than just watching from the sidelines. Through this project, we aim to shape discussions and decisions, making a valuable contribution to the broader conversation. One of the special characteristics of the book is its transmedia nature: it provides insights into the views of young people on Internet Governance through video interviews accessible via QR codes.

     

    The Youth Track became a pivotal point in 2019 during the IGF Berlin, as it gave young people a voice and integrated their perspectives into the conversation. The collaboration among young people eager to contribute to important conversations and decision-making has been very valuable. It can be difficult to find and connect with like-minded individuals worldwide, but the committed youth coordinators have made the process much simpler and easier to follow. For example, since 2021 the Youth Summit has been a platform for senior figures and youth to talk and to empower the younger generation at regional IGFs and at the IGF itself. This amplifies young voices as partners, creating a hopeful and confident sentiment as long as youth empowerment is prioritized.

     

    Another result of these youth processes is the new Teen Dynamic Coalition, formed in collaboration with youth in 2023 as an outcome of the Youth Summit 2022 in Addis Ababa, demonstrating the power of collective efforts.

     

    What constitutes meaningful participation in the IGF remains a pressing question. It's not just sitting at the table; it's shaping the structure and content of the digital world. Joining in isn't a one-time thing; it's a continuous process that involves learning and contribution. NRIs play a critical role in getting started.

     

    The message at the heart of it all is clear – the IGF must serve the interests of youth by working for them. It's a team effort, sharing knowledge and working together to have a say in the IGF. Including more voices, like those from indigenous populations, marginalized communities, and people with disabilities, is crucial to creating a digital future that is fair and inclusive.

     

    The Youth Atlas 2.0 shows us how important it is to listen to young people. They deserve recognition for their commitment, dedication, and contributions to making the digital era a better place.

    IGF 2023 Day 0 Event #51 Shaping AI to ensure Respect for Human Rights and Democracy

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    AI technologies have very serious potential to help humanity protect the environment and fight climate change, and indeed in many corners of the world this work is already underway.


    At the same time, any risks and potential adverse effects for climate and environment in connection with the design, development, use and decommissioning of AI systems should be carefully studied and adequately addressed in the work of national and international institutions currently involved in AI governance, with wide and adequate participation of all relevant stakeholders.

    Calls to Action

    The global character of AI technologies requires not only a national but also an international legal response.


    Any emerging standards should be conducive to innovation, contain sufficient indications regarding possible risks and recommend effective processes to tackle such risks.

    Session Report

    On 8 October 2023 (2 to 3.30 PM) the Council of Europe organised a Day 0 session on “Shaping artificial intelligence to ensure respect for human rights and democratic values” opened by its Deputy Secretary General Bjørn BERGE. The Chair of the Committee on Artificial Intelligence Ambassador Thomas SCHNEIDER moderated the panel, which included Ivana BARTOLETTI (WIPRO, the Women Leading in AI Network), Francesca ROSSI (IBM Fellow and AI Ethics Global Leader), Merve HICKOK (CAIDP), Daniel CASTAÑO PARRA (Universidad Externado de Colombia) participating on-line and Dr Liming ZHU (CSIRO of Australia) and Professor Arisa EMA (University of Tokyo and RIKEN Center for Advanced Intelligence Project and member of the Japanese Delegation in the CAI) attending in-person.

    The discussion connected the in-person and online participants, with the panel representing different stakeholder groups (domestic and international standard setters, industry and academia). There was general agreement that the extraordinary benefits of AI technologies should serve humanity, making life better for everyone. However, this important progress should not come at the price of human rights or democratic values. The panelists and the public were of the view that, at an international level, it made sense to develop a common, legally binding approach to the basic principles that should govern how AI is designed, developed, used and decommissioned. Such an approach should be based on existing and widely accepted international human rights standards. Individual presentations by the panelists were followed by a lively discussion involving both in-person and online participants.

    IGF 2023 WS #483 Future-Ready Education: Enhancing Accessibility & Building

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Digital education has the potential to transform the lives of millions of people in Latin America & the Caribbean. To achieve this, governments and organizations can: expand broadband access to schools and other educational institutions, especially in rural and underserved areas; provide affordable devices and connectivity to students and teachers; and train teachers and administrators on digital tools and resources.

    IGF 2023 Networking Session #153 Generative AI and Synthetic Realities: Design and Governance

    Updated:
    AI & Emerging Technologies
    Key Takeaways:
    "Understanding the training data of generative AI systems is crucial.", "Further investigation is needed into how people interact with generative AIs and their potential consequences."
    Calls to Action

    "Encourage further research in the field of Human-Computer Interaction (HCI) applied to generative AI."

    ,

    "Explore methods to prevent cybercrimes using deep fakes."

    Session Report

    The meeting focused on discussing Generative AI, with an emphasis on platforms' ability to interact with humans and provide relevant answers to their questions. There was a discussion about the role of children in teaching robots, interface humanization, challenges faced with AI usage, and its capabilities. Additionally, concerns regarding safety, regulation, transparency, and ethics in AI use were addressed, especially in contexts such as small businesses and potentially deceptive situations.

    Topics Discussed

    Generative AI and Interactivity:

    The ability of platforms to receive human interactions and respond appropriately to requests.

    Children learning about AI and teaching robots to understand humans.

     

    Challenges of AI Usage:

    Accuracy errors, interface humanization, scope visibility, misuse, and resolution of ambiguities.

    Transparency and interpretation of references depending on the context.

     

    AI Capabilities:

    Scale, homogenization, emergence, conversation assistants, and hallucination.

    Challenges related to hallucination, lack of transparency, and misalignment with human expectations.

     

    Small Business Situation:

    Small business owners often lack awareness of their business status, affecting credit granting and loans.

     

    Key Audience Questions:

    The audience asked about the impact of generative AI on crime and cybersecurity, especially deep fakes and voice deception, and discussed regulation and creative approaches to dealing with these new technologies.

    Speakers emphasized the need to get accustomed to new technologies and to find creative solutions to regulate their use. They acknowledged the reality of deep fakes and the need to learn to deal with them, drawing an analogy with knives at home: useful tools, but ones whose safe use must be ensured by laws.

    Conclusion and Recommendations

    The session emphasized the importance of addressing generative AI challenges, including hallucination, lack of transparency, and deception, with effective regulatory and educational measures.

    It also highlighted the need to promote transparency and ethics in AI use, especially in sensitive scenarios such as small businesses, and suggested continued interdisciplinary discussions and collaborations to address the emerging challenges of generative AI, safeguarding democratic values and individual freedom.

     

    This report summarizes the key points discussed in the Generative AI meeting, highlighting participants' perspectives and audience concerns regarding the use of AI in criminal and cybersecurity contexts.

    IGF 2023 WS #345 DigiSov: Regulation, Protectionism, and Fragmentation

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

    - Different states have different norms, so we should aim for unified norms among states where applicable and share practices


    - The impact of internet fragmentation is even more crucial in the Global South, and we need to keep that in mind.

    Calls to Action

    - Regulations are needed, but there also needs to be space for multistakeholder dialogue as well as for country specifics.


    - Policies and regulations play an important role; however, we must make sure they do not cause more harm.

    Session Report

    DigiSov: Regulation, Protectionism, and Fragmentation

    IGF WS#345

     

    • Policies and regulations play an important role; however, we must make sure they do not cause more harm.
    • Different states have different norms -> we should aim for unified norms among states where applicable and share practices.
    • Data is the new oil of the world -> some states have so far been excluded from the digital economy.
    • We should strive for accessibility and equal conditions among states, so that everyone can make use of the benefits and opportunities.

     

    • They newly have a law that regulates the digital sector.
    • We should bring internet connectivity to all the people.
    • New projects are arising to help with the new challenges connected to internet accessibility.

     

    • We should strive to share the important discussions among multiple parties (technical community, academia, etc.) => multistakeholder approach.
    • IG and policies should be coordinated, otherwise fragmentation may occur.
    • User experiences may differ (e.g., internet shutdowns, geo-blocking).
    • Essential principles: equality, experience of accessing the internet, harmonization (new policies should be talking to other existing policies).
    • Many user choices are now shaped by other parties.

     

    • We tend to take the internet for granted (we do many small tasks on the internet without even realizing it).
    • Governments tend to regulate content with the intention of protecting people – but do they really do so?

     

    • We should strive for a multistakeholder model in discussing internet fragmentation and its related impacts.
    • Not all critical r. are made by states!

     

    • The DSA, the AI Act and other related regulations may profoundly change the internet as we know it now.
    • We currently observe a change in the mindset of states – they try to also consider the freedom of speech and human rights.

     

    • Fragmentation has a different impact on different states (for example, on markets in Africa vs. in Europe).
    • The digital divide is entering the debate as a strong influential factor.
    • We need to find a solution for the dissemination of disinformation. Especially in the Global South it is incredibly impactful. However, regulations cannot go beyond the bounds of freedom of speech.
    • Having dialogues at the international level is the way to go.
    • Regulations are not problematic per se.
    • Global South -> information security is essential. There possibly cannot be a unified solution.
    • Everyone should be equally protected from the harms and consequences related to internet fragmentation.
    • There are new challenges and others will arise soon. The world is changing rapidly.
    • Regulations are needed, but there also needs to be space for multistakeholder dialogue as well as for country specifics.
    • The impact of internet fragmentation is even more crucial in the Global South, and we need to keep that in mind.

    IGF 2023 DC-DDHT Robotics & the Medical Internet of Things (MIoT)

    Updated:
    AI & Emerging Technologies
    Key Takeaways:
    Robotics in the Medical Internet of Things space is growing rapidly. What are the Data Organization, Privacy, Security, Accessibility and Ethical Issues that a multistakeholder group must resolve for good design, operations (including Green Health) and patient care?

    UN SDG 3 mandates Health & Wellness For All. Meaningful Medical IoT access must explore technologies, such as DTN, to enable equitable access to medical services and record keeping.

    Calls to Action

    Multistakeholder discussions on the way forward with human-robot interaction interfaces (accessibility, privacy, security, and more). Internet HCI interfaces are a good starting point for discussion.

    How can DTN and other emerging technologies be developed to extend healthcare reach?
    Session Report

    IGF 2023 Dynamic Coalition on Data Driven Health Technologies (DC DDHT)

    IGF Kyoto 2023 Session Report on Robotics and the Medical Internet of Things

    October 9, 2023

    Reported by Amali De Silva-Mitchell (Coordinator DC DDHT) and Dr Joao Rochas Gomes

    Onsite moderators: Dr Amado Espinosa (Mexico) and Judith Hellerstein (US). Online Moderators Amali De Silva-Mitchell (UK/Sri Lanka) and Dr Joao Rochas Gomes (Portugal).

    Robotics is expected to play a significant role in the delivery of future healthcare services, from research to actual patient care to hospital maintenance. There are already a large number of wellness and patient (home) monitoring and support devices in use through the internet, but access and accessibility issues limit access for all, as called for by UN SDG 3, Health and Wellbeing for All. This dynamic coalition focuses on patient and end-user interactions with technology. The DC is due to publish a paper on this subject in 2023. The session engaged the guest speakers and participants in a discussion on the role of robots within healthcare, with a specific emphasis on the internet.

    Oscar Garcia

    Outlined the need to have good, standardized medical records systems set up for meaningful, systematic data exchanges. Robots and AI need well-organized data records to function optimally. He emphasized that communications regarding record content can be carried even to remote locations, such as the Moon, with Delay Tolerant Networks (DTN). This technology can be developed for rural and remote parts of the planet at a fraction of the cost of setting up satellites, which would greatly speed the development of e-health for all.

    Dr Samo Grasic

    Showcased the technology behind DTN, which works by storing data at intermediate points until onward transmission is possible. His work tagging roaming reindeer in northern Sweden has shown this technology to be effective in remote locations of the Arctic. This highlighted that there are solutions that can be further developed to bring people living in very remote areas access to e-health and emergency health care support far sooner than could otherwise be expected. These solutions will help with critical, disaster and emergency relief and help bring United Nations Sustainable Development Goal 3, Health and Wellness for All, within operational range.
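
    The core idea of DTN is store-and-forward custody: when no end-to-end path exists, each node holds the data until a contact becomes available and then passes it on. The sketch below is a minimal, hypothetical illustration of that pattern in plain Python; it is not the DTN Bundle Protocol or Dr Grasic's implementation, and the node names and link model are assumptions for demonstration.

```python
from collections import deque

class DTNNode:
    """Minimal store-and-forward node: bundles are held until a link is up."""

    def __init__(self, name):
        self.name = name
        self.buffer = deque()   # bundles stored while no contact is available
        self.links = []         # list of (neighbour, link_is_up) pairs

    def add_link(self, neighbour, is_up):
        self.links.append((neighbour, is_up))

    def send(self, destination, payload):
        self.buffer.append({"dst": destination, "data": payload})

    def forward(self):
        """Hand stored bundles to any neighbour whose link is currently up."""
        waiting = deque()
        while self.buffer:
            bundle = self.buffer.popleft()
            for neighbour, is_up in self.links:
                if is_up():
                    neighbour.receive(bundle)   # custody passes to the neighbour
                    break
            else:
                waiting.append(bundle)          # no contact: keep storing
        self.buffer = waiting

    def receive(self, bundle):
        if bundle["dst"] == self.name:
            print(f"{self.name} delivered: {bundle['data']}")
        else:
            self.buffer.append(bundle)

# A remote clinic queues readings while offline; a passing relay later carries them on.
clinic, relay, hospital = DTNNode("clinic"), DTNNode("relay"), DTNNode("hospital")
link_up = {"value": False}
clinic.add_link(relay, lambda: link_up["value"])
relay.add_link(hospital, lambda: True)

clinic.send("hospital", "patient vitals 09:00")
clinic.forward()                 # link down: bundle stays in the clinic's buffer
link_up["value"] = True
clinic.forward()                 # contact available: bundle moves to the relay
relay.forward()                  # relay forwards it on to the hospital
```

    The same principle is what could let health records from very remote communities reach care providers without continuous connectivity or dedicated satellite links.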

    Jutta Croll, DC on Children, Co-Coordinator DC of DCs

Referred to the UN Convention on the Rights of the Child, Art. 24, and called on States Parties to recognize the right of the child to the enjoyment of the highest attainable standard of health and to facilities for the treatment of illness and rehabilitation of health. She outlined what the UN Committee on the Rights of the Child had laid down in General Comment No. 25 on children's rights in the digital environment in relation to digital technologies. A child's health, she said, starts at birth; therefore, without identification, registration and acknowledgement of the child as a person, access to healthcare may be limited, delayed or denied. With regard to the Medical Internet of Things, she referred to Teddy the Guardian, a health monitoring device for children, and pointed out that there are issues of privacy, transparency and ethics concerning the sensitive data of children.

    Judith Hellerstein, DCAD Co Coordinator

Persons with disabilities are very fearful of robots in the medical profession, since robots may mean they cannot communicate with their doctor or nurse and will be disenfranchised. Persons who are hard of hearing cannot read lips, as a robot has no lips and no facial expressions; they also need either sign language or human captioning, and AI captioning is not yet good or reliable. Communication also needs to be face to face and without a mask. However, if these robots were telepresence ones, machines with a screen through which patients can communicate face to face with a person and have access to captioning or sign language, that could be a big help. Connecting such robots into the consultation with the doctor would therefore be useful. Facial expressions are critical for lip reading and hand gestures for sign language, so text displays for adults will be essential and the robot would need to be able to conduct sign language.

There is tremendous opportunity to develop robotic devices that assist with accessibility. For persons with cognitive disabilities, a quiet place to interact would be helpful. Whether robots take blood or assist doctors, we cannot forget the human touch, and telepresence robots can help preserve it.

    Prof. Dr Rajendra Pratap Gupta, DC- Digital Health

Spoke of the imminent presence of robots in the healthcare space and pointed to the fact that the capability of robots exceeds human capability. Robots will find use in two main areas: 1) routine tasks, such as cleaning and disinfecting wards, serving patients, drug dispensing and diagnostic sample delivery, which will lead to cost savings and floor efficiency; and 2) medicine, such as clinical procedures, exoskeletons for paralyzed patients, social interactions with seniors and other specialized roles in clinical settings, which will lead to better clinical outcomes and cost savings for care providers. He further envisioned a future of specialized robotics, such as cardiology, urology and oncology robotics. Clinical robots are also expensive, so while their effectiveness is proven, democratization will take years, given the need to bring down costs.

    Dr Houda Chihi, DC DDHT

Spoke of the numerous security issues that must be overcome to secure communication between robots and humans. These protections are essential for the expected functioning of robots and for good outcomes. The security design process for robot operation in the environment at large must be well thought out and stress tested, and software updates with security patches are critical to maintain protections.

    Jörn Erbguth, DC DDHT

Privacy concerns arise with the use of robots due to their extensive data collection capabilities through numerous sensors, including cameras, microphones, temperature, sound, touch and proximity sensors, among others. Additionally, robots might be able to access patient files and other monitoring devices, resulting in the accumulation of vast amounts of data exceeding that of traditional surveillance systems. Patient data is typically highly sensitive health data that is afforded the highest protection under the EU GDPR and other data protection laws. What part of the data is stored, and for how long? It is important to determine who has access to this data within the hospital: is it solely restricted to the treating doctor, or is it accessible to all personnel? Additionally, does the robot manufacturer have access to the data, e.g., for the purpose of detecting malfunctions and improving the functionality of the robots? Further, in the event of the patient's demise, do the relatives have access to this data? Does law enforcement have access? What is the legal basis for the processing of such highly sensitive personal data? If it is based on consent, data protection regulation might require that patients be able to give granular consent, i.e., consent limited to only some of the data processing.

    Dr Joao Rochas Gomes, DC DDHT

Provided points from the DC DDHT draft paper on Robotics in MIoT and Healthcare, due to be published in December 2023 at https://intgovforum.org/en/content/dynamic-coalition-on-data-driven-health-technologies-dc-ddht

    During his intervention, he discussed the integration of robotics in healthcare. He emphasized data privacy, accountability, seamless hardware-software integration, and reliable connectivity as key aspects. João further noted the importance of scalability, user-friendly design, and education. He concluded by highlighting the potential and challenges of robotics in healthcare, inviting further discussions.

    DC DDHT Annual Work 2023 Announcement

    The authors for the 2023 edition of the Dynamic Coalition on Data Driven Health Technologies book “Health Matters, Technologies Driving Change in Healthcare, A Community of Thought” were: Frederic Cohen, Dr Joao Gomes and Yao Amevi Amessinou Soussou. The book is found at: https://intgovforum.org/en/content/dynamic-coalition-on-data-driven-health-technologies-dc-ddht

    Highlights from their articles are as follows: Emphasis on international collaboration for research and development, noting the rapidly changing landscape for medical technologies and the need for designing “Greening” into health technologies.

    We thank the 35+ participants for their interest in the session and encourage them to join our DC for extended conversations!

    IGF 2023 WS #273 Can a layered policy approach stop Internet fragmentation?

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

To guard against Internet fragmentation, a layered approach can help regulators think about which parts of the Internet they do not want to affect. Public policies relating to Internet user activity and behavior can be designed and implemented more effectively at the 'highest layer' of the Internet stack.

    ,

From a multistakeholder perspective, regulation should aim for policy that is fit for purpose, necessary and proportionate with respect to Internet fragmentation, taking into account how the regulation affects the infrastructure layer and the ability to provide those services, as well as global effects such as splintering of the Internet. Consequently, regulation should not happen at the infrastructure layer, because it cannot be applied there proportionately.

    Calls to Action

Policymakers should address harmful Internet fragmentation that causes splintering of the Internet, and collectively identify principles for effective regulation that fosters an open, globally connected, secure and trustworthy Internet.

    ,

From the policy perspective, regulation and accountability should focus on the public core of the Internet, which sits in the top layers of the stack.

    IGF 2023 WS #409 AI and EDTs in Warfare: Ethics, Challenges, Trends

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    1. AI in the military domain goes beyond lethal weapon systems, and can impact the pace of war, as well as increase vulnerabilities due to limitations in AI.

    ,

    2. Geopolitical power considerations and lack of awareness cause deadlock in moving these conversations forward.

    Calls to Action

    1. The international community needs to define a set of concrete ethical principles applicable to the use of AI in defence to open a pathway for implementation in the style of International Humanitarian Law.

    ,

    2. International Organisations must take on more responsibility and leadership in establishing and implementing binding ethical frameworks.

    Session Report

    Abstract:

What makes this topic relevant to the IGF? AI systems are dual-use by nature, meaning that any algorithm can also be used in military contexts. We indeed already see AI and EDTs being used in conflicts today, such as the war in Ukraine. The availability of data, machine learning techniques, and coding assistance makes such technologies far more accessible to non-state actors as well.

The plethora of ethics guidelines and policy frameworks largely excludes the military context, even though the stakes appear to be much higher in these applications. For example, the European Union's risk-based AI Act completely excludes military uses of AI. This omission raises questions regarding the consistency and fairness of the regulatory framework.

    The debate regarding AI in the military extends beyond the legality of autonomous weapon systems. It encompasses discussions about explainable and responsible AI, the need for international ethical principles, the examination of gender and racial biases, the influence of geopolitics, and the necessity of ethical guidelines specifically tailored to military applications. These considerations highlight the complex nature of implementing AI in the military and emphasize the importance of thoughtful and deliberate decision-making.

     

    Speakers’ summaries:

Fernando Giancotti. In the war in Ukraine, AI is primarily used in decision support systems. But in the future, Giancotti hypothesises that we can expect a major change in warfare due to the increased use of AI. According to him, the stark gap in ethics discussions is a serious issue. A recent research case study on the Italian Defense, published by the speaker with Rosanna Fanni, highlights the importance of establishing clear guidelines for AI deployment in warfare. Ethical awareness among commanders is high, but commanders are concerned about accountability, and the study emphasises that commanders require explicit instructions to ensure the ethical and effective use of AI tools. Commanders also worry that failure to strike the right balance between value criteria and effectiveness could put them at a disadvantage in combat. Additionally, they express concerns about the opposition's adherence to the same ethical principles, further complicating the ethical landscape of military AI usage.

    On the other hand, Giancotti also recognises that AI has the capacity to bring augmented cognition, which can help prevent strategic mistakes and improve decision-making in warfare. For example, historical wars have often been the result of strategic miscalculations, and the deployment of AI can help mitigate such errors.

While different nations have developed ethical principles related to AI use, Giancotti points out the lack of a more general framework for AI ethics. As shown by the study, AI principles vary across countries, including the UK, USA, Canada, Australia, and NATO. At the UN level, the highly polarised and deadlocked discussion on Lethal Autonomous Weapon Systems (LAWS) does not seem to be producing promising results for a universal framework. Therefore, Giancotti argues for the establishment of a broad, universally applicable ethical framework that can guide the responsible use of AI technology in defence. He suggests that the United Nations (UN) should take the lead in spearheading a unified and multi-stakeholder approach to establishing this framework.

However, Giancotti acknowledges the complexity and contradictions involved in addressing ethical issues related to military AI usage. Reaching a mutually agreed-upon, perfect ethical framework may be uncertain. Nevertheless, he stresses the necessity of pushing for compliance through intergovernmental processes, although the prioritisation of national interests by countries further complicates the establishment of universally agreed policies. Upon broad agreement on AI defence ethics principles, Giancotti suggests operationalising them by drawing on the wealth of experience with International Humanitarian Law.

     

    Peter Furlong. One of the main concerns regarding the use of AI in warfare is the lack of concrete ethical principles for autonomous weapons. The REAIM Summit aims to establish such principles; however, there remains a gap in concrete ethical guidelines. The UN Convention on Certain Conventional Weapons has also been unsuccessful in effectively addressing this issue.

However, many technologies beyond LAWS pose risks. Satellite internet and the broader use of drones in warfare are some examples. Even commercial hobby drones and other dual-use technologies are being used in warfare contexts and military operations, despite not being designed for these purposes. Since AI's capabilities depend on the strength of its sensors, the cognition of AI is only as good as its sensing abilities. Furlong explains that the value and effectiveness of AI in warfare depend on the quality and capabilities of the sensors used. More broadly, dual-use devices might not meet performance and reliability expectations when they have not been trained for a warfare context.

    Furlong concludes by stating that the military use of AI and other technologies has the potential to significantly escalate the pace of war. The intent is to accelerate the speed and effectiveness of military operations, which impacts the role and space of diplomacy during such situations. Targeted and specific principles related to the military use of AI are necessary, and conferences and summits play a crucial role in driving these discussions forward.

     

Shimona Mohan: Explainable AI (XAI) and Responsible AI (RAI) are being explored by some countries, such as Sweden, in their military applications. The REAIM Summit produced a global call on responsible AI; 80 countries were present, but only 60 signed the agreement. It appears that the non-signatory countries prioritise national security over international security regulations and laws.

Mohan also raises gender and racial biases in military AI as important areas of concern. Gender is currently seen as an add-on in defence AI applications and use cases, at most a checkbox that needs to be ticked. A Stanford study revealed that 44% of AI systems exhibited gender biases, and 26% exhibited both gender and racial biases. Another study, conducted by the MIT Media Lab, found that facial recognition software failed to recognise darker female faces 34% of the time. Such biases undermine the fairness and inclusivity of AI systems and can have serious implications in military operations.

Likewise, biased facial recognition systems raise major ethical as well as operational risks. For instance, such dual-use technologies have been used in the Russia-Ukraine conflict, where soldiers were identified through these systems. This highlights the potential overlap between civilian and military AI applications and the need for effective regulations and ethical considerations in both domains.

Mohan summarises three key issues behind the lack of awareness of gender and racial bias in military AI systems: 1) bias in data sets; 2) weapon reviews that do not include gender bias review; and 3) a lack of policy discourse on bias in AI systems.

     

     

    Author's comment: Created with the DigWatch IGF Hybrid Reporting Tool. https://dig.watch/event/internet-governance-forum-2023/ai-and-edts-in-w…;

    IGF 2023 Town Hall #29 Impact of the Rise of Generative AI on Developing Countries

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    1. There was a general consensus among panelists and participants that the rise of generative AI can have a positive effect on social and economic development in any country. 2. Furthermore, several opinions confirmed that even in developing countries, generative AI can contribute to new productivity improvements and increased employment opportunities if used wisely.

    Calls to Action

While rule-making regarding generative AI is progressing in the G7, there is generally no specific mention of generative AI in the digital transformation policies of many developing countries. This may lead to differences in the speed at which new employment opportunities, such as prompt engineering, are created. Developing countries should therefore urgently formulate policies on the utilization of generative AI that will create jobs.

    Session Report

    The session was delivered in a town hall style to gather multiple voices and summarize what people of different statuses have considered about the proposed topic.

In the session, the moderator, Mr. Tomoyuki Naito, shared his intention behind proposing this session and its background, which had been explained beforehand on the session page. It noted the repeated emphasis on the term 'generative AI' in the AI-related sessions of IGF 2023, including the opening remarks delivered by His Excellency Fumio Kishida, the Prime Minister of Japan. Most sessions had debated the urgent necessity of common rules to guide the use of generative AI and avoid possible harm to both the economy and society, including human rights. The moderator, in turn, pointed to the opportunity side of generative AI, particularly for Global South countries, where it might open up significant possibilities for job creation.

After the invited panelists briefly introduced their backgrounds in relation to ICT and development, the moderator asked how they viewed the rise of generative AI, as represented by ChatGPT: is it a good thing for the economic and social development of developing economies or not?

    Ms. Safa Khalid Salih Ali shared her views on the positive side of generative AI, which might be useful for financial sectors such as credit assessment at banking institutions. 
    Mr. Robert Ford pointed out that the use of generative AI has huge potential in every country regardless of economic level differences. 
Dr. Sarayu Natarajan shared her views on the positive side; however, she also mentioned downside risks of generative AI, such as the potential for existing jobs to be replaced by AI, as many international institutions are pointing out.
Mr. Atsushi Yamanaka declined to give a simple yes or no, but said his personal view is rather positive, provided the data that drives generative AI can be appropriately collected and secured.

    During the town hall, the moderator opened the floor to collect valuable feedback and questions. Several questions were raised by both onsite and online participants, including one about the possible use of generative AI in education. It was appropriately answered by one of the online audience members, which illustrated an ideal participatory interaction among all the participants.

The moderator shared information about a recent working paper released by the International Labour Organization (ILO), "Generative AI and Jobs: A global analysis of potential effects on job quantity and quality", which pointed out that the impact of generative AI on jobs could be larger in advanced economies and smaller in developing economies due to the current distribution of jobs. He noted that this kind of analysis should be consulted carefully, as there might be huge potential for job creation in Global South countries, which should therefore seriously consider its importance in their policy frameworks.

The session ended after one hour as scheduled, but this important topic should continue to be discussed among a larger set of stakeholders. Its outputs should be appropriately shared and adopted, particularly in Global South countries, to avoid further widening of the digital divide.

    IGF 2023 WS #217 Large Language Models on the Web: Anticipating the challenge

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Open-source in AI is an important instrument to democratize solutions. But only open-sourcing the model is not enough if the communities around the world don't have information about the dataset used for training.

    ,

    Local populations (i.e. Global South) should be involved in all stages of the development of AI products that will reach their markets.

    Calls to Action

    It should be a priority to develop standards/watermarks to track synthetic content generated by AI.

    ,

    It is strategic to establish an economic framework that remunerates content creators in the era of generative artificial intelligence.

    Session Report

    Speakers: Vagner Santana, Yuki Arase, Rafael Evangelista, Emily Bender, Dominique Hazaël-Massieux, Ryan Budish
    Moderator: Diogo Cortiz da Silva
    Online Moderator: Ana Eliza Duarte
    Rapporteur: Matheus Petroni Braz

    Key Takeaways

    • Open-source in AI is an important instrument to democratize solutions. But only open-sourcing the model is not enough if the communities around the world don't have information about the dataset used for training.
    • Local populations (i.e Global South) should be involved in all the stages of the development of AI products that will reach their markets.

    Call to action points

    • It should be a priority to develop standards/watermarks to track synthetic content generated by AI.
    • It is strategic to establish an economic framework that remunerates content creators in the era of Generative Artificial Intelligence.

    Report

    Diogo Cortiz (Researcher at Web Technology Study Center [Ceweb.br] and professor at Pontifical Catholic University of São Paulo [PUC-SP]), opened the session by introducing the theme of the discussion about large language models on the web, trying to focus on some of the technical aspects about how Generative AI could impact the web.

The moderator explained the three main dimensions used to structure the workshop: the first concerns data mining of web content, the second what happens when generative AI chatbots are incorporated into search engines, and the last the web as the main platform where generative AI content is posted.

    Each dimension has its own policy question to guide the discussion:

    1. What are the limits of scraping web data to train LLMs and what measures should be implemented within a governance framework to ensure privacy, prevent copyright infringement, and effectively manage content creator consent?
    2. What are the potential risks and governance complexities associated with incorporating LLMs into search engines as chatbot interfaces and how should different regions (i.e Global South) respond to the impacts on web traffic and, consequently, the digital economy?
    3. What are the technical and governance approaches to detect AI-generated content posted on the Web, restrain the dissemination of sensitive content and provide means of accountability?

    Emily Bender, professor at University of Washington, responded to the first question with concerns about the ease of the global process of grabbing data from the web and claiming it as your own. She advocates for a consensual technology approach, emphasizing that data should be collected in a meaningful, opt-in manner. For her, data collection needs to be intentional rather than a massive, undocumented process. She also believes this approach is not representative, as she argues that the internet does not provide a neutral viewpoint of the world. In response to the second question, she believes that the risks of incorporating large language models (LLMs) into search engines are significantly high because LLMs cannot be trusted as sources of information. Rather, they should be seen as sources of organized words. Specifically for the Global South, the speaker shares a tweet with a graphic that underscores how the data used to train LLMs is concentrated in the Global North, resulting in a significant lack of representation and potential misinterpretation of the Global South within these models. The speaker goes on to highlight that the outputs of LLMs could pollute the information ecosystem with synthetic media that does not represent the truth, but can appear authentic due to the polished structure of the content. Addressing the third question, the speaker concludes their initial remarks by advocating for watermarking synthetic media at the moment of its creation, as this cannot be effectively done at a later time in the current scenario. They also emphasize the importance of policies to ensure a less polluted information space.

Yuki Arase, professor at Osaka University, started by affirming her alignment with Emily's remarks. For the first question, she highlighted how web messages are being used massively to train LLMs and raised concerns about biases and hate speech, as well as about how unbalanced the training data is relative to the much greater diversity of people's characteristics around the world. For the second question, she said that as generative AI tools become more popular and easier to access, the chances also increase that people take their results for granted without checking sources or the veracity of the information; having a way to link content to its data sources could be one way to address this. For the third question, she reinforced the need for training data with local perspectives and in a variety of languages.

Vagner Santana, researcher at IBM, introduced his remarks on the first question with a brief history of the Web and how the next generation of Web3+ could respond to LLM black boxes and be "retrained" with its own data in the future. There is even a risk of people creating pages just to poison LLM training data. Regarding the second question, he shared his views on the possible replacement of humans by LLMs, arguing that we will need to rethink the ways content creators are remunerated in this new era. For the third question, he defended accountability for generated content and an understanding of how the technology works as a first step, as well as extending the concept of moral entanglement to content created by these technologies in an attempt to minimize misuse. He also noted that the greater the distance between creation and use, the more impersonal technology becomes. As final takeaways, he reinforced his vision of studying how technology is used, including different contexts and repurposed applications, and of putting responsible innovation into practice.

Ryan Budish, public policy manager at Meta, started his remarks by highlighting some recent applications launched by Meta, such as translation for a huge variety of languages around the globe and automatic speech translation. He also highlighted how making some LLMs available to researchers is generating very interesting results and new applications. In his view, the main question is not whether the technology is good or bad, but what it can be used for. He spoke about the immense amount of data needed to train these models if we want better results, justifying the use of web-available data alongside other kinds of databases. Regarding the first question, he talked about privacy and gave examples of how this is addressed in Meta's new generative AI products, such as excluding websites that commonly share personal information and not using users' private information. For the second question, he discussed Meta's efforts on open-source AI and how this could improve AI models for everyone, including economic and competitive benefits. Regarding the third question, he spoke about Meta's vision on watermarks and the challenges of creating technical solutions for this kind of application, giving examples of how Meta is mitigating possible misuse of these tools by bad actors. In conclusion, he shared a vision of how the governance of this technology should look, supporting principled, risk-based, technology-neutral approaches to the regulation of AI.

Dominique Hazaël-Massieux, W3C, replied to the first question by differentiating search engines from LLMs, mostly with regard to the black-box nature of the latter, where there is no direct link to the source of an answer. In his view, it is fundamental to think about permission strategies for data scraped from the web to train LLMs, something more robust than simply using robots.txt to block some data from being used. Regarding privacy, he highlighted another important difference between search engines and LLMs: in the former there is the possibility of the "right to be forgotten", where specific content can be removed, something that is not feasible with current LLMs. Regarding the second question, he endorsed Emily's remarks about not treating LLMs as a source of reliable or checkable information. He invited the public to think about which stakeholders should be involved in policymaking for generative AI solutions, balancing the approaches so as not to harm innovation while important regulatory measures are ensured. Concluding with the third question, he agreed that detection is a challenging technical question, even more so for hybrid content where AI provided a first version of or corrections to previously existing content. He also argued that LLMs are not creating the problem of misinformation and fake news, but could make it scalable and worse. In conclusion, he pointed to collaboration between technologists, researchers and regulatory bodies to discuss the quality of generated content and possible governance directions.
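As a point of reference for the permission strategies mentioned above, the current baseline on the web is the robots.txt convention. The following is a minimal sketch, using Python's standard urllib.robotparser, of how a crawler can check those directives before collecting a page for a training corpus; the crawler name and URLs are hypothetical, and real data-collection policies would need something richer than this opt-out signal.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity and target page, for illustration only.
CRAWLER_UA = "example-llm-data-bot"
PAGE_URL = "https://example.org/articles/some-post"

# Fetch and parse the site's robots.txt directives.
parser = RobotFileParser()
parser.set_url("https://example.org/robots.txt")
parser.read()

# Only collect the page if the site operator has not disallowed this agent.
if parser.can_fetch(CRAWLER_UA, PAGE_URL):
    print("Allowed: the page may be fetched for the training corpus.")
else:
    print("Disallowed: skip this page.")
```

The limits of this mechanism are exactly what the speakers point to: it is a blunt allow/deny signal per crawler, with no notion of purpose (search indexing versus model training), consent granularity, or later removal.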

    Rafael Evangelista, from Brazilian Internet Steering Committee and University of Campinas, São Paulo, started the discussion talking about the proliferation of low-quality online content, largely driven by the financial incentives of the digital advertising ecosystem. He discussed how this poses significant threats to democracy, as exemplified in Brazil's 2018 elections, where misleading content was amplified through instant messaging groups and monetized through advertising. For the speaker, economic disparities and currency fluctuations in the Global South drive people, including young professionals, to produce subpar content for income, affecting alternative media and lowering content quality. He also affirms that the rise of LLMs raises even more concerns about the spread of low-quality information, suggesting a reevaluation of the compensation structures to redirect wealth from tech corporations to support high-quality journalism and fair compensation for content creators. This extends to open-access scientific journals for LLM training, recognizing collective knowledge as common rather than restricting access or individual compensation. To conclude, the speaker affirms that shifting from individual compensation to collective knowledge production through public digital infrastructures is vital to addressing global North-South disparities and minimizing the potential misuse of LLMs in weakly regulated markets.

     

    IGF 2023 Networking Session #53 Exploring the Intersections of Grassroots Movements, Interne

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

Grassroots movements know best the pressing needs of the people, and they act accordingly. Their impact could be strengthened with support from governments.

    ,

    Bringing grassroots movements together and forming a coalition might help their voices and needs be heard and improve their effectiveness.

    Calls to Action

Grassroots movements and other civil society organizations should strengthen their connections and act as a unit.

    ,

Governments should listen more to the important issues raised by grassroots movements and follow their example by acting in a more determined way.

    IGF 2023 Day 0 Event #185 The Internet WE Want: Perspectives from the Amazonian Region

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

We should look for local complementary solutions to connectivity. They appear to be more sustainable because they address the real needs of the communities.

    ,

We need to think about connectivity gaps not just as a technical problem but as a multidimensional one.

    Session Report

“The Internet WE Want: Perspectives from the Amazonian Region”, our session at #IGF2023, presented the results of our research “Latin America in a Glimpse: The Amazon”, a joint comparative research project coordinated by Derechos Digitales, with studies by Idec, Fundamedios, Fundación Internet Bolivia and Dejusticia, that attempts to understand the digital divide and connectivity gaps in the Amazon region through case studies in Brazil, Ecuador, Bolivia and Colombia.

    Our comparative report highlights trends in the case studies and lessons for new experiences and initiatives. It also offers recommendations for the different stakeholders.

Brazilian researcher Camila Leite from IDEC highlighted that in Brazil, “despite the North and Northeast Regions being the biggest, occupying 40% of the territory, we still lack meaningful connectivity; we have a lot of inequalities”. The IDEC case study looked at the Nossa Senhora do Livramento community, one of the six existing communities in the Tupé Sustainable Development Reserve, in the rural area of Manaus, capital of Amazonas state.
     
“A structural concern is that this community is in a place that faces challenges related to climate change, rains and dry soil, and a lack of electricity,” said Leite.
     
Presenting the work of Rhizomatica and APC on community networks in the Amazon Region, Carlos Baca showed Hermes, a free and open-source system that provides affordable communication, allowing for the transmission and reception of data.
     
Carlos Baca highlighted that “local complementary solutions are more sustainable because they address the real needs of the communities”, but that it is necessary to enable “environments and synergies between different stakeholders”.
     
“Autonomy is the key to drawing the type of access the communities want to have,” said Carlos Baca.

IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

Monitoring procedures for these systems should be mandatory, not only within companies but also at the societal level. Public debates should be organised about the development and use of the systems, and young people should be involved in these debates as they are the systems' main users.

    ,

AI regulations focus on the development and commercialisation of generative AI systems, but they cannot ensure proper use of these systems. This should be taken into account by making end users more accountable.

    Calls to Action

    As there is a consensus on universal ethical values (i.e. UNESCO Recommendation on AI Ethics), what is now needed is local, inclusive and accountable implementation. Financial support for achieving this is expected from international organisations and big tech companies.

    ,

    Generative AI developments might reinforce digital asymmetries and big tech dominance. Open data and science, and standardisation could avoid this and therefore should be imposed by public regulations.

    Session Report

As moderator of the panel, Yves POULLET explained what we call "generative AI" and why these systems and their applications might be fruitful for society but also carry many risks, not only for individual liberties but also for society at large. He asked the audience "who has already used generative AI systems?", to which a majority answered positively.

He explained that initially, OpenAI proposed restricting the use of ChatGPT to professional users because it could be dangerous if deployed to the general public. However, OpenAI launched a general-public application in 2022, and since then many companies have developed foundation models as well. Foundation models are general-purpose models and are not per se developed for specific purposes. The panel spoke about 'transformers', large language models and multimodal generative systems, and about applications such as DALL-E, ChatGPT, Midjourney, Bard, ERNIE 3.0 and KoGPT. A great variety of generative AI applications derived from foundation models are possible.

Gabriela RAMOS, Assistant Director-General for the UNESCO Social and Human Sciences (SHS) sector, in charge notably of the AI Ethics Recommendation, delivered initial remarks introducing the UNESCO Recommendation on the Ethics of AI and the recent UNESCO report comparing ChatGPT with the provisions of the Recommendation. She enumerated the various issues raised by generative AI systems and called for more reflection, action and initiatives in this field, pleading for governance of these technologies. On that point, she introduced the impact assessment tool developed by UNESCO.

Dawit BEKELE introduced the technical peculiarities of generative AI: these systems are designed to generate human-like content, producing coherent, context-related outputs based on the input they receive. He pointed out that they are used on large-scale platforms. He then stressed the numerous benefits of generative AI systems, which can be used directly for filtering, rewriting, formatting, and more. They are also a major source of risk for our societies because of potential misuse, the creation of harmful content, the fact that many people trust what they see online, risks to information and education, and risks for jobs (writers, etc.). Moreover, because of the bias challenge (biased data sets), some countries have banned their use. The applications of language models are diverse and include, for instance, text completion, text-to-speech conversion, language translation, chatbots, virtual assistants, and speech recognition. These models work with big data drawn from many different sources, both public (such as Wikipedia and administrative databases) and private. However, these resources are not representative of the whole world.

The first roundtable focused on generative AI governance and addressed the following questions: Do we need regulation? What do you think about soft law based only on ethical recommendations or voluntary codes of conduct? What do you think about the "moratorium" requested by certain companies and academics? What about a global regulatory model of the kind the UN is considering?

Changfeng CHEN first mentioned the concept of cultural lag, a term coined by sociologist William F. Ogburn in the 1920s to describe the delayed adjustment of non-material culture to changes in material culture. It refers to the phenomenon whereby changes in material culture (such as technology, tools and infrastructure) occur more rapidly than changes in non-material culture (such as beliefs, values and norms, including regulations). She applied this concept to generative AI. In her opinion, first, we need regulation for generative AI because it is a powerful technology with the potential to be used for good or for harm. But generative AI is still developing, and the scientists and engineers who create it cannot fully explain or predict its future; therefore, we need to regulate it prudently rather than stifle it in the cradle through regulation. Furthermore, we need to be inclusive and have the wisdom to deal calmly with the mistakes it causes, which reflects human civilization and self-confidence. Second, a moratorium on generative AI, being a temporary ban on the development and use of the technology, would be a drastic measure and is unlikely to be effective in the long term; generative AI is a powerful technology with the potential to be used for good, and it would be unwise to stifle its development entirely. Third, a global regulatory model for generative AI would be ideal, but it will take time to develop and implement, while AI, including generative AI, is developing very rapidly in China and is already widely used. Fourth, she explained that China has been at the forefront of developing and regulating generative AI, releasing the Interim Administrative Measures for Generative Artificial Intelligence (AI) Services, published in July 2023. These measures require providers of generative AI services to: source data and foundation models from legitimate sources; respect the intellectual property rights of others; process personal information with appropriate consent or another legal basis; establish and implement risk management systems and internal control procedures; and take measures to prevent the misuse of generative AI services, such as the creation of harmful content.

Stefaan VERHULST stressed the importance of a responsible approach and raised the question of the extent to which the development of AI should be open or closed. He advocated for open data and open science to avoid digital asymmetries. He underlined that the US is again a member of UNESCO and has endorsed its Recommendation on AI Ethics, and that the principles underpinning the ethical values are aligned across multiple documents: the US Blueprint for an AI Bill of Rights, the UNESCO Recommendation on AI Ethics, the EU documents, and others. The US approach is based on co-regulation, and he stressed the need for notice and explainability rather than a complete regulatory system. On that point, he noted that US states are much more active than the federal authority in working on legislative regulation, and that cities are particularly active in this field as well; he underlined the value of this bottom-up approach, which is more respectful of local disparities and permits real participation by citizens.

During the Q&A, Omor Faruque, a 17-year-old boy from Bangladesh and the founder and president of Project OMNA, made the following suggestions on the policy questions as a global child representative: 1. Establish clear ethical guidelines for the development and use of foundation models, with input from all stakeholders, including children and young people; 2. Create a public registry of foundation models, including information about their ownership, purpose, and potential risks; 3. Develop mechanisms for public oversight and accountability of foundation models; 4. Convene a global forum on generative AI to discuss the ethical, legal, and social implications of this technology; 5. Support research on the impacts of generative AI on everyone, including children and young people; 6. Promote digital literacy and critical thinking skills among children and young people so that they can be informed users of generative AI; 7. Consider a moratorium on the development and use of generative AI systems until appropriate safeguards are in place to protect children and young people from potential harms.

Steven VOSLOO (UNICEF) stressed that UNICEF is also concerned that we do not yet know the impacts of generative AI (positive and negative) on children's social, emotional and cognitive development. Research is critical, but will take time. He asked how we can best navigate the reality that the tools are already in public hands and that we need to protect and empower children today, when we will only fully understand the impacts later.

Torsten Krause said that responsibility is a question for all, not only for children, and asked whether official institutions' certificates or permission should be required before technologies like generative AI systems are distributed.

    On this matter, Stefaan Verhulst and Changfeng Chen agreed that young people must be involved.

A Finnish attendee stressed that it would be complicated to regulate a technology that is used by the general public.

Doaa Abu Elyounes, Programme Specialist in the Bioethics and Ethics of Science and Technology section, said that it is of course tempting to use these systems because they allow people to write faster, for example, and that we should therefore be more aware of the risks involved.

The second roundtable was dedicated to specific socio-economic topics linked with generative AI: first, the problem of the non-representativeness of certain languages in big data, which excludes certain populations and creates cultural dominance; second, the fact that most of these generative AI applications are based on a business model that requires payment for the services offered.

Siva PRASAD stressed that the people developing the technologies look for profit and accentuate the digital divide because they are not interested in populations that are not a source of profit. The use of technology is affecting social innovation, and it is the role of public authorities to pay attention to the digital divide, especially as regards marginalised communities. He raised the specific problem of the use of generative AI in the education system and underlined the right of young people to use the technology to build up their personality, and the obligation of teachers to help them be aware of the risks. Referring to the opinion of Stefaan Verhulst, he said that the local approach is the only way to develop sustainable and equal societies, and that international and national authorities have to finance this local approach. On that point, he asserted that while there is a universal bill of rights, local answers are needed.

Fabio SENNE focused on measuring the socioeconomic impacts of AI and data governance. He called attention to the current scenario of digital inequalities and how it shapes the opportunities and risks related to generative AI. Disparities among countries and regions affect the diversity of data used to train AI-based applications; in the case of Brazil, there is content available not only in Portuguese but also in more than 200 indigenous languages. Digital inequalities affect diversity in the data used to train models, and in terms of access to connectivity and devices we see persistent patterns of inequality (ethnicity, traditional populations, rural/urban, income, level of education, age). Diversity and inclusiveness have to be principles. The use of generative AI tools can also be affected by poverty and vulnerability (e.g. income): early adopters tend to benefit more when a new application becomes available, and the impacts tend to be more disruptive in the early phases of dissemination of those tools. Fairness, non-discrimination and inclusive access for all are also principles.

As concluding remarks, the panellists indicated what they consider the most crucial issues raised by generative AI. We summarised these in the key takeaways and calls to action (see above).

As final remarks, Marielza OLIVEIRA thanked the Working Group on Information Ethics of IFAP-UNESCO for this panel and developed recommendations regarding future IFAP and UNESCO work. She conveyed to the IGF the need to pursue this topic as a major challenge for all people and our society, and pleaded for continuing the discussion and trying to solve the delicate issues raised by these technologies.

    IGF 2023 Town Hall #28 The perils of forcing encryption to say "AI, AI captain"

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

There is agreement that encryption is a crucial tool and that AI has its pitfalls, but there continues to be disagreement on whether it is possible to scan content without breaking encryption. Privacy and security experts, platforms and others believe this is not possible and would increase vulnerability online; child safety groups and others believe it can be done. AI is not a solution. More engagement on this intersection is required.

    Calls to Action

    Robust, multi-stakeholder engagement on the role of encryption online is necessary given the far reaching consequences of proposals to scan content for the internet landscape and fundamental rights, so that consensus can be achieved in a way that ensures privacy and safety, which are mutually reinforcing. Equally, the acceleration towards deployment of AI for various purposes needs to be interrogated with a push for adequate impact assessments.

    IGF 2023 Town Hall #25 Let’s design the next Global Dialogue on Ai & Metaverses

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

Citizens' dialogues must involve programmers, developers, and algorithm makers as participants. Having the people who use the technology and the people who make it in the dialogue together is paramount for exploring the trade-offs in AI and metaverses.

    ,

Decentralization is key to a global citizens' dialogue: uniformity on the global topic, with decentralized deliberations held locally all over the world. This matters for inclusion, to bring vulnerable and marginalized communities on board in person, and for content, to explore the global issue at scale alongside local and regional issues.

    Calls to Action

The next global citizens' dialogue on AI is an open process: whoever wants to join the conversation, engage with the organizing team as an advisor, take part in the impact committee or the scientific committee, or support the fundraising process is more than welcome to do so. Contact: [email protected]

    ,

Go to wetheinternet.org to learn more about the 2017-2020 Global Citizens' Dialogue on the future of internet governance. #wetheinternet

    IGF 2023 Lightning Talk #174 Switch! - an inclusive approach to capacity building

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    In developing gender and diversity training programs to support women and LGBTQI+ people in the technical community, it is crucial to place the individual needs of each participant at the forefront and try to genuinely understand what they want and need.

    ,

In training programs, it is important to look beyond technical skills and certification and to consider soft skills such as languages, management and networking.

    Calls to Action

    Technical organizations, be they public or private, should actively support women and LGBTQI+ people in their career development, which may extend beyond just training - career development opportunities and access to networks of peers is crucial.

    ,

    It can be difficult for young women in technical industries to understand all the opportunities that are open and available out there, which requires outreach and mentorship. Technical organizations can start early in spreading awareness of career paths.

    Session Report

    This lightning talk discussed the Switch! Gender and Diversity project by the APNIC Foundation.

    The Switch! Gender and Diversity project operates in the economies of Cambodia, Laos, the Philippines, Timor Leste, Thailand and Viet Nam. The project seeks to improve the technical knowledge, skills and confidence of women and LGBTQI+ technical staff working on Internet network operations and security, and to help them acquire and validate professional certifications in network operations and security with which to advance their careers. Participants engage in a variety of training activities in programs tailored to each participant’s professional development plan.

    APNIC Foundation Senior Project and Business Coordinator Cathlene Corcoran moderated the session and Foundation CEO Sylvia Cadena gave an introduction to the Foundation and the Switch! project.

    Switch! Participant Maristela Miranda shared her experiences in the project. She noted that the project helped her expand her career path and opportunities. She noted that while the project had mentoring opportunities, there is also the possibility in future of providing guidance to young women about what career paths and opportunities are out there.

    IGF 2023 Lightning Talk #66 Internet Governance Transparency: a Data Driven Approach

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

Take away 1) Standardisation is open but not necessarily accessible: standardisation is a complex process in which participation, and understanding of its decisions and implications, require technical knowledge and time.

    ,

    Take away 2) Analysis of the large amount of open data produced by standardisation organisations offers an opportunity to facilitate access to stakeholders within and beyond the technical community.

    Calls to Action

    Call for action 1) More analysis of the data generated by standardisation organisations will benefit the interplay between the different stakeholders of the Internet and beyond the technical community

    ,

Call for action 2) Other Internet governance communities should embrace data openness and the analysis of their data to help bridge the gap across stakeholders and foster understanding and informed dialogue.

    Session Report

Standards Developing Organizations (SDOs) are a manifestation of multistakeholderism and are critical for the Internet from both a governance and a technical perspective. Technical decisions taken at an SDO can have deep implications for all Internet stakeholders. Many SDOs (e.g., the IETF and W3C) are very transparent about the decisions and debates that lead to the development of standards.

Greater analysis of the data produced by standardisation organisations is an opportunity for better Internet governance: it can facilitate the interplay between the different stakeholders and make complex technical debates more accessible beyond the technical community.
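As a small illustration of what such analysis can look like, the sketch below queries open IETF Datatracker data and counts documents per stream. It is only a sketch: the endpoint path, query parameters and field names are assumptions about the Datatracker's public REST API and may need adjusting.

```python
import json
from collections import Counter
from urllib.request import urlopen

# Assumed Datatracker endpoint and parameters; verify against the live API docs.
URL = "https://datatracker.ietf.org/api/v1/doc/document/?format=json&limit=100"

with urlopen(URL) as response:
    data = json.load(response)

# Count the returned documents per stream as a toy proxy for where
# standardisation activity is happening.
streams = Counter(str(doc.get("stream", "unknown")) for doc in data.get("objects", []))
for stream, count in streams.most_common():
    print(f"{stream}: {count}")
```

The same pattern (pull open SDO data, aggregate, visualise) scales to questions such as participation by affiliation or region, which is the kind of analysis the proposed research group aims to encourage.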

    The IRTF Research and Analysis of Standardization Processes (proposed) Research Group promotes, debates and hosts this type of work.

     

     

    IGF 2023 WS #107 Stronger together: multistakeholder voices in cyberdiplomacy

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    UN cybersecurity dialogues should take steps to have more robust and systematized multistakeholder inclusion.

    ,

Fragmentation of cybersecurity dialogues, at the UN and beyond, makes multistakeholder engagement more challenging, especially for organizations with limited resources.

    IGF 2023 WS #225 Risks and opportunities of a new UN cybercrime treaty

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    A UN cybercrime convention has the potential to improve and streamline cooperation by governments in combatting cybercrime, improve capacities, and strengthen respect for human rights.

    ,

    There are, however, challenges with the scope and human rights protections in the current draft of the treaty.

    Calls to Action

    All stakeholders are encouraged to review the next draft of the treaty once it is released, which is expected in November, ahead of the final scheduled round of negotiations in January 2024.

    ,

    Stakeholders can engage in the negotiations by providing input to industry groups and civil society organizations accredited to participate, or reach out directly to member state delegations.

    IGF 2023 Networking Session #86 Opening and Sustaining Government Data

    Updated:
    Data Governance & Trust
    Key Takeaways:

Open data platforms are vital, but they are not well supported: hype and donors are excited about AI but not about the underlying data.

    ,

CSOs cannot rely on FOIA laws as the driver of open government data policies; we need good local examples of the application of data to drive positive stories and catalyze interest and support.

    Calls to Action

    IGF and all technology events should include sessions and speakers on the importance of good data governance and data transparency in any session on artificial intelligence.

    ,

Advocates for open data should develop case studies in an "if this, then that" format: if this data were made open, then this is the impact that would be made by specific actors.

    Session Report

    Participants from government and civil society in Ghana, Togo, Kuwait, the Maldives, Sri Lanka, and the US joined to learn from case studies and resources shared in the slide deck (bit.ly/GovOpenDataIGF2023) and engaged in a networking exchange to share their own questions, their work in their contexts, and their possible next steps.

    Key Takeaways:

    - Open data platforms are vital, but they are not well supported: hype and donor excitement centre on AI rather than on the underlying data.

    - CSOs cannot rely on FOIA laws as the driver of open government data policies; we need good local examples of the application of data to drive positive stories and catalyze interest and support.

    Calls to Action:
    - IGF and all technology events should include sessions and speakers on the importance of good data governance and data transparency in any session on artificial intelligence.
    - Advocates for open data should develop case studies in an "if this, then that" format: if this data were made open, then this is the impact that specific actors could make.

    IGF 2023 Town Hall #91 Dare to Share: Rebuilding Trust Through Data Stewardship

    Updated:
    Data Governance & Trust
    Key Takeaways:

    Data sharing is a critical component of interoperability in public entities, but it requires coordination and preparation across multiple stages of the value chain. See: Data Governance Map.


    Calls to Action

    Governments need to proactively adopt data governance frameworks across the value chain to ensure the best outcomes for data usage and interoperability.

    ,

    It is critical to encourage regulatory frameworks for data institutions and data intermediaries, particularly to empower data altruism in various domains, e.g., the EU Data Governance Act

    IGF 2023 Launch / Award Event #88 Legitimacy of multistakeholderism in IG spaces

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Knowing what levels of legitimacy beliefs prevail in which quarters, and what kinds of forces shape those legitimacy beliefs, can contribute to more informed and nuanced policymaking.

    ,

    The openness of the multistakeholder model is integral to the success of the Internet.

    Calls to Action

    There is a need to determine the boundaries of Internet Governance and reflect upon the potential need to create new multistakeholder processes to deal with aspects of the digital sphere that fall outside of the scope of Internet Governance to ensure and maintain the legitimacy of current frameworks.

    ,

    It is crucial to ensure the openness of spaces of Internet Governance by bridging the gap between procedural openness and unwelcoming cultures in practice.

    Session Report

    IGF 2023 Launch/Award Event #88 Legitimacy of multistakeholderism in IG spaces

    Rapporteur: Sophie Hoogeboom

    Speakers: Dr. Hortense Jongen (VU Amsterdam and University of Gothenburg); Dr. Corinne Cath (University of Amsterdam); Nadia Tjahja (United Nations University-CRIS).
    Discussants: Jordan Carter (auDA); Elise Lindeberg (Government of Norway); Alisa Heaver (Government of the Netherlands).

    The session, held on 9 October at 09:30, discussed the legitimacy of multistakeholderism in Internet Governance spaces: ICANN, the IETF and the IGF. The importance of multistakeholderism continues to be highlighted in policy circles, in which legitimacy and meaningful participation of stakeholders are emphasised as vital components of a functioning multistakeholder environment. During this session, three scholars presented their publications, after which the discussants provided comments and feedback centred around three questions:

    (1) How can multistakeholder initiatives promote meaningful participation from diverse stakeholders and social groups? (2) What is the relationship between inclusive participation and the legitimacy of multistakeholder initiatives? (3) What lessons for other multistakeholder bodies can we draw from the different ways in which the three multistakeholder bodies at the focus of this session (i.e. ICANN, the IETF and the IGF) aim to promote participation?

    Dr. Hortense Jongen discussed her publication on the legitimacy of ICANN, which is part of a larger research project conducted with Professor Jan Aart Scholte on the levels, drivers and implications of legitimacy. The research is centred on the question of how far, and on what grounds, multistakeholderism as a mode of global governance gains legitimacy. It aims to measure the levels of legitimacy beliefs toward a key multistakeholder apparatus, in this case ICANN, and to identify what generates (or limits) those beliefs. Over a period of two years (2018-2019), hundreds of survey interviews were conducted with insiders, participants and general elites around the world. The key findings showed that 'legitimacy beliefs are neither so high as to warrant complacency nor so low as to prompt alarm.' Secondly, legitimacy appears 'fairly secure on the inside, and somewhat more wobbly on the outside.' Thirdly, legitimacy beliefs within the ICANN sphere show limited variation by stakeholder group, geographical location or social category. Lastly, there was 'no glaring Achilles heel of vulnerability in any quarter but also no striking concentration of greater ICANN champions.' Moreover, the research looked at the drivers of legitimacy and identified three different types: organisational drivers (e.g. accountability, transparency and decision-making processes), individual drivers (position within ICANN and personal benefits) and societal-level drivers (perceptions of structural inequalities which, although found, did not negatively impact legitimacy). In sum, the research suggests that ICANN has fairly secure legitimacy and that the drivers of these legitimacy beliefs are multiple and variable.

    Jordan Carter (discussant) spoke about the Roadmap on Internet Governance, intended to provoke discussion and dialogue among the Internet community about the ways in which Internet governance needs to be improved. Carter focused on several aspects of legitimacy and stated that more broad-based participation would enhance the outcomes and outputs of Internet governance processes, which would then be more likely to be accepted by participants and others. The deficit of participants from the global south in these processes could be addressed by providing effective funding approaches and improving the culture of these frameworks, which vary in how welcoming they are. Secondly, Carter argued that there is a need to review the foundations of Internet Governance in light of the future, which could enhance the legitimacy of Internet governance at large. Thirdly, he addressed the question of institutional innovation, such as the extent to which a topic like AI fits within the framework of Internet Governance, as there is a risk that Internet Governance turns into a venue for all governance topics.

    Dr. Corinne Cath presented her ethnography on exclusionary cultures at the IETF, which makes key protocols and standards that enable networks to connect. Cath presented the key findings of her research, centred around the question 'how suitable is the IETF for civil society participation?' She stressed the duality of her findings: on the one hand, the multistakeholder model can be an important model for governing the Internet; on the other hand, in practice it can be exclusionary and discriminatory towards minority voices, especially those from civil society. According to Cath, to maintain the openness of the multistakeholder model there is an urgent need to address these exclusionary and discriminatory aspects. If this is not addressed, multilateral approaches will be favoured over multistakeholder approaches. Cath found that although these processes are procedurally open, they are also culturally closed off, which hinders participation. These cultural dynamics include the denial of politics in technical discussions, procedural openness as a distraction, reliance on informal networking, and abrasive working practices.

    Nadia Tjahja presented her research on youth meta-participation at the IGF. Tjahja looked at how youth are creating new spaces within the IGF that align with the values of the IGF. In her latest publication, an edited definition of meaningful participation has been proposed, as well as a revision of Arnstein's ladder, resulting in the 'Pyramid of Participation', which captures the ways in which people integrate within the IGF. It was argued that tokenised participation, found in Arnstein's ladder, was outside the scope and left out of the Pyramid of Participation. Instead, participation that fails to be meaningful is analysed through the lens of why and how participants are not able to participate meaningfully. Through interviews, the research mapped participant activities onto the Pyramid of Participation, exploring how young people navigate processes at YOUthDIG, EuroDIG and the IGF.

    Lindeberg (discussant) agreed on and stressed the importance of ensuring inclusive and meaningful participation in the multistakeholder model. She also stressed the need to ensure that discussions are not spread across too many platforms, as this could hinder the participation of small states and organisations, and the need to strengthen existing platforms and share best practices.

    Alisa Heaver (discussant) stressed that not all countries are yet represented in ICANN and that participation in the GAC remains unequal in terms of regional representation. She hopes to see more diverse stakeholders represented in the next gTLD round, meaning a larger array of top-level domains in other languages and scripts, and registries and registrars more evenly distributed across the world.

    Key takeaways:

    • Knowing what levels of legitimacy beliefs prevail in which quarters, and what kinds of forces shape those legitimacy beliefs, can contribute to more informed and nuanced policymaking.
    • The openness of the multistakeholder model is integral to the success of the Internet.
    • It is crucial to reflect on how youth navigate the IGF ecosystem to enhance their meaningful participation in Internet governance

    Call to action points:

    • There is a need to determine the boundaries of Internet Governance and reflect upon the potential need to create new multistakeholder processes to deal with aspects of the digital sphere that fall outside of the scope of Internet Governance to ensure and maintain the legitimacy of current frameworks.
    • It is crucial to ensure the openness of spaces of Internet Governance by bridging the gap between procedural openness and unwelcoming cultures in practice.   
    • To adopt a definition of meaningful participation, as proposed in this session, and through this lens reflect on youth activities and processes.

    IGF 2023 Open Forum #138 Regional perspectives on digital governance

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    1. Regional organisations can bring national challenges in the field of digital governance to the global discussion, and vice versa: regional organisations act as translators and filters in both directions.

    ,

    2. Successful regional interactions work when stakeholders, including civil society, are engaged in the discussions.

    Calls to Action

    1. Involve academia in the discussions about national and regional implementation of these forms of cascaded governance.

    ,

    2. Reflect more on how regional organisations can work together and help each other when developing peer reviews and best practices.

    IGF 2023 Networking Session #158 An infrastructure for empowered internet citizens

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Important to think about the systematization of local knowledge and how libraries can contribute to this process

    Calls to Action

    Assessing possibilities of collaboration with local libraries

    Session Report

    Departing from the experience shared by the speakers, steps that can accelerate national digital transformation include: taking immediate action to expand Internet access and develop digital infrastructure and Internet services for all; preparing a digital transformation roadmap for the government's strategic sectors, public services, social aid services, education, health and other industries; taking immediate action to integrate the national data centre; and taking into account the needs of digital talent while preparing regulation and related funding schemes.
    Improving and expanding access to libraries will accelerate the development of human resources who master science and technology and improve creativity and innovation, creating job opportunities, reducing the unemployment rate and increasing income per capita.
    Libraries play a crucial role in building valuable partnerships with communities and organizations.

    IGF 2023 Day 0 Event #25 IGF LAC NRI's Space: Fostering Cooperation in LAC

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Youth Empowerment for Internet Governance: There's a clear consensus on the pivotal role of the youth in shaping the Internet governance landscape. Moving forward, initiatives should actively involve and leverage the potential of the younger demographic. To achieve this, educational institutions, governments, and private sectors in the LAC region should collaboratively develop youth-centric programs and platforms. This requires medium-term action

    Calls to Action

    Collaborative Regional Action for Inclusivity: The LAC region recognized the imperative need for collaborative efforts in ensuring a more inclusive and democratic Internet. Future strategies should prioritize establishing regional alliances, fostering knowledge-sharing, and pooling resources. Governments, intergovernmental organizations (IGOs), and the private sector must jointly champion these collaborative efforts, aiming for tangible outcomes

    Session Report

    Introduction: Nicolas opened the session by expressing gratitude towards the enthusiastic NRI coordinators from the LAC region. The primary goal of the meeting was to evaluate the initiatives at national, regional, and youth-led levels, pinpointing common trends, differences, and the future roadmap. Roberto emphasized gaining a holistic understanding of the challenges and opportunities in Internet governance within the LAC region.

    Brazil: Barbosa represented Brazil and its youth-led initiative. Despite the challenges posed by the pandemic, Brazil's Internet Governance Forum remains steadfast in its commitment to enhance youth participation. They've instituted numerous training sessions and webinars, primarily focusing on fostering Internet governance. The Brazilian Internet Governance Forum acts as a preparatory event for global participation and has seen significant participation, both in-person and remotely.

    Colombia: Julian Casasbuenas shed light on the Colombian IGF's endeavors, especially their contributions to the global digital compact. The Colombian IGF hosted several workshops, emphasizing digital security, health, and blockchain, among other topics. The commitment to making the Internet more democratic, inclusive, and safe was evident in Julian's representation.

    Ecuador: Carlos Vera highlighted Ecuador's efforts to democratize Internet governance. Their initiatives have consistently involved various stakeholders, bringing richness and diversity to their events. The focus remains on increasing inclusivity in these discussions.

    Colombia: Laura Victoria Ramos and Benjamin Chong provided insights into the efforts in Colombia, especially regarding the push for inclusivity and diversity. They underscored the importance of the Colombian Youth IGF and the role of youth in these conversations. A particular emphasis was placed on bridging the digital divide and making the Internet more accessible to all.

    South School of Internet Governance: Olga's presentation underscored the importance of capacity building in the LAC region. The South School on Internet Governance has been instrumental in this aspect, organizing events across various cities and promoting the tenets of Internet governance.

    Central America: Lia Hernandez introduced the newly-formed Central American IGF. This initiative is a testament to the collaborative spirit of the LAC nations, aiming to address subregion-specific challenges in Internet governance.

    Youth LACIGF: Umut Pajaro, representing the Youth IGF, elaborated on the unique challenges and opportunities faced by the younger demographic. There's a clear emphasis on regional coordination and ensuring that the youth are not just participants but active contributors in shaping Internet governance policies.

    LACIGF: Raul Echeberria from LACIGF brought attention to the collective efforts in the LAC region. He highlighted upcoming events such as the LACIGF, emphasized the importance of collaboration and continued discourse, and congratulated Colnodo on becoming the new secretariat of the LACIGF.


    Conclusion: The LAC NRI Space Session was a comprehensive exploration of the Internet governance landscape in the Latin America and Caribbean region. From Brazil to Ecuador, each representative shed light on their nation's initiatives, challenges, and future aspirations. The overarching theme was the collective drive towards a more inclusive, democratic, and secure Internet for all in the region. The role of youth, capacity-building initiatives, and regional collaboration stood out as pivotal aspects of the discourse.

    IGF 2023 Day 0 Event #52 IGF LAC Space

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    The organizations agree on the need to continue strengthening the spaces related to the ecosystem as well as the multi-stakeholder model, addressing challenges such as Internet fragmentation, connectivity and access, and promoting the monitoring of regulatory developments in the region.

    Session Report

    1. Key Policy Questions and related issues

    The IGF LAC Space invited different organizations to share the projects, initiatives and research they have developed over the past year with a focus on the Latin American and Caribbean region, including impressions regarding the following question "Based on your organization's perspective, what challenges related to the management and evolution of the Internet in the region can you identify?". The session sought to foster discussion and new opportunities for regional collaboration.  The second part of the session hosted researchers supported by LACNIC and CETyS who shared their recent research related to Internet governance in the regional sphere.

    2. Summary of Issues Discussed

    The organizations that participated in this edition of the IGF LAC Space shared the projects, initiatives and research they have undertaken over the past year and their answers to the suggested question. The session included representatives from the private sector, civil society, the technical community and academia.

    Speakers talked about the work they have done from different approaches to tackle several issues, like meaningful connectivity, youth engagement, and access within the Latin American and Caribbean region. They highlighted the importance of strengthening the multistakeholder model and analyzing the evolution of the regulatory framework in the region, noting the attempts of passing new regulations in Latin American countries that negatively impact the development of the Internet and the digital economy in the region.

    During the second part of the session, researchers supported by LACNIC and CETyS shared their projects and main findings. They addressed a wide range of topics like zero rating, regulatory challenges for the open Internet, and Internet access for indigenous communities. 

    The session once again brought together organizations from the Latin American and Caribbean Internet ecosystem to share new developments, exchange ideas and strengthen regional cooperation opportunities.

     

    3. Final Speakers

    Nigel Cassimire - CTU

    Olga Cavalli - SSIG

    Rodrigo de la Parra - ICANN

    Juan Carlos Lara - Derechos Digitales

    Alessia Zucchetti - LACNIC

    María Fernanda Martínez - CETyS

    Rocío de la Fuente - LACTLD

    Ernesto Rodríguez - Vice Minister of Communications, Cuba

    Emiliano Venier - Researcher

    Germán López - Researcher

    Daniela Cuspoca - Researcher

    Pilar Huppi Lo Prete - Researcher

    Daniel Triviño Cepeda - Researcher

    IGF 2023 Open Forum #135 Enhancing the digital infrastructure for all

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Digital upskilling is an enabler of digital development, underpinned by the G7/G20/ASEAN agendas. Increasing the number of digitally skilled people enhances the usage of digital infrastructure.

    ,

    Promote exchange among the public sector, practitioners, academia and the private sector to improve digital skills and mutual understanding and to create new services across sectors, which leads to innovation.

    Calls to Action

    Create a digital knowledge hub with the participation of multiple stakeholders to build an environment for innovation.

    ,

    Increase the mobility of human capital and knowledge regionally and globally to bring new synergies of innovation.

    IGF 2023 Networking Session #171 Fake or advert: between disinformation and digital marketing

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Leveraging Ethical Advertising - One insight emphasized the role of ethical advertising in supporting independent journalism. Collaborative efforts by governments, private sector, and civil society are proposed to create and enforce ethical advertising standards, ultimately contributing to a more reliable media environment.

    ,

    Bridging the Global South Gap - there are disparities in disinformation understanding between the global north and south. The session encourages a knowledge exchange, where global north regions can learn from the south's long historical perspective on disinformation, ultimately aiding the fight against disinformation. This collaborative learning could inform future strategies to tackle disinformation globally.

    Calls to Action

    South-North Knowledge Exchange - Recognizing the unique challenges faced by the global south regarding disinformation, the session urges the global north to engage in a mutual knowledge exchange. This exchange should involve sharing strategies, best practices, and expertise to combat disinformation and strengthen independent media. Collaboration should be fostered between academia, civil society, and governments to facilitate this exchange.

    ,

    Ethical Advertising Initiatives - The discussion underscores the need for concrete actions towards ethical advertising standards. Governments, the private sector, and civil society are urged to collaborate and establish a framework that enforces responsible advertising practices. This should be initiated within a specific timeframe to promote transparency and reliability in the digital advertising landscape.

    Session Report

     

    IGF 2023 Networking Session #171 Fake or advert: between disinformation and digital marketing


    The IGF 2023 Networking Session #171, titled "Fake or Advert: Between Disinformation and Digital Marketing," brought together experts from diverse sectors to delve into the intricate relationship between advertising and disinformation in the digital age. Organized by InternetLab, this session aimed to facilitate an open dialogue and exchange of insights on the challenges posed by disinformation in the advertising landscape, focusing on measures taken by social media platforms and governments to combat this complex issue. The session featured enlightening discussions and key takeaways from speakers representing various perspectives.

     This is the updated list and order of speakers:

    • Eliana Quiroz (Internet Bolivia)
    • Anna Kompanek (Center for International Private Enterprise - CIPE)
    • Herman Wasserman (University of Stellenbosch)
    • Renata Mielli (Comitê Gestor da Internet)

    Onsite Moderator: Heloisa Massaro

    Online Moderator: Laura Pereira

    Rapporteur: Alice Lana

    Format: Networking session

    Heloísa Massaro, the director of InternetLab, set the stage by emphasizing the importance of addressing disinformation in advertising. This session aimed to shed light on the complexities of this relationship, focusing on how advertising can either contribute to or mitigate the spread of disinformation.

    1. Eliana Quiroz (Internet Bolivia): Eliana highlighted the differences between countries, even within the global south, in their capacity to afford digital marketing campaigns. She emphasized the financial aspect of disinformation campaigns and how they impact public discourse. Eliana's perspective provided a valuable understanding of the disparities in digital marketing capabilities across regions.

    2. Anna Kompanek (Center for International Private Enterprise - CIPE): Anna brought a unique perspective from the local business community, emphasizing that not everyone from the private sector involved in discussions like these represents big tech. She stressed the impact of paying for adverts on disinformation and the quality of journalism in a country. Her company's report, created in collaboration with CIMA, highlighted the importance of ethical advertising to support independent journalism and media spaces. Anna's insights underlined the significance of ethical advertising for enhancing the public image of companies.

    3. Herman Wasserman (University of Stellenbosch): Herman discussed the disparities in understanding disinformation between the global north and the global south and how this impacts the conversation. He presented two critical points: firstly, disinformation has a long history in the global south, and secondly, there is a double threat to information landscapes, both externally and internally. Herman's perspective emphasized the historical context and the importance of understanding disinformation in different regions. He also highlighted the role of advertising in supporting small independent media outlets, especially in authoritarian regimes.

    4. Renata Mielli (Comitê Gestor da Internet): Renata focused on the disinformation industry and the dominance of digital platforms in the advertising market. She raised questions about the role of government and governmental regulation in the context of digital marketing and disinformation. Renata's questions encouraged thoughtful consideration of the state's role in regulating the advertising industry and social media platforms, and how this could contribute to a more responsible and reliable digital marketing environment.

    Q&A and Panelists' Responses:

    The panelists engaged in a Q&A session, where they fielded questions on topics such as elections, self-regulation mechanisms, and regulatory approaches. The speakers provided comprehensive and insightful responses, contributing to a deeper understanding of the issues at hand.

    In conclusion, the IGF 2023 Networking Session #171 provided a platform for a meaningful exchange of ideas on the complex interplay between advertising and disinformation. The speakers' diverse perspectives and the questions raised during the session demonstrated the need for collaborative efforts to address disinformation in advertising and foster a more responsible and reliable digital marketing environment. The session exemplified the IGF's commitment to promoting a better understanding of these multifaceted challenges and encouraging innovative solutions for a more informed and responsible information ecosystem.

    Key Takeaway 1: Leveraging Ethical Advertising - One insight emphasized the role of ethical advertising in supporting independent journalism. Collaborative efforts by governments, the private sector, and civil society are proposed to create and enforce ethical advertising standards, ultimately contributing to a more reliable media environment.

    Key Takeaway 2: Bridging the Global South Gap - there are disparities in disinformation understanding between the global North and South. The session encourages a knowledge exchange, where global north regions can learn from the south's long historical perspective on disinformation, ultimately aiding the fight against disinformation. This collaborative learning could inform future strategies to tackle disinformation globally.

    Call to Action 1: South-North Knowledge Exchange - Recognizing the unique challenges faced by the global south regarding disinformation, the session urges the global north to engage in a mutual knowledge exchange. This exchange should involve sharing strategies, best practices, and expertise to combat disinformation and strengthen independent media. Collaboration should be fostered between academia, civil society, and governments to facilitate this exchange.

    Call to Action 2: Ethical Advertising Initiatives - The discussion underscores the need for concrete actions towards ethical advertising standards. Governments, the private sector, and civil society are urged to collaborate and establish a framework that enforces responsible advertising practices. This should be initiated within a specific timeframe to promote transparency and reliability in the digital advertising landscape.

     

    IGF 2023 DC-DigitalHealth Conversational AI in low income & resource settings

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Generative AI is proven when it comes to its competence.

    ,

    We need to create evidence-based Generative Healthcare AI.

    Calls to Action

    We need to create a Generative AI Healthcare Governance Model and Consortium on Generative Healthcare AI

    ,

    Develop a project on Generative AI at the Dynamic Coalition, the tangible results of which can be demonstrated at the next IGF.

    Session Report

    Conversational AI in Low-Income & Resource Settings

    Date: 11th October 2023

    Time: 17.00 – 18.30

    Workshop room 9. Room C-2

    Session by DC-Digital Health

    Key Takeaways

    Dr. Rajendra Pratap Gupta (DC-Digital Health / India)

    • He talked about his triple 80A Rule (80A-80A-80A): 80% of people don't have 'Access' to healthcare; 80% can't 'Afford' healthcare; 80% have 'Acute' illnesses. Hence the need for the fourth 'A': Artificial Intelligence.
    • Conversational AI is important to achieve affordable and accessible healthcare.
    • Generative AI is based on data; however, we don't have clean data. Conversational AI won't solve problems until we have data.
    • Conversational AI chatbots scored 82% in the MRCGP exam compared to human physicians, who scored 72%, in a study of five years of exam questions (2012-2017).
    • AI in isolation will create more mistrust unless combined with blockchain (identity and reliability).
    • We should not get trapped by the private sector agenda - BIG TECH.
    • Looking forward to a day when we can use blockchain and AI to use WhatsApp for prescriptions. Unless we touch the lives of people living in LMICs, we will have failed in our duty on Responsible AI.
    • Technology is not a threat to jobs but a big threat to lack of competence. So, upskill and cross-skill—otherwise, not just technology, everything is a threat.
    • We need to create a Global Generative AI Healthcare Governance Model.

    Mr. Dino Cataldo DELL’ACCIO (UNJSPF / USA)

    • Identification of the End User is important- Response needs to be tailored and aligned to the end user's needs.
    • Multiple technologies together will be able to solve the problem: the joint functioning of AI, a probabilistic technology, and blockchain, a deterministic technology, can provide that level of support - certainty and reliability about the data.
    • Conversational AI needs to be trustable and people-friendly. It must be a mix of top-down and bottom-up approaches to develop this.
    • There is a need to consider that this technology can be used for good and bad.
    • A criterion must be developed to check its reliability and ensure it is human-centric.

    Mr. Sameer Pujari (WHO/ Geneva)

    • Focus on People and Not Technology
    • Technology has to be understood as an enabler
    • In low-income, resource-constrained settings there is a massive gap in healthcare providers, so the role of conversational AI is critical.
    • Technology getting cheaper will create a difference in these areas
    • Four areas – Equity, Sustainable Business models, How it benefits the user, how we take it to the user (+1 point)
    • We have to be cautiously optimistic and ensure there are appropriate safeguards when working with AI and for the development of the right products.
    • A detailed, systematic, and creative approach is required.
    • Technology must be adaptable to different countries and be human-centric at all levels.
    • Stick to guidelines rather than regulation? The impact of the solution will determine this.
    • If you don't regulate, it will cause more damage. There is a massive risk of misappropriation without the right regulations.
    • People are more concerned about money than health.

     

    Ms. Shawna Hoffman ( USA)

    • AI Chatbot was used to track essential items during COVID-19.
    • Remote and underserved areas need to be reached and included in the fold.
    • AI is in 90 different components; Conversational AI is one of them.
    • Conversational AI is consistent.
    • Fact-checking Conversational and generative AI is really important.
    • It can be a great educational tool. Conversational AI can be customized for local use.

    Mr. Sabin Dima ( Romania )

    • AI is the greatest tool ever created.
    • AI is not able to replace humans, but it helps with some skills.
    • Data; traceable data; ethics.

    • AI in healthcare will Democratize Access to healthcare
    • The AI model is efficient at encapsulating settings.
    • We have the technology here; the problem is not of not having the technology. We have to use this technology to research.
    • Sending voice messages in the voice of doctors to ensure that patients continue their treatment and not leave it in between. AI can be really helpful in aftercare.
    • We have everything; we just need to start doing it now.
    • Digital doctors for every village. One mobile phone for every village can be a starter to get everyone in the fold.

    Dr. Ashish Atreja (USA)

    •  Doctors don't give Time to explain.
    • We need to Unlock Care with Time and space (Provide care anywhere and don't need physicians)
    • Any solution, if validated, can become a global solution
    • Democratizing it completely
    • Use the model of Hybrid AI (Combine Physician centered care with Patient-centered care)
    • Inequity in people, states, organizations
    • Digital Divide to Digital Bridge (Leverage Tech to bridge the gap)
    • Using technology for the right use cases. - Put it under a scientific and evidence-based lens and demystify conversational AI.
    • Optimism should be combined with caution when dealing with AI.
    • The onus is on us to put science as the base to demystify healthcare rumours using conversational AI.
    • Creating AI models on closed-in sources like textbooks, verified data sets, and not the internet

    Dr. Olabisi Ogunbase (Nigeria)

    • Use of digital technology for patient engagement.
    • Objectives
      • Patient notification and information
      • Patient Education
      • Support System
      • Digital Patient Engagement
    • Conversational AI can make conversations with end users possible in real time, expediting the process of care delivery.

     

    Ms. Mevish Vaishnav (India)

    • Analyse conversations and create Generative Health AI
    • Starting point of Generative General AI
    • Driven by Generative AI, dependent on each other
    • Conversational AI will make patients believe that they are heard.
    • Forming a global Generative Health AI group will be a melting pot of all healthcare stakeholders.

     

    Rapporteur (online): Ms. Mevish P. Vaishnav

    Rapporteur (onsite): Ms. Priya Shukla

    The session concluded with a call by all the expert panelists to support tangible solutions at DC-Digital Health:

    a) We need to create a Global Generative Healthcare Consortium.

    b) A Generative Healthcare AI Governance Model.

    c) Develop a project on Generative AI at the Dynamic Coalition, the tangible results of which can be demonstrated at the next IGF.

    IGF 2023 Networking Session #145 Discussing Internet Governance research in time of crisis

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Internet Governance studies need to include a comparative dimension to allow for collaborative research. Developing a transdisciplinary approach is also of paramount importance from legal studies to computing science.

    Calls to Action

    Promote open and multistakeholder cooperation in research projects to embrace the complexity of digital technologies.

    Session Report

    Key takeaways

    Internet Governance studies need to include a comparative dimension to allow for collaborative research. Developing a transdisciplinary approach is also of paramount importance from legal studies to computing science.

    Call-to-action points 

    Promote open and multistakeholder cooperation in research projects to embrace the complexity of digital technologies.

    The session allowed for an open discussion and sharing of research programs (as well as future initiatives, grants, new degrees…) and opportunities to collaborate. The informal setting of the session was also helpful to facilitate such discussion.

     

    IGF 2023 WS #481 Barriers to Inclusion:Strategies for People with disability

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    The two key takeaways are as follows: 1. Policies for and around people with disabilities should be developed and included in regulatory acts. 2. Collaborations between app developers and content creators should be fostered.

    Calls to Action

    Calls to action: 1. People with disabilities must be given the chance to get involved in decision-making on issues pertaining to them. 2. Provision should be made to cater for each disability, since not all disabilities are the same.

    IGF 2023 WS #72 Defence against the DarkWeb Arts: Youth Perspective

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    1. The Dark Web cannot be regulated, as there is no single measure that can ban the whole Deep Web system.

    ,

    2. Governments should put great effort into creating more educational and explanatory activities for Internet users, especially young ones, because not many of them really know the difference between the Dark Web and the Deep Web. Such structures can be used for illegal and criminal purposes, but the Deep Web also offers a benefit for personal privacy.

    Calls to Action

    1. For tech companies: to develop technologies to protect people online and increase the level of cybersecurity.

    ,

    2. For governments and big corporations: not to restrict personal opinion online or abuse their power, because comprehensive information control causes Internet fragmentation.

    Session Report

    How dangerous the darkweb is for the average user and what benefits it can bring to society – these controversial issues were the main questions of the session.

    Milos Jovanovic: You need to understand that the darkweb is just part of the so–called 'deep web'. And when we talk about offenses on the Internet and the prohibited content that is posted on it, we must understand that all this is among us, in the 'big' Internet, not only in the 'dark' part of it. Therefore, it is extremely important to improve protection systems against all kinds of violations everywhere, throughout the online space, without limiting your efforts to just one part of it.

    The main association when people hear the word 'darkweb' is something bad, something related to cybercrime and people with bad intentions. But most people use browsers like Tor (the most well-known way to access the darkweb) not to visit sites located on the so-called darkweb, but for normal searches and for familiar news resources and social networks (some of which have a darkweb version) that are also available without special software. The main motivation is the desire for a secure connection.

    Izaan Khan: The fight against cybercrime in the darkweb is impossible without law enforcement access to the darkweb. Many criminals were caught by the police because of this. Therefore, of course, it is necessary to understand that everything has its pros and cons. A complete ban of the darkweb is impossible for the simple reason that a new technology will always appear in place of something forbidden, which will be even more difficult to keep track of than the existing products. Therefore, it is necessary to be able to find a balance between anonymity, freedom of speech and user safety.

    However, in the darknet, you still need to be doubly careful, since there you can undoubtedly stumble not only on scammers, but also on malware, Fifi Selby reminded. "The main problem is that users do not fully understand how to protect themselves from cyber threats. And some of them deliberately resort to using software to access the darknet like Tor in order to preserve their privacy in the online space. Therefore, the only way out is to create more secure applications so that people will trust accessing the network through a regular browser more than using encryption software. And this can only be done through joint efforts with specialists around the world," he said. 

    Gabriella Marcelja spoke about how developing technologies can help in the future with the fight against cybercrime in the darkweb. For example, AI and some programs can already detect money laundering schemes, as well as calculate patterns of criminals to help identify their repeated actions and identify them. Now many countries are focusing more on users, on biometric identification, although it seems that for Internet security we need to improve software and technical equipment, as this is a direct way to combat cybercrime.

    Pavel Zoneff, a representative of the Tor Project, also spoke at the session, assessing the darknet from the point of view of the most popular browser usually associated with it. He gave an interesting statistic: it turns out that only 1% of all Tor traffic goes to 'onion services', that is, pages located directly on the darkweb. The most popular resource visited by Tor users is Facebook. That is, in fact, Tor is used like any other browser. He also recalled that cybercrime does not exist only within the darkweb; it is also widespread on the 'clearnet', especially considering that the number of pages on the darkweb is in the thousands, while there are billions of sites across the Internet.

    In the end, the speakers came to the conclusion that it is necessary to do more cybersecurity education and training of new information security specialists in different parts of the world, rather than imposing prohibitions, since bans lead to the creation of other tools for anonymity on the Internet, which can be much more unpredictable than existing ones.

    IGF 2023 Lightning Talk #2 Successful Data and AI Strategies

    Updated:
    Data Governance & Trust
    Key Takeaways:

    Takeaway 1: During the Q & A, there was an inquiry about how OpenAI can have "data value", which is an interesting thought, since data strategies need to have data management in place, and with big data, classification of the data might be difficult and aligning the data with strategic objectives may not be clear. The answer that I gave was to prioritize the data for classification and define 1-2 "ethical" objectives to align with.

    ,

    Takeaway 2: After the Q & A, there was another inquiry about schools in Hong Kong that don't allow high school girls to use ChatGPT because of the risk of plagiarism. The two girls who asked wanted training on how to use ChatGPT. This raises a fundamental concern about the ethical use of AI in the education sector, especially if no clear global regulations/policies are in place or followed. Should education allow ChatGPT or prohibit it? A thought to ponder.

    Calls to Action

    Call for Action 1: It would be beneficial to do some research on how OpenAI developers, as the main stakeholders, use data strategy frameworks during their development work, and to make sure that their AI is aligned with an efficient data strategy that is put in place.

    ,

    Call for Action 2: In the education sector, research on the impact of using ChatGPT among students needs to be pursued in further depth by researchers. Some critical questions need to be answered, such as how Generative AI tools like ChatGPT affect the performance of learners and what policies/regulations need to be put in place to ensure the ethical use of AI among both youngsters and scholars in the education sector.

    Session Report

    This report addresses the feedback that I have received from the participants during and after the session.   

    There is a strong link between Data Strategy, AI Strategy and AI Ethics. It was interesting that some participants (from the education sector: Academy STEP Institute, Cambodia & Vietnam) requested a copy of my slides in order to educate and share the knowledge with their students in one of the schools. These participants mentioned that they found the slides enlightening and informative, since the slides contained clear, concise steps on how to build successful data and AI strategies and what questions need to be tackled and answered during this process, including Data Governance, Data Quality and AI Ethics/Trust. I was more than glad to share the slides with the Academy by email after the session, in order to facilitate knowledge sharing and to help the students in their learning.

    Another interesting inquiry, from another participant, was about how we can create "data value" for OpenAI, which is an interesting thought, since data strategies need to have data management in place, and with big data, classification of the data might be difficult and aligning the data with strategic objectives may not be clear. The answer that I gave was to prioritize the data for classification and define 1-2 "ethical" objectives to align with. In addition, it would be beneficial to do some research on how OpenAI developers, as the main stakeholders, use data strategy frameworks during their development work and make sure that their AI is aligned with an efficient data strategy that is put in place.
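    To make the "prioritize, then align" answer concrete, here is a toy sketch (my illustration, not something presented in the session) of one way to rank data assets for classification: score each asset on its sensitivity plus whether it supports one or two illustrative ethical objectives, and classify the highest-scoring assets first. All names, weights and objectives below are made up for the example.

    from dataclasses import dataclass

    @dataclass
    class DataAsset:
        name: str
        sensitivity: int              # 1 (low) .. 5 (high), e.g. personal data scores high
        supports_transparency: bool   # illustrative "ethical" objective 1
        supports_fairness: bool       # illustrative "ethical" objective 2

    def priority(asset: DataAsset) -> int:
        # Simple additive score: sensitivity plus a bonus for each ethical objective served.
        return asset.sensitivity + 2 * int(asset.supports_transparency) + 2 * int(asset.supports_fairness)

    assets = [
        DataAsset("customer_profiles", sensitivity=5, supports_transparency=True, supports_fairness=True),
        DataAsset("website_logs", sensitivity=2, supports_transparency=True, supports_fairness=False),
        DataAsset("marketing_copy", sensitivity=1, supports_transparency=False, supports_fairness=False),
    ]
    for asset in sorted(assets, key=priority, reverse=True):
        print(asset.name, priority(asset))  # classify the top of this list first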

    Moreover, there was a third inquiry from two schoolgirls who attended the session; they wanted to get some training on how to use ChatGPT, since one of my slides contained an example of how to get valuable information from it. The two girls mentioned that their school in Hong Kong doesn't allow high school girls to use ChatGPT because of the risk of plagiarism. The ethical use of AI in the education sector is a fundamental concern, especially since there are no clear global regulations/policies in place or currently followed. Should education allow ChatGPT or prohibit it? It's a thought worth pondering. I truly believe that, in the education sector, research on the impact of using ChatGPT among students needs to be pursued in further depth by researchers. Some critical questions need to be answered, such as: How do Generative AI tools, such as ChatGPT, affect the performance of learners? And what policies/regulations need to be put in place to ensure the ethical use of AI among both youngsters and scholars in the education sector?

    Finally, I would like to thank the UN IGF organizers and Japan for hosting this insightful forum, which provided perspective on many of the pressing topics in the world today related to Data Governance and AI. The discussions and speeches shared by all speakers were inspiring.

    Thank you very much and I am looking forward to future co-operation with UN IGF and other UN entities for the pursuit of: AI for Humanity.

    Best regards,

    Fadwa AlBawardi

    Saudi Arabia

     

     

    IGF 2023 Open Forum #82 AI Technology-a source of empowerment in consumer protection

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    AI can also be a source of empowerment in consumer protection and an ally of consumer agencies if responsibly deployed. Consumer law policymakers and enforcers must adapt to the rapid development of AI tools in e-commerce and put safeguards in place for consumers.

    Calls to Action

    In terms of AI, maintaining the status quo means staying behind the technology and the new risks it carries for consumers. AI initiatives of consumer agencies should be encouraged and further developed, and so should international collaboration among enforcers and policymakers.

    Session Report

    The round table session focused on presenting some of the AI initiatives and key areas of AI usage identified in relation to consumer protection. Speakers from international organizations (OECD, EC), national consumer protection authorities (ACCC, UOKiK), academia (University of Reading), and NGOs (Tony Blair Institute) discussed their current AI projects and expressed their opinions on the potential of AI in consumer protection.

    The session was opened with a brief introduction to consumer protection, followed by a presentation of academic research and perspectives on the use of AI in the Enforcement Technology (EnfTech) toolbox. The expert stressed that embracing the use of EnfTech is inevitable for enforcement agencies in order to keep digital markets in check. There are some discrepancies between consumer agencies, which start their technological reorganization at different levels, but the results of the EnfTech survey showed that even very little technology can do a lot for enforcement.

    Next, the global and European perspective on AI tools was brought to the table. The speakers discussed AI's impact on consumers, the e-tools developed by international organizations, and the opportunities for consumer protection enforcers in the deployment of AI. Some of the e-tools mentioned in the OECD presentation included the Polish system for intelligent contract terms analysis (ARBUZ), the fake reviews detector, Korea's Consumer Injury Surveillance System, as well as AI tools currently being developed to empower consumers to reduce energy consumption and make greener shopping choices. The OECD is also working on an AI Incidents Monitor for automatic global monitoring of AI hazards. The EC presented its eLaboratory and its main features that help collect evidence and conduct investigations; the eLab tools are available to CPC case handlers in the eLab environment. Another example of an intelligent tool used by the EC, in the 2022 'Black Friday sweep', is the price reduction tool, which covers different websites, languages and currencies, presents a graphic representation of a price's evolution and flags potential infringements. The EC also launched behavioural experiments with the use of AI to test the impact of online practices on consumers (e.g. labelling of advertising, dark patterns, use of cookies).
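    As a purely illustrative sketch of the reference-price logic such a tool relies on (this is not the EC's eLab tool), the snippet below flags a "sale" for review when the advertised "before" price is higher than the lowest price actually observed in the 30 days preceding the promotion. The price history, dates and the 30-day window are assumptions for the example.

    from datetime import date, timedelta

    def flag_price_reduction(history: dict, promo_start: date, advertised_before: float) -> bool:
        """Return True if the announced reduction looks potentially misleading."""
        window_start = promo_start - timedelta(days=30)
        prior_prices = [price for day, price in history.items() if window_start <= day < promo_start]
        if not prior_prices:
            return False  # nothing observed in the window, so nothing to compare against
        return advertised_before > min(prior_prices)

    # Example: the shop advertises "was 99.99", but the item sold for 79.99 two weeks earlier.
    history = {date(2023, 11, 1): 99.99, date(2023, 11, 10): 79.99, date(2023, 11, 20): 99.99}
    print(flag_price_reduction(history, date(2023, 11, 24), advertised_before=99.99))  # True -> flag for review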

    The session then focused on the presentation of specific tools and AI projects carried out by national consumer agencies from Poland and Australia. The Polish authority has recently implemented an AI-powered tool for the automatic analysis of contract terms and the detection of abusive clauses. The development of the ARBUZ system was co-financed by the EC. The tool assists case handlers and will be further trained under supervision. Another EC-funded project taken up by the Polish consumer agency focuses on dark patterns and the possible deployment of AI to detect them. The Polish experience with implementing AI in a consumer agency will be summarised in the form of a white paper. From the Australian perspective, attention was drawn to streamlined webform processing and manipulative design detection; the techniques in this area include entity extraction, classification and relevance. The expert also stressed the importance of data and of giving consumers more control over their data (Consumer Data Right).
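    To give a sense of what automatic clause analysis involves, the toy sketch below (an illustration only, not the ARBUZ system) trains a simple bag-of-words classifier on a handful of fabricated clause examples and flags new clauses as potentially abusive so that a case handler can review them. A real system would rely on a large, expert-annotated corpus and supervised retraining, as described above.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Fabricated training examples: 1 = potentially abusive, 0 = neutral.
    clauses = [
        "The seller may change the price at any time without notifying the consumer.",
        "The consumer waives all rights to bring any claim against the company.",
        "Delivery takes place within 14 days of the order being confirmed.",
        "The consumer may withdraw from the contract within 14 days.",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    model.fit(clauses, labels)

    new_clause = "The company may terminate the service at any time without giving reasons."
    prob_abusive = model.predict_proba([new_clause])[0][1]
    if prob_abusive > 0.5:
        print(f"Potentially abusive (score {prob_abusive:.2f}); route to a case handler for review")
    else:
        print(f"Looks neutral (score {prob_abusive:.2f})")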

    A different ankle insight into AI in consumer protection was also delivered by an NGO. The recent Tony Blair Institute for Global Change report calls for a new policy agenda for the UK and the reorganization of institutions to deal with technological change. During the session, some of the governance challenges were discussed. The presentation also covered the adoption of computational antitrust by agencies in Finland and the UK using a supervised machine learning approach. The examples demonstrated that the cartels could have been caught using statistical methods. The presentation concluded with policy questions for AI deployment, including ethics of the algorithm, human oversight and LLMs.

    The session was summed up with conclusions from the speakers on the future of AI in consumer protection. Businesses move quickly with the deployment of AI technology, but enforcement agencies have to step into this game towards a more proactive approach. The priority of consumer agencies should be collaboration, development of AI tools and increased use of technology in the course of investigations. The potential and the limitations of AI should also be thoroughly analyzed to make better use of the technology for consumer protection.

    IGF 2023 WS #339 Increasing routing security globally through cooperation

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Routing security is increasingly becoming an area of policy concern. Policy makers and decision makers in industry need greater awareness about how and why they can implement Resource Public Key Infrastructure (RPKI) and other measures. Implementation of routing security measures and collection of data come at a certain cost, and a challenge to address is that actors might not see direct benefits immediately, even though in the long term it proves beneficial.

    Calls to Action

    Improving routing security requires collective action. It’s vital to ensure that as many networks as possible around the world act for the collective good and adopt routing security best practices and tools like RPKI. It is in their own best interest too, to adopt these measures, as it helps secure the availability and reliability of online services provided, and also helps avoid reputational damage when they are affected by routing incidents.

    ,

    Increasing adoption of RPKI and routing security practices requires better data gathering and more resources. We need more support through incentives, and to further develop tools and build awareness, to help networks adopt these practices. RPKI prevents some incidents but does not protect against all route leaks. Autonomous System Provider Authorization (ASPA) needs further examination in this context.

    Session Report

    The workshop on “Increasing routing security globally through cooperation” aimed to examine the gaps and obstacles hindering the adoption of routing security measures such as RPKI. The speakers shared insights into the need for RPKI, global adoption rates, and the role of policy makers.

    Bastiaan Goslings, RIPE NCC, provided background information on the technical aspects of the Border Gateway Protocol, how and why routing incidents can take place and how routing security measures such as Resource Public Key Infrastructure (RPKI) can help protect networks. He explained the role of the Regional Internet Registries and provided data on RPKI adoption.
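    To make the mechanism concrete, the minimal sketch below reproduces the route origin validation outcome (valid / invalid / not found) that RPKI-based filtering relies on, in the spirit of RFC 6811. The ROA entries are illustrative, not real registry data; a production validator fetches and cryptographically verifies ROAs published through the RIRs.

    import ipaddress

    def validate(prefix: str, origin_as: int, roas: list) -> str:
        """Return 'valid', 'invalid' or 'notfound' for a (prefix, origin AS) announcement."""
        route = ipaddress.ip_network(prefix)
        covered = False
        for roa in roas:
            roa_net = ipaddress.ip_network(roa["prefix"])
            if route.version == roa_net.version and route.subnet_of(roa_net):
                covered = True  # at least one ROA covers this announcement
                if roa["asn"] == origin_as and route.prefixlen <= roa.get("max_length", roa_net.prefixlen):
                    return "valid"
        return "invalid" if covered else "notfound"

    # Illustrative ROA set (not real registry data).
    roas = [{"prefix": "192.0.2.0/24", "asn": 64500, "max_length": 24}]
    print(validate("192.0.2.0/24", 64500, roas))     # valid
    print(validate("192.0.2.0/24", 64501, roas))     # invalid: covered, but wrong origin AS
    print(validate("198.51.100.0/24", 64500, roas))  # notfound: no covering ROA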

    Verena Weber, OECD, spoke about the OECD’s studies on security and the digital economy, one of which focused on routing security. She mentioned government and agency studies looking into routing security, such as in the United States (2022 and 2023), Sweden (2020-2022), and by ENISA (2019). She listed four pillars of action for governments to consider:

    ·      Promote measurement and collection of routing incidents

    ·      Promote deployment of good practices

    ·      Facilitate information sharing

    ·      Define a common framework with industry to improve routing security

    Annemieke Toersen, Netherlands Standardisation Forum, shared the practices followed to ensure greater uptake of standards. One measure they use is “Comply or Explain”: a list of standards that all government entities are required to follow, or else explain why they have not complied. The Netherlands Standardisation Forum also cooperates with vendors to educate them, shares documentation for procurement, and monitors to what extent its standards are followed in the procurement process. The standards are openly available via Internet.nl, which includes a Hall of Fame, as their preference is to acknowledge good behaviour over naming and shaming. She also shared information on how the Netherlands Standardisation Forum cooperated with the RIPE NCC to run training courses and raise awareness about routing security.

    Katsuyasu Toyama, JPNAP, shared insights into how JPNAP encouraged different network operators to deploy routing security measures. He recommended that national industry associations play a role in advocating the use of routing security measures; that RPKI implementation specifications should be updated and improved; and that more work should be carried out on Autonomous System Provider Authorization (ASPA).

    Workshop attendees then discussed possible incentives for operators to deploy RPKI, the gap between the technical personnel who need to implement measures and managers who might not understand the need for them, the need for greater recognition when operators act in the collective interest (for example the MANRS programme) and the need to ensure that there are enough resources available to the organisations that collect data and develop tools for routing security.

    IGF 2023 WS #288 A Global Human Rights Approach to Responsible AI Governance

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    (1) The need for a human rights approach to AI governance to be accompanied by capacity-building efforts, to ensure all stakeholders understand and can therefore support such efforts; (2) the need for Global Majority voices to feed into global forums on AI governance and to ensure that frameworks respond to local concerns.

    Calls to Action

    (1) The need for the Hiroshima Process to give greater consideration to human rights; (2) the need to assess the impacts and utility of regulatory frameworks once they are enacted.

    IGF 2023 Launch / Award Event #8 ISIF Asia 2023 Awards

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

    Technical staff working in Internet development face geopolitical barriers in their efforts to improve Internet speeds. For example, in Pakistan the submarine cables all land in the south of the country and are not linked to neighbouring countries. Staff working in policy and political areas should work with technical staff to improve this situation.

    ,

    Internet exchange points (IXPs) link content providers (like social media companies) and Internet Service Providers (ISPs). Without content providers participating, an IXP cannot attract ISPs, and without ISPs it cannot attract content providers. But content providers are large companies, and getting their participation can be challenging for IXPs trying to improve Internet conditions. Finding ways to bring them to the table would help.

    Calls to Action

    Governments and organizations with significant policy influence should assist Internet Exchange Points and small operators to negotiate with governments and discuss issues related to Internet traffic.

    ,

    It can be difficult for Internet Exchange Points to negotiate with large content providers, so governments should assist in negotiations to have content hosted locally. Content providers such as social media companies should also proactively reach out to fledgling Internet Exchange Points to help them improve Internet speeds in their countries.

    Session Report

    The ISIF Asia awards recognized the contributions of organizations supporting the development of Internet Exchange Points (IXPs). These IXPs play an important role in directing web traffic around the world.

    There were three awardees:

    The Lahore University of Management Sciences (LUMS) for their work on software-defined IXPs – the award was accepted by project lead Zartash Afzal Uzmi.

    The Myanmar Internet Exchange (MMIX) for their work on supporting networks in Myanmar – the award was accepted by project lead Thein Myint Khine.

    The University of Malaya, for the design, development and operation of an SDN-based Internet eXchange playground for Networkers – the award was accepted by project lead Dr Ling Teck Chaw.

    The video of the session is available here:

    https://www.youtube.com/watch?v=kC6p73CvtyI

    The awardees raised some key themes:

    • There are a lot of challenges around negotiations between content providers (such as large social media companies) and the ISPs who peer at the IXP. One will not join without the other side being there. So how do they start?
    • If larger institutional players, such as content providers and governments, can be brought in to help encourage content providers to peer there, it would be of assistance to these IXPs. This makes a difference in Internet speeds for these countries.

    The ISIF Asia awards are presented by the APNIC Foundation, and they recognize the use of innovative technologies to support Internet development in the Asia Pacific.

    The individual videos for each of the three awardees can be seen here:
    https://apnic.foundation/announcing-the-isif-asia-awards-2023/

     

    IGF 2023 Day 0 Event #196 Leave No One Behind: The Importance of Data in Development.

    Updated:
    Data Governance & Trust
    Calls to Action

    Leave No One Behind: The Importance of Data in Development.

    Session Report

     

    KEY TAKEAWAYS

    • How data can be properly governed and protected
    • Ways of ensuring inclusivity in data governance
    • Africa is being left behind in discussions on connectivity
    • Data is expensive, which prevents people from contributing to the data pool and from getting connected
    • Trying to close the gap to ensure an inclusive internet
    • Individuals do not have a digital footprint because they lack the funds to purchase smartphones or the data to access the internet
    • Provide cost-effective generic devices so that the less privileged can also afford to connect with the world
    • The need to understand diversity and the need for data
    • How do we transform people by giving them access to data?
    • Transform the ecosystem of less privileged communities to get them connected to the internet
    • Have stronger lead institutions that bring communities and women together
    • Having indigenous knowledge to leverage
    • Development is a major challenge
    • Identifying the type of data that is needed
    • Issues of capacity, knowledge and skills in using the infrastructure

    IGF 2023 WS #403 Safe Digital Futures for Children: Aligning Global Agendas

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Data, evidence and knowledge-sharing are key to overcoming ideological, political, sectoral and structural silos in global digital agendas (cybersecurity, child online safety, tech-facilitated gender-based violence, digitization, connectivity, etc.), particularly when it comes to placing children's safety at the heart of those agendas.

    ,

    Applying a vulnerability lens across all areas of work related to digital development and internet governance is essential for ensuring an inclusive, safe and secure online world, particularly for children who make up 1/3 of the global internet users.

    Calls to Action

    We are calling for more investment in child online safety across the entire ecosystem, with particular focus on capacities of low and middle income countries, as well as more upstream and collaborative action to ensure that safety by design for all is a key consideration of digital development.

    ,

    In order to make progress across all agendas, we must have an intentional and rigorous focus on the participation of people with lived experience and children, to ensure that we are not further exacerbating and augmenting existing vulnerabilities, harmful gender norms or expressions of violent behaviour and power dynamics through the online world.

    IGF 2023 Town Hall #21 Towards Ethical Principles for Responsible Technology

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    It is important to look at how we develop, invest in, deploy and regulate technology across the innovation cycle. We need collective thinking and alignment on a set of common issues that need to be addressed. It is crucial to embrace a borderless, global perspective while respecting cultural differences. Moreover, prioritizing principles such as security and privacy through an 'ethical by design' approach upstream in the development process is essential.

    ,

    The emphasis on human-centric, rights-oriented frameworks has underscored the need for solutions that are not only ethically sound but also practical and implementable. The principles are on the table; the next step is their effective implementation. This requires a comprehensive risk management framework for AI and the establishment of agile, multistakeholder governance mechanisms.

    Calls to Action

    The accelerating pace of technological advancements necessitates a paradigm shift in governance models and a mindset of responsible innovation that accepts that all stakeholders (developers, investors, startups, incumbents, users and policymakers) operate with high levels of uncertainty with regard to new technologies. Instead of defaulting to regulation as the sole solution, it is imperative to explore more agile, flexible, and distributed governance approaches.

    ,

    
    The concept of treating users as consumers, thereby granting them agency in the marketing data market, is a powerful avenue for empowerment. Another priority is establishing multistakeholder platforms for trend detection, information sharing, agenda setting and sense-making. By convening diverse groups across policymakers, technologists, investors, businesses, civil society and academia, we can elevate ideas, drive actions, and address the complex issues surrounding emerging technologies.

    Session Report

    In addition to these takeaways and action points, the Collingridge dilemma reminds us that innovation inherently carries unforeseen consequences. It is vital to consider the potential impacts of new technologies and to implement prospective, rule-based and intermediate rules (soft law) developed by stakeholders beyond governments. Balancing the need for innovation with the imperative to protect citizens remains central.

     

    Furthermore, the operationalization of ethical principles, as exemplified by the Project Liberty and Aspen Digital initiative, is pivotal. This initiative involves embedding ethics into infrastructures, fostering multistakeholder collaboration across 5 continents, and ensuring broad participation of over 200 stakeholders. This concerted effort aims to translate principles into processes and recommendations, with a draft document set for release in December.

     

    Ultimately, the evolving landscape of emerging technologies such as immersive technologies and the metaverse does not necessarily require reinventing policies. Instead, it calls for a concerted effort to translate existing principles into actionable measures. This transformative journey requires not only regulatory intervention but also a fundamental shift in corporate governance models and stakeholder mindsets. The OECD Global Forum on Technology endeavors to achieve this by convening diverse stakeholder groups, fostering the identification, analysis, and exploration of crucial gaps in emerging technologies. Aspen Digital undertakes an analogous initiative, uniting cross-sectoral groups to facilitate the exchange of information and collective understanding. Noteworthy examples include its commissions on information disorder, as well as on AI's impact on labor markets, elections, and trust. The Global Initiative for Digital Empowerment shares a similar mission with the OECD Global Forum on Technology and Aspen Digital, aiming to convene diverse stakeholder groups to empower users and their data.

     

    In conclusion, the path towards ethical governance of emerging technologies demands a multifaceted approach, involving collective alignment on values, agile governance, user empowerment, and robust multistakeholder collaboration. By implementing these recommendations, we can navigate the challenges of innovation while upholding ethical standards in the digital age.

    IGF 2023 Lightning Talk #115 AI-Driven Learning Revolution in Cambodian Higher Education

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    1. Most AI-powered tools are designed primarily for dominant languages, such as English, and often underperform when applied to low-resource languages with limited available data. It's crucial to note that many students and lecturers in Cambodia do not speak English and, consequently, are unable to utilize these technologies. Prioritizing the development of AI tools for low resource languages is essential for wider accessibility.

    ,

    2. AI-powered chatbots, which assist in generating content for students and lecturers, provide varied answers according to the prompt. Teaching students how to effectively use prompts to interact with these tools is also essential.

    Calls to Action

    1. Technical groups should consider prioritizing support for low-resource languages to benefit all users.

    ,

    2. Educational organizations should consider incorporating instruction on how students can effectively use prompts to interact with AI tools, ensuring the generation of the best possible responses.

    IGF 2023 Launch / Award Event #169 Design Beyond Deception: A Manual for Design Practitioners

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    To address dark patterns and their associated harms, multi-stakeholder intervention is necessary. This requires engagement with a wider community of researchers, designers, technologists and policymakers on deceptive design as a crucial internet governance issue in order to ensure a safe and trustworthy internet for all. This will make digital rights like privacy actionable, and ensure consumers are protected online.

    Calls to Action

    As technologies evolve and newer interfaces emerge, deception can take different forms. Adopting principles of ethical and human-rights-centered design is crucial while building new technologies including AI-generated interfaces, AR/VR etc. Regulatory measures mustn't limit themselves to existing interfaces and taxonomies but instead locate deception within human-technology interaction to design a collective future that is beyond deception.

    IGF 2023 Lightning Talk #57 Digital taxes & reprogramming value in the network society

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:
    The architecture of the internet is not stable, but in constant flux. The shape of the internet responds to the program that defines what is valuable (Castells, 2008). Thus, the idea that there is an ongoing process of active re-networking is more accurate than framing such a process as one of fragmentation, which has connotations of a passive occurrence and the idea that something is lost.

    There are two key ways in which reprogramming and the resulting re-networking take place: i) through direct pressure (e.g. governments imposing sanctions on companies or governments, which I discussed at IGF 2022 and then published as a paper: https://journals.sagepub.com/doi/epdf/10.1177/20594364221139729); and ii) through a coordinated shift in incentives, the greatest example of which is the proposed global tax being advanced by the OECD.

    Calls to Action

    Establish an IGF ambassador to participate in these OECD meetings, charged with assessing the potential policy impacts of the proposed program through a policy, politics and geopolitics lens, with particular focus on low- and middle-income countries and on progress towards the achievement of the SDGs.

    ,

    Establish a committee to define what data should be collected to assess progress towards such goals, and what stop-gap corrective mechanisms might need to be introduced into the OECD project to ensure that it does indeed advance the public interest.

    IGF 2023 WS #564 Beneath the Shadows: Private Surveillance in Public Spaces

    Updated:
    Data Governance & Trust
    Key Takeaways:

    The discussion should shift from the nature of data collected, be it biometric or other, to the critical question of data control. Emphasizing who governs the data is paramount for privacy and human rights. Civil society and academia should focus on uncovering data flows and ensuring accountability of both the private sector and government.

    ,

    Urgent international efforts are required to establish robust privacy regulations globally for private surveillance in public spaces, safeguarding human rights and ensuring transparent data sharing. Genuine collaboration among governments, civil society, and the private sector is essential for the formulation and enforcement of these regulations.

    Calls to Action

    Civil society organizations, academia, and the private sector have to collaborate on research and advocacy efforts, with a focus on uncovering the flow of data and holding both private surveillance companies and government authorities accountable for their actions. This collaborative approach will help build a more transparent and just environment for private surveillance in public spaces.

    ,

    Governments and regulatory bodies must proactively develop and implement comprehensive privacy regulations for private surveillance in public spaces. These regulations must address data control, transparency in data sharing, and protection of human rights. It is imperative that these stakeholders work collectively to ensure the proper oversight and enforcement of these regulations to safeguard individual freedoms.

    IGF 2023 Town Hall #74 Internet fragmentation and the UN Global Digital Compact

    Updated:
    Avoiding Internet Fragmentation
    Key Takeaways:

    1. The technical community needs to proactively engage with government stakeholders, including the foreign affairs diplomatic corps, if its perspective is to inform the GDC process.

    ,

    2. Key messages with updated narratives are required for the technical community to communicate assertively when engaging in the GDC process.

    Calls to Action

    1. Proactive engagement by the technical community with diplomatic officials participating in the GDC negotiations.

    ,

    2. Cooperation among the different bodies in the technical community ecosystem is needed to develop clear and refreshed narratives that are conducive to their perspectives being reflected in the GDC process.

    Session Report

    The session expanded on the role of the technical community in the GDC process. The Internet is a network of networks and a system of trust. The discussion focused on how to protect this Internet technical layer and its interoperability, without endangering or diluting that trust when it is politicised in the context of geopolitical fragmentation.

    Discussants talked about the status of work in the PNIF (Policy Network on Internet Fragmentation) – fragmentation at the technical layer and fragmentation of the user experience. Fragmentation of the institutions where Internet governance is being discussed was also raised.

    When perspectives are overly technical, they tend to exclude government stakeholders that might not have the specialised technical expertise.

    Q&A

    There was concern and confusion around why the technical community seemed to have been removed from the stakeholder groups that should be consulted as part of the multistakeholder process of the GDC. Historically, the Tunis Agenda had referred to and discussed the technical community specifically.

    Apart from highlighting its role in multistakeholder participation, the discussants also raised that it was important for the technical community to consider what it can contribute to the discussions in the GDC. There is a new demographic of diplomatic officials who are now engaged in the negotiations phase and were not present during the WSIS discussions that were initiated in 2003/2005. This new demographic would require regular and constant engagement from the technical community to inform their negotiations of the GDC.

    Discussants also raised the need to bridge the technical with the policy perspectives, since the GDC is an intergovernmental process. Hence, there was a call for the technical community to be more involved. It is urgent to develop the key messages and updated narratives that the technical community wants to convey, so that governments understand how the Internet infrastructure can be used to its full potential.

    auDA roadmap for Internet governance. http://www.auda.org.au/about-auda/internet-governance-and-public-policy/audas-internet-governance-roadmap-2023-2025

    https://blog.apnic.net/2023/08/21/the-global-digital-compact-a-top-down-attempt-to-minimize-the-role-of-the-technical-community/ 

    IGF 2023 DCAD Re-envisioning DCAD for the Future

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    1) Participants in this session suggested several accessibility improvements to the IGF site and to how persons with disabilities can participate fully in all sessions.

    ,

    2) Increase awareness among staff and the community of how to create accessible sessions. Participants noted that many of the accessibility problems raised today are not new; people have been commenting on the same issues for years, and participants were somewhat frustrated by how little has changed.

    Calls to Action

    The IGF Secretariat should create an Accessibility section of the website letting people know what to expect when they are participating either online or in person.

    ,

    There is a need to work more closely with the Secretariat to ensure that accessibility standards are adhered to, and to have the Secretariat hire an accessibility testing firm to conduct accessibility tests on the IGF site and correct the deficits that have become barriers to persons with disabilities.

    Session Report

    The session had 30 people in attendance. The two Coordinators opened the Meeting and introduced the disability fellows and the speakers for the session.  We reviewed the activities that DCAD had done throughout the year and focused our efforts on ways to improve the accessibility of online events and how to ensure that persons with disabilities, no matter where they live, can actively participate in all discussions.

    Several of the speakers remarked on how little things have changed over the years, but others noted various improvements and stated that, while the changes may seem very small, we are making an impact. We are working within the system to improve it from above and we are gaining traction. This year the IGF was able to hire some additional full-time staff who have expressed a real interest in improving the accessibility of the IGF, and change is indeed happening.

    Participants discussed the issues they have with accessibility at the IGF and the registration process and suggested improvements.  These were some of the main improvements suggested.

    1. Create an Accessibility statement on the IGF main page. This page would list the commitments the IGF has made to make the website accessible, detail what apps, plugins and other tools the site is using and how they comply with accessibility requirements, and tell people attending the IGF in person or remotely what they can expect. For example,
      1. If images are used, these would be described fully, so that all can understand what they are trying to convey.
      2. What apps or plugins are available, and what accessibility level and standards do they meet?
      3. If pictures are used to show what food, snacks or lunch breaks include, the content will be described in text rather than provided only as a picture.
      4. More description of what to expect in a session as regards accessibility issues.
      5. Ensure the website is fully accessible and that standards are met.
      6. Ensure that the interactive session map/guide is published much earlier, and/or state what level of accessibility the downloadable Excel files have.
    2. Improve accessibility of each session offered during the IGF
      1. Add separate screens to each room so that one screen can be devoted to captioning and one screen for the speakers and remote speakers
    3. Ensure the IGF website is developed to be accessible at the start

    When developing websites, WCAG standards should be implemented and certified, following the relevant guidelines and recommendations, especially regarding the use of screen readers by the visually impaired community.

      1. If the website is contracted out to another firm, what are its accessibility ratings and which company conducted the accessibility testing?
      2. Have an accessibility testing firm certify that the website is accessible; content is often added at a later stage without a standard being applied, so an accessible website can quickly become inaccessible.
    4. Increase awareness among staff and the community of how to create accessible sessions.
    5. Ensure that registration for the IGF is accessible to persons with disabilities and that the schedule is accessible.
    6. Ensure the IGF website follows web standards, as this is not the case with the current one.
      1. The website is very problematic: users have to log in several times before the system responds, and when adding a session the website logs them out and they have to log in again. This is very difficult for people in general and adds significant barriers to people engaging and taking part.
    7. Increase awareness among other groups of how to make the images, photos and other information they present more accessible.

    Costs should not be the deciding factor in whether sessions are made accessible.

    Some of the visually impaired participants suggested that navigation of the building be improved so that it could be more easily navigated by persons with visual impairments, possibly including tactile mapping or better maps.
     

    Speakers told the audience about the problems they had with navigating the IGF and participating. They also described the importance not only of having an accessibility law but also of enforcing that law.

    Vivien talked briefly about community grants that .NZ is offering to the accessibility community, and acknowledged that .NZ is just beginning this journey. Dr Shabbir stated he was keen to see other ccTLDs and registrars adopt this type of giving back to the community.

     

     

     

     

    IGF 2023 Town Hall #80 How Submarine Cables Enhance Digital Collaboration

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    1) Submarine cables can act in themselves as research infrastructures, serving as sensor networks that pick up information from the seabed, such as temperature data and audio data.

    ,

    2) Submarine Cables can serve as a diplomatic tool to enhance collaboration between regions of the world.

    Calls to Action

    1) The EC and the Ministry of Internal Affairs and Communications of Japan, which signed the Memorandum of Cooperation to support secure and resilient submarine cable connectivity, must swiftly develop the indicative joint support actions set out in the document.

    ,

    2) Public sector NRENs/RENs can act as a conduit between governments, funding agencies and the commercial sector, whilst defending the interests of research and education (R&E). National governments should be made aware of this ability.

    IGF 2023 WS #327 Advocacy to Action: Engaging Policymakers on Digital Rights

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Multi-Level Engagement for Digital Policy: Civil society organizations should leverage local governance as a testing ground for digital policies, especially in countries with bureaucratic hurdles at the national level. This approach can create a ripple effect, inspiring other municipalities and eventually influencing national policy.

    ,

    Evidence-Based Advocacy with Parliamentary Allies: To effectively influence digital governance, civil society must focus on evidence-based advocacy and identify parliamentary allies who understand the nuances of digital rights. This dual approach can lead to more robust and informed legislation.

    Calls to Action

    Call to Action for Parliamentarians and Civil Society: Parliamentarians should form digital rights forums for evidence-based advocacy. Civil society must provide comprehensive research to support these efforts.

    ,

    Call to Action for Local Governments and Civil Society: Local governments should pilot digital policies in collaboration with civil society. These can serve as models for national policies.

    Session Report

    Session title: Advocacy to Action: Engaging Policymakers on Digital Governance

    Date: Tuesday, October 10th, 2023

    Time: 8:30-9:30 (Japan Standard Time, UTC +9)

    Workshop Organiser: Center for International Media Assistance (CIMA), USA

    Chairperson/Moderator: Nicholas Benequista, CIMA, USA

    Rapporteur/Note Taker: Daniel O'Maley, CIMA, USA

    List of Speakers and their institutional affiliations:

    • Honorable Sarah Opendi, Uganda Parliament, African Parliamentary Network on Internet Governance (APNIG), Uganda
    • Fernanda Martins, InternetLab, Brazil
    • Lisa Garcia, Foundation for Media Alternatives (FMA), Philippines
    • Camilo Arratia Toledo, InternetBolivia & Open Internet for Democracy Leaders

    Key Issues raised:

    • Importance of evidence-based advocacy
    • Role of local governance in digital policy
    • Need for parliamentary allies in digital rights
    • Challenges of digital literacy among policymakers
    • Importance of multi-stakeholder engagement

    Presentations:

    Each panelist presented their perspectives on engaging policymakers in digital governance. They shared case studies, challenges, and strategies from their respective countries, emphasizing the need for evidence-based advocacy, local governance initiatives, and parliamentary allies.


    Discussions during the workshop session:

    The panelists discussed the importance of evidence-based advocacy, with Lisa Garcia sharing a case study where a collective effort led to a presidential veto on a SIM card registration law. Fernanda Martins emphasized the changing political landscape in Brazil and its impact on digital rights legislation. She also discussed the role of coalitions in monitoring legislative movements and the importance of having allies in parliament who understand digital rights.

    Honorable Sarah Opendi talked about the role of parliament as a bridge between the public and the executive, noting the lack of digital literacy among parliamentarians. She mentioned plans to create an unofficial parliamentary forum on internet governance in Uganda. Camilo Arratia Toledo advocated for the role of municipalities in digital policy, stating that local governance can serve as a testing ground for policies that can later be scaled nationally.


    Participant suggestions regarding the way forward / potential next steps / key takeaways:

    Participants suggested that civil society organizations should focus on local governance as a starting point for digital policies, especially in countries with bureaucratic hurdles at the national level. This approach can create a ripple effect, inspiring other municipalities and eventually influencing national policy.

    Another suggestion was for civil society to focus on evidence-based advocacy and identify parliamentary allies who understand the nuances of digital rights. This dual approach can lead to more robust and informed legislation. Participants also emphasized the need for multi-level engagement, from local communities to national and international policymakers, to influence digital governance effectively.

    The session concluded with two Calls to Action: one urging local governments and civil society to collaborate on piloting digital policies, and another calling on parliamentarians and civil society to form digital rights forums for evidence-based advocacy.

    IGF 2023 DC-Jobs Internet for All To Livelihood for All

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Internet for All is a must-have to provide internet access to the 2.6 billion people currently left out of the internet.

    ,

    We need to converge probabilistic technology (artificial intelligence) and deterministic technology (blockchain) to leverage the full potential for jobs and to build trust. Internet etiquette needs to be created.

    Calls to Action

    We need to create a business model that ensures the viability of providing internet for all, and we must accomplish this before the end of this decade.

    ,

    We have to create an engaging model for providing financial and digital literacy for all.

    Session Report

    The session started with the session Chairman, Dr. Rajendra Pratap Gupta, Chair of Dynamic Coalition on Internet & Jobs, unveiling the Project CREATE and demonstrating how technology can be used to create livelihoods by leveraging the technology ecosystem. 

    This was followed by a panel discussion among the distinguished panelists, with contributions from the audience.

     

    Internet for All to Livelihood for All

    Session by DC-Jobs

    Key Takeaways - Speaker Wise

    Dr. Rajendra Pratap Gupta

    • Not having the internet is worse than being physically/mentally handicapped.
    • At the current speed at which the internet is reaching people, it will take 25 years more for the internet to reach everyone.
    • People should be at the center of all we do.
    • We must choose between a small number of large companies and a large number of small ones. However, there is room for everyone to co-exist.
    • 2.6 billion people don’t have access to the internet at all.
    • Ensure internet for all to give livelihoods for all.
    • Tech-enabled ecosystem needs to be developed to achieve this.
    • We need to create financial and digital literacy to achieve this goal.
    • Can the internet for all be a reality?
      • 2.6 billion people will add immensely to the economy.
      • It is not about productivity or profits; it is about people.
      • The Internet can be made freely accessible:
        • Advertisement-based - completely free.
        • Subscription-based – ad-free, for a fee, for those who can afford it.
    • LMICs should be at the centre of action, not just footnotes at international forums. They are still aid seekers, and we must make them growth creators through Project CREATE.
    • For some, it is not about 5G or 6G but just access to some basic internet service.
    • We need to converge probabilistic technology (Artificial Intelligence) and Deterministic technology (Blockchain) to leverage the full potential for jobs and to build trust.
    • Internet etiquette needs to be created.

    Mr. Dino Cataldo DELL’ACCIO

    • Before considering technology's risks and benefits, we need to think about accessibility.
    • To get access, you need to be recognized.
    • Affordability is the next step after accessibility.
    • Blockchain has played a major role in making technology accessible and affordable.
    • The issue of security and privacy is also very important to focus on.
    • Blockchain is a new form of social contract.
    • If developed with enough guardrails, technology can be a game changer for all.
    • AI is probabilistic and Blockchain is deterministic. Blockchain can be used to create a trustworthy data source, which can then be fed to AI Models. This is how both things can work together. 
    • We need to identify what each individual needs by using blockchain and AI.
    • We need to be more granular and have a more human-centric approach.
    • We need to identify the common denominators and common systemic issues to tackle this issue.

    Mr. Gunjan Sinha

    • The Internet is going through a revolution and an evolution thanks to AI.
    • AI should not create billions for a few but create billions of jobs.
    • Recommendations
      • Action-oriented - digital literacy and financial literacy as in Project CREATE. We need to learn from the Big Techs - they know the psychology of internet users. Users are getting addicted to the internet. We should consider how learning can be made addictive, just like social media. Learning should be more instant.
    • On one axis is accessibility, and on the other is how to make learning on the internet an addictive thing.
    • Making the internet addictive for good (Using gamification). Byte-sized content for learning.
    • Common people need to be addicted to learning via the Internet. If everyone learns about blockchain and AI, a lot can be achieved.
    • AI can be used to tailor content and make it personalized for every user.
    • The top 500 universities must contribute an amount towards Universal Literacy at scale. It is their civic responsibility.

    Ms. Connie Man Hei Siu

    • There exists a major digital divide.
    • Technology has the potential to reach remote areas, which can make the dream of giving everyone access to the internet achievable.
    • Economic barriers can hinder the process of making the Internet accessible to all.
    • There are various other complex issues - cybersecurity, data privacy, etc.
    • Youth organizations can play a major role in making this dream a reality.

    Ms. Shawnna Hoffman

    • The gig economy and freelancing have created a lot of jobs during COVID-19.
    • In Kenya, many people have used the internet to create online stores to sell their products.
    • Remote working has been a great opportunity for many.
    • AI can really transform based on what the person needs.

    Mr. T.B. Dinesh

    • The idea of content access has not progressed.
    • People don’t know how to navigate the internet even if they have access.
    • Technology can be used to make the internet accessible to people with low literacy.
    • Web annotation is a simple process that can be enabled to make content more accessible.
    • Annotations can be audio or visual, such as pictures or video, and can be linked to the textual content to make it more understandable. Anyone can do it (a simple example follows this list).
    • We need to recognise the significance of community-based internet or mesh networks.
    • We are at a point in time where everyone can self-publish.
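
    By way of illustration only (assumed, not presented in the session), the W3C Web Annotation Data Model offers a standard way to attach an audio or pictorial explanation to a passage of text. The sketch below builds such an annotation as a Python dictionary; the URLs are placeholders.

```python
import json

# Hypothetical example following the W3C Web Annotation Data Model
# (https://www.w3.org/TR/annotation-model/): an audio explanation is attached
# to a specific passage of a web page, making it easier to follow for
# low-literacy readers.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "id": "http://example.org/audio/explainer.mp3",    # placeholder audio URL
        "type": "Audio",
        "format": "audio/mpeg",
    },
    "target": {
        "source": "http://example.org/page/community-notice",  # placeholder page URL
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "Applications must be submitted before the end of the month.",
        },
    },
}

print(json.dumps(annotation, indent=2))
```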

     

    The session ended with a call to:

    1) Host the Jobs Summit in mid-2024 and focus on how to create livelihoods for all.

    2) Develop a sustainable model for 'Internet for All'.

    3) Create 'Internet Etiquette'.

     

    IGF 2023 Lightning Talk #85 AI governance and competition: a research presentation

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    Greater data availability and the mainstreaming of AI systems in digital markets might lead to new phenomena of algorithmic collusion.

    ,

    The legal framework is uncertain, and competition law enforcement is unfit for the digital age.

    Calls to Action

    A solution can be found in an ex ante approach: shaping AI governance to include competition law compliance programs.

    Session Report

    # KEY POINTS

    * PhD research on "Competition Law and Enforcement in the era of Big Data and AI", with three main points (technical, legal, enforcement).

    * Technical findings include the potential for AI-based systems to facilitate cartels under specific conditions, such as using machine learning/deep learning, unsupervised learning, and transparent oligopoly/duopolies in the market structure.

    * Legal findings indicate that AI-enabled collusion may not always be considered a cartel and sanctioned, except in cases like the Messenger scenario, where humans used AI to implement a cartel.

    * Enforcement efforts by competition authorities involve various approaches, including AI-based systems for parameter analysis, monitoring and analytics, web scraping, and econometric instruments. Proving collusion can be challenging in some cases.

     

    # FULL REPORT

    # Session

    ## Emilia

    Presents Veronica Piccolo and the format of the session: 15 mins for presentation, 15 mins for Q&A

     

    ## Veronica

    Veronica Piccolo is a lawyer, originally from Italy, and part of the Youth Standing Group (YSG) of the Internet Society. She currently works for the European Commission; the opinions expressed here are her own and not those of the Commission. She has just completed a PhD in Law and Economics at Ca' Foscari University in Venice, titled "Competition Law and Enforcement in the era of Big Data and AI". Using a comparative, multidisciplinary approach, she focuses on the legal and technological aspects.

     

    Her PhD research focused on three main points:

    1. Technical

    2. Legal

    3. Institutional and Enforcement

     

    ## Technical findings

    The question here was: "Are AI based system able to facilitate cartels?"

    These findings are based on an experiment developed by Calvano et al. at the University of Bologna, in which simulated sellers interacted in a market using Q-learning. The key objective was to understand whether two AI algorithms could collude and set the same prices. (A simplified sketch of such Q-learning pricing agents follows the list of conditions below.)

    The findings suggest that, when certain specific conditions are met, algorithms based on the same design can learn to coordinate with each other. The key elements are the following:

     

    * Design: use of machine learning/deep learning; when the same technology is deployed across the market, there is a good chance that the systems will interact with each other.

    * Data: unsupervised learning is more likely to facilitate collusion

    * Market Structure: transparent oligopoly/duopolies, with prices known and interchangeable products.
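
    As a rough, hypothetical illustration of the kind of simulation described (not the Calvano et al. code), the Python sketch below pits two Q-learning sellers against each other on a small price grid; the demand model, grid and learning parameters are all invented. In toy runs of this kind the agents frequently settle on prices above the competitive level without any explicit agreement.

```python
import random

# Toy sketch of algorithmic tacit collusion: two Q-learning sellers repeatedly
# choose a price from a small grid; the cheaper seller serves the whole market,
# and equal prices split it.
PRICES = [1, 2, 3, 4, 5]          # 1 ~ competitive price, 5 ~ monopoly-like price
COST = 1                          # marginal cost
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
EPISODES = 50_000

def profits(p1, p2):
    """Profit for each seller given both prices (market size 10)."""
    if p1 < p2:
        return (p1 - COST) * 10, 0.0
    if p2 < p1:
        return 0.0, (p2 - COST) * 10
    return (p1 - COST) * 5, (p2 - COST) * 5

# State = the pair of prices posted in the previous round.
states = [(a, b) for a in PRICES for b in PRICES]
Q = [{s: [0.0] * len(PRICES) for s in states} for _ in range(2)]

def choose(agent, state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.randrange(len(PRICES))
    values = Q[agent][state]
    return values.index(max(values))

state = (random.choice(PRICES), random.choice(PRICES))
for _ in range(EPISODES):
    a1, a2 = choose(0, state), choose(1, state)
    p1, p2 = PRICES[a1], PRICES[a2]
    r1, r2 = profits(p1, p2)
    nxt = (p1, p2)
    for agent, action, reward in ((0, a1, r1), (1, a2, r2)):
        target = reward + GAMMA * max(Q[agent][nxt])
        Q[agent][state][action] += ALPHA * (target - Q[agent][state][action])
    state = nxt

print("prices in the final round:", state)  # often above the competitive price of 1
```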

     

    Some criticism was levelled at this approach, for example that the experiment would not hold under normal market dynamics. This was countered by Brown and MacKay, who found that over two years the sellers using pricing algorithms set prices above marginal cost.

     

    Also, Ezrachi and Stucke devised four models of collusion:

    * Messenger: a cartel set up by humans, with algorithms implementing it.

    * Predictable Agent: AI used to monitor the market and react swiftly to competitors' price changes.

    * Hub & Spoke: a coordinating hub and coordinated spokes, such as a platform marketplace (the hub) whose sellers' (spokes') prices are coordinated by the hub.

    * Digital Eye: the "cartel of the future", where market players would be able to predict competitors' price changes using AI and adjust accordingly.

     

    ## Legal FIndings

    The question here was: "Can AI-enabled collusions be considered as cartels and sanctioned?"

    The findings suggest no. Besides the hard-to-expose Digital Eye scenario, there are some real cases that can be explored. For the Messenger scenario, the UK competition authority (CMA, in the Trod/GB eye case) determined that a cartel was set up by humans who used AI to implement it. In that case, the CMA sanctioned Trod but not GB eye, because the latter applied for the specific leniency procedure. For the Predictable Agent scenario, there are examples of AI systems used to monitor the market and swiftly change prices. Calvano et al. exposed the dynamics of Q-learning, where the algorithms learn to collude and keep the price above marginal cost, more specifically between the Bertrand-Nash price and the monopoly price. Brown and MacKay bring the case of over-the-counter drugs, where algorithms are used to generate supracompetitive prices through a non-collusive mechanism.

     

    Sometimes it can be very hard to expose this type of cartel. The legal interpretation of what a cartel is does not correspond to what an economist would identify. From an economic perspective, cartel investigations look for very clear evidence, but under legal theory this cannot always be established.

     

    ## Enforcement

    The question here was: "How are competition authorities tackling the issue?"

    The European authorities have equipped themselves to tackle the complexity. The Italian AGCM is using AI-based systems to reverse-engineer the parameters generated by the undertakings' AI. In Poland, the authority is trying to identify terms that do not comply with consumer rights law. In Greece and Spain, authorities are performing augmented market monitoring and analytics to detect when market prices fluctuate too much, thereby signalling the need for an investigation. In the Czech Republic, the authority is using web scraping and econometric instruments to detect manipulation.

    In some cases, competition authorities might not be able to prove collusion with certainty, particularly as companies can provide information to argue the contrary.
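
    Purely as a hypothetical example of the price-monitoring screens mentioned above (not any authority's actual tool), the sketch below flags windows in which two sellers' daily price changes are almost perfectly correlated, something an analyst might treat as a trigger for closer scrutiny. All data and thresholds are made up.

```python
import numpy as np

# Hypothetical screening sketch: flag 30-day windows in which two sellers'
# daily price changes move almost in lockstep, a pattern a market-monitoring
# team might want to investigate further.
rng = np.random.default_rng(0)
trend = 100 + np.cumsum(rng.normal(0, 1, 365))   # shared cost/demand trend
seller_a = trend + rng.normal(0, 0.1, 365)       # both sellers track it closely
seller_b = trend + rng.normal(0, 0.1, 365)

WINDOW, THRESHOLD = 30, 0.95                     # arbitrary screening parameters
flagged = []
for start in range(365 - WINDOW):
    da = np.diff(seller_a[start:start + WINDOW])
    db = np.diff(seller_b[start:start + WINDOW])
    corr = np.corrcoef(da, db)[0, 1]             # correlation of daily price changes
    if corr > THRESHOLD:
        flagged.append(start)

print(f"{len(flagged)} of {365 - WINDOW} windows show near-lockstep pricing")
```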

     

    # OPEN FLOOR

    How can AI governance be shaped to include competition law compliance programs?

     

    ### 1

    Q: As an umbrella association, we are working with other organizations to understand how antitrust law can be applied and communicated to the rest of civil society. Is it possible to adopt a multistakeholder approach to frame the governance of AI?

     

    A: Compliance programs must be based on a multistakeholder model, but there is still nobody representing smaller businesses, for whom compliance is just another cost; the aim should be the lowest possible cost of being compliant.

     

    ### 2 Francisco Livardia - Diplomat from Panama

    Q: What would the relationship be between AI governance and compensation for damages? How can the fair market price be applied when compensating damages?

    A: In Europe we have a weak framework for private enforcement of antitrust law. In the USA and other countries, if you are a damaged competitor, you can go before a judge and ask for compensation. In the EU it is up to public competition authorities to carry out investigations and impose fines accordingly. However, if a cartel is suspected and one of the cartel members applies for leniency, they can disclose the cartel themselves and avoid fines by exposing the other cartel members.

     

    ### 3

    Q: Can we talk about the new EU policies on data sharing? They seem to mandate data sharing, potentially damaging competition.

     

    A: The Data Act is still just a proposal. However, interestingly, we have the Digital Markets Act (DMA). Article 6 says that the gatekeeper has to make data available to other businesses, including possible competitors. The open question here is: does this make it easier to facilitate market collusion?

     

    ### 4

    Q: Competition law is just for big companies. Can we model an AI program as an information theory system and train it to report on how other models are treating data?

     

    A: Compliance programs could benefit from that; however, this would require the datasets to be frozen in order to expand the state and run more and more iterations.

    IGF 2023 Lightning Talk #10 AI in education: the future or a storm in a glass?

    Updated:
    AI & Emerging Technologies
    Key Takeaways:

    1) The most important problem in the near future will be the interaction between human and artificial cognitive systems during complex cognitive processes such as learning, comprehension and decision making. The process of identifying a problem when solving tasks can be performed by both humans and AI: humans understand the emotional components best, while AI is well equipped to collect and process data to identify possible gaps.

    ,

    2) It is important to think through the processes of interaction with AI, without perceiving the technology as a danger. The threat is not in artificial intelligence tools, but in a backward and outdated approach to learning and education.

    Calls to Action

    1) For educational organizations: consider how best to regulate the use of artificial intelligence at the level of internal rules, in order to make the learning process more efficient.

    ,

    2) For governments: when developing AI legislation, do not forget to take into account all aspects of complex AI systems. Restricting the use of AI in education should be avoided, as AI can have a positive impact in this field.

    Session Report

    AI in education: the future or a storm in a glass?

    On October 8, 2023, the Youth Council of the Coordination Center for TLD .RU/.РФ, together with the International Information Security School, held a session titled "AI in education: the future or a storm in a glass?", the main topic of which was the use of artificial intelligence technologies in educational processes in schools and universities.

    Representatives of the Coordination Center Andrey Aleynikov and Pavel Pozdnyakov, as well as members of the Youth Council and the IIB School Arevik Martirosyan and Vera Terekhova highlighted the pros and cons of using artificial intelligence in education, based on the results of current international research, and conducted a survey among the audience about their opinions on the issues raised during the session.

    When discussing the digitization of education, it's essential to bear in mind that global challenges and the rapid pace of technological advancement are reshaping our world. Consequently, they are transforming the learning experience of individuals, necessitating the acquisition of skills that were unimaginable just half a century ago. To adapt to these new challenges, significant reforms in teaching and learning methods are required, accessible to everyone regardless of their background or social affiliation.

    The gap between those who can utilize basic digital technologies, including the internet and artificial intelligence, and those who lack access is a massive concern. While some educators can afford to employ innovative teaching techniques, others are compelled to focus more on fundamental aspects of the teaching process, like sourcing materials to teach their students adequately. The situation is further complicated by the fact that the digital divide exists on multiple fronts: between developed and developing countries, various socio-economic groups within countries, technology owners and users, and even those whose professions are enhanced by artificial intelligence and those whose fields of expertise may be jeopardized by its rise. Therefore, considering the diversity of these issues, we should expect that education will gradually transform within the framework of a complex process, rather than through a single wave of changes. 

    The most important problem in the near future will be the interaction between human and artificial cognitive systems during complex cognitive processes such as learning, comprehension and decision making. The process of identifying a problem when solving tasks can be performed by both humans and AI: humans understand the emotional components best, while AI is well equipped to collect and process data to identify possible gaps.

    However, the use of AI can carry not only advantages. The ability to create high-quality content in almost real time is increasingly being used in the modern world to manipulate public opinion, spread misinformation and form a distorted view of reality.

    Therefore, the use of AI in education should be approached with some caution, but without strict prohibitions. It is necessary to develop regulation.

    It is important to think through the processes of interaction with AI, without perceiving the technology as a danger. The threat is not in artificial intelligence tools, but in a backward and outdated approach to learning and education.

    IGF 2023 Open Forum #22 Jointly Share the Responsibilities in the Digital Era

    Updated:
    Global Digital Governance & Cooperation
    Key Takeaways:

    Today, the phenomenal development of the information technology revolution and the digital economy is transforming the way we produce and live, exerting far-reaching influence over the social and economic development of states, the global governance system and human civilization.

    ,

    It is essential for all parties to develop a deeper understanding of the Internet technologies and digital governance. In addition, we need to explore sound approaches to digital society governance, so as to adapt to the rapidly changing technological landscape.

    Calls to Action

    The IGF is an important platform under the United Nations. All parties need to work together, on the basis of mutual respect and trust, to solve difficult issues, strengthen areas of weakness and improve the rules of digital governance, continually develop a governance landscape featuring multilateral and multi-party participation, and jointly build a community with a shared future in cyberspace.

    ,

    We shall seize the opportunities in the digital era to unleash the potential of digital productivity, enhance mutual trust through dialogues and exchanges to prevent digital security risks, guide multiple parties to actively participate in building a sound digital governance ecosystem, and promote cooperation on digital governance to improve the global digital governance system.

    IGF 2023 Day 0 Event #63 Call for action: Building a hub for effective cybersecurity

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    It is crucial to close the skills gap and to attract more young people and women into cybersecurity. Industry needs people not only with professional competences but also with transversal ones (such as critical thinking, creativity, holistic thinking and teamwork). It is very important to bring the right people together to cooperate.

    Calls to Action
    It is necessary to close the gap between what universities are doing and what industry wants.
    Session Report

    During the session, the concept of the Cybersecurity Hub was presented: an online space where IS3C experts bring together representatives of key stakeholders (tertiary educators, industry, governments, ministries of education and students) with the aim of bridging the skills gap in the cybersecurity field. Moving from theory to practice is the objective of ‘The Cybersecurity Hub’, which is strongly based on the findings and recommendations of the research report “Closing the gap between the needs of the cybersecurity industry and the skills of tertiary education graduates” (available at: https://is3coalition.org/docs/study-report-is3c-cybersecurity-skills-gap/).

    The session had a round-table format. Speakers representing industry, business, government, civil society and tertiary education establishments from six continents expressed their views on, inter alia, the partners to involve in the development of the hub to make it rapidly effective, strategies to bring more women and young people into the cybersecurity workforce, and how to adapt education to better meet the challenges of the digital transformation.

    After the introduction by the moderators, Wout de Natris of IS3C (presentation of IS3C and the goals of the session) and Janice Richardson of Insight (presentation of the main findings and recommendations from the study), the following speakers presented their perspectives, insights and best practices:

    - Maciej Groń, NASK/Poland: the Cyber Science Coalition, the “Partnership for the Cybersecurity” program, the creation of new ISACs, and cyber hygiene training for university students, local governments, the public health sector and VIPs.

    - Julia Piechna, NASK/Poland: the Youth IGF Poland project and engaging tertiary education students and graduates.

    - Anna Rywczyńska, NASK/Poland: formal education from entry levels; challenges and best practices in implementing cybersecurity in the educational system and school curriculum.

    - Deniz Susar, UN: how to cooperate at the international and multi-sector level; good practices from the UN’s perspective.

    - Professor Youki Kadobayashi, NAIST, professor at the Industrial Cyber Security Centre of Excellence/Japan: examples of actions undertaken to bridge the skills gap in the cybersecurity sector.

    - Raúl Echeberría, chair of an industry organisation in Latin America: the level of implementation of cybersecurity policies in Latin American businesses not directly related to IT and cybersecurity (e.g. transport, trade, finance and insurance, health care, the food industry).

    - Mr. Hikohiro Y Lin, PWC Japan: whether tertiary graduates meet the expectations of the private sector, and strategies to produce more specialists who meet business needs.

    - João Moreno Falcão, Vice Chair of the ISOC Youth SG and YouthLAC IGF/Brazil: how to diversify the cybersecurity workforce and encourage more women and young people to enter the sector.

    - Ismaila Jawara, Founder of GamCON Infosec Community, Gambia Revenue Authority: how to diversify the cybersecurity workforce and encourage more women and young people to enter the sector.

    The summary of the session was given by Larry Magid, CEO of ConnectSafely, Columnist at Mercury News and Host of the ConnectSafely Report for CBS News/US.

    Active online participation was also facilitated by opening the floor to the online audience via Mentimeter. The audience (online and onsite) was asked to prioritise the key functions of the hub and to vote for the most important practical steps to launch and build the HUB. The voting ranked the functions of the HUB as follows, in order of assigned priority: 1. promote collaboration between industry, universities and the cybersecurity workforce; 2. enhance cybersecurity skills at all levels of education; 3. gather and scale up good practice from the cybersecurity and tertiary sectors; 4. raise interest in careers in the cybersecurity industry; and 5. provide online training from top experts on emerging topics. According to the session’s participants, defining a strategic plan (goals, objectives and a long-term vision for the hub) is the most important practical step to be prioritised in launching and building the HUB.

    The most important conclusions from the discussion during the panel:

    - It is crucial to close the skills gap in order to attract more young people and women into cybersecurity.

    - Industry needs people not only with professional competences but also transversal ones (such as critical thinking, creativity, holistic thinking and teamwork).

    - It is very important to bring the right people together to cooperate.

    - Educators focus on coding but do not teach young people how things function: what the backbone of the internet is, how cloud security works, and so on. Many graduates also have insufficient knowledge of real-world applications.

    - Companies train their own employees to know today’s products, but not the fundamentals needed to adapt to change.

    - Cybersecurity is important for primary and secondary education. Many think it should be mandatory.

    - It is necessary to close the gap between what universities are doing and what industry wants.

    - Universities teach people how to invent AI but industry needs people who can use AI.

    - It is important to create opportunities in developing countries which offer great human talent potential.

    - Massive scalable solutions are needed.

    - Attacks are moving faster than solutions and human resource allocation also fails to keep up.

    - Traditional teaching should be replaced by modern, inclusive methods that provide space for experimentation and learning through practice, gaining deep knowledge through experience.

    - More opportunities for young people should be offered, for example through networking: places for newcomers to learn from experts, which also leave space for informal information sharing.

    - It is important to encourage mid-career shifts and also to take steps to retain the workforce. Retaining talent also requires action to reduce stressful working conditions.

    - Industry needs cybersecurity people in all fields (factories, farms, etc.).

     

    IGF 2023 Day 0 Event #108 Financing Broadband Networks of the Future to bridge digital

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Public policymakers explained their policies, such as a National Connectivity Plan, subsidies, regulation on competition and spectrum, public-private collaboration, and the establishment of a non-profit company for investment in rural areas.


    A tech company highlighted its investment in network infrastructure, noting that submarine cables in Africa have a significant economic impact. An operator described its experience of investing in fibre networks in Latin America.

    Calls to Action

    As a wish list to policymakers, civil society highlighted the importance of reducing the digital divide through investment and capacity building at the local level. The operator and tech company stressed the importance of affordability and openness, such as access to backbone and spectrum at reasonable prices, open-access network models and access to open data.


    Policymakers mentioned that they need not only to work on the supply side, including supporting the development of and investment in network infrastructure, but also to look at the demand side, such as promoting use cases for high-capacity networks.

    IGF 2023 Lightning Talk #1 Breaking Barriers: Empowering Girls Through the First Female Coding Club in Cambodia, Sisters of Code

    Updated:
    Digital Divides & Inclusion
    Key Takeaways:

    Sisters of Code provides an empowering and practical model for establishing female coding clubs globally to increase female representation in technology.

    Calls to Action

    We encourage all stakeholders - companies, organizations, governments, and individuals - to explore opportunities to establish and support female coding clubs in their communities, inspired by the impactful model of Cambodia's Sisters of Code. Together we can create a more diverse, equitable and innovative tech industry where women and girls have equal opportunities for a better future.

    Session Report

    The lightning talk highlighted the Sisters of Code program in Cambodia, which provides coding education and technology training to girls to increase female representation in the tech industry. The talk explored the empowering impact of this first female coding club in Cambodia and addressed the importance of similar initiatives globally.

    Key takeaways included the practical model Sisters of Code provides for establishing after-school coding clubs for girls and the call to action for stakeholders to support and replicate such empowering programs for women and girls in tech in other countries.

    During the Q&A, attendees asked whether the Sisters of Code model could be successfully created in other countries. The speaker confirmed that the methodology and experience from Cambodia can be shared to set up clubs addressing the gender divide in tech in any enthusiastic community with local support.

    Regarding training on online security, the speaker explained that digital safety is an important part of the Sisters of Code curriculum and is taught from the start.

    Overall, the audience was highly engaged and inspired by the presentation. Attendees gave encouraging feedback and positive comments about the empowering mission of Sisters of Code and its potential for impact.

     


    IGF 2023 Day 0 Event #16 Youth participation: co-creating the Insafe network

    Updated:
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    Joint multi-stakeholder efforts are needed to ensure a safer and better internet for all children and young people globally. Pan-European and global networks, like the Insafe-INHOPE network, are crucial to achieve this goal.


    Online safety also means emotional and physical safety; hence the work of national helplines and hotlines is crucial.

    Calls to Action

    Youth participation and empowerment need to be put into tangible action. Young people need to be given appropriate platforms to voice their opinions when decisions on online safety policies are made or new products/tools are designed.

    Session Report

    The session was organised by the Insafe Coordinator, in collaboration with the Portuguese, Belgian, Maltese, Polish and UK Safer Internet Centres, which shared their best practices on youth participation and on how to engage young people in co-creating and developing new initiatives and resources. Moreover, the different centres explained how they work with young people to develop awareness campaigns that effectively reach this target group, including children in vulnerable situations, and to tackle online trends.

    In today's world, children are vulnerable for many reasons: poverty, disability, mental health problems, abuse or neglect, family breakdown, homelessness, discrimination, and social exclusion, not to mention migration and war zones. Several programs are designed to help social groups from different backgrounds, including those who are vulnerable. While these groups face different challenges, they share a common need for online safety in an increasingly complex social environment.

    However, all children can be considered vulnerable, as they grow up in a world where decisions are made by others, usually adults with a very different perspective, and feel the pressure to adapt to a world where rights are not protected, risks are everywhere, and technological developments in the digital environment are beyond imagination.

    The new European strategy for a Better Internet for Kids (BIK+), adopted by the European Commission in May 2022, aims to strike a delicate balance between digital participation, empowerment and protection of children in the digital environment. BIK+ is an adaptation of the 2012 BIK strategy, following a decade in which technological developments have exceeded all expectations.

    Young people, as digital citizens of the future growing up in a digital environment, deserve to have a say in developments, safeguards and their rights, and to shape the world they will live in. The BIK+ strategy, which was adopted after a long consultation process, aims to put children and young people at the forefront of the decisions that will be made by key stakeholders and industry in the digital environment in the coming years.